Structure of Spinning Particle Suggested by Gravity, Supergravity and Low Energy String Theory

The structure of the spinning particle suggested by the rotating Kerr-Newman (black hole) solution, the super-Kerr-Newman solution, and the Kerr-Sen solution of low-energy string theory is considered. The main peculiarities of the Kerr spinning particle are discussed: a vortex of the twisting principal null congruence, the singular ring, and the Kerr source representing a rotating relativistic disk of Compton size. A few stringy structures can be found in the real and complex Kerr geometry. Low-energy string theory predicts the existence of a heterotic string placed on the sharp boundary of this disk. The recently obtained supergeneralization of the Kerr-Newman solution suggests the existence of an extra axial singular line and fermionic traveling waves concentrating near these singularities. We briefly discuss a possibility of experimental tests of these predictions.

Introduction

The Kerr solution is well known as the field of a rotating black hole. However, for the case of a large angular momentum L, |a| = L/m ≥ m, all the horizons of the Kerr metric are absent and a naked ring-like singularity appears. This naked singularity has many unpleasant manifestations and must be hidden inside a rotating disk-like source. The Kerr solution with |a| ≫ m displays some remarkable features indicating a relation to the structure of spinning elementary particles. In 1969 Carter [1] observed that if the three parameters of the Kerr-Newman solution are adopted to be (ℏ = c = 1) e² ≈ 1/137, m ≈ 10⁻²², a ≈ 10²², L = ma = 1/2, then one obtains a model for the four parameters of the electron: charge, mass, spin and magnetic moment, and the gyromagnetic ratio is automatically the same as that of the Dirac electron (a numerical sketch at the end of this introduction makes the orders of magnitude explicit). Israel [2] introduced a disk-like source for the Kerr field, and Hamity [3] showed that this source represents a rigid relativistic rotator. A model of a "microgeon" with the Kerr metric was suggested [4], together with an analogy between this model and string models [5]. A model of the Kerr-Newman source in the form of an oblate spheroid was then suggested [7]. It was shown that the material of the source must have very exotic properties: null energy density and negative pressure. An attempt to explain these properties on the basis of the volume Casimir effect was given in [6]. The electromagnetic properties of the material are close to those of a superconductor [6,7], which allows one to consider the singular ring of the Kerr source as a closed vortex string like the Nielsen-Olesen and Witten superconducting strings. Since 1992 black holes have attracted the attention of string theory. In 1992 the Kerr solution was generalized by Sen to low-energy string theory [8]. It was shown that black holes can be considered as fundamental string states, and the point of view has appeared that some black holes can be treated as elementary particles [9]. The recently obtained super-Kerr-Newman solution [10,11] represents a natural combination of the Kerr spinning particle and superparticle models; it predicts the existence of an extra axial singularity and fermionic traveling waves on the Kerr-Newman background.
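As a quick sanity check of Carter's parameter matching, the orders of magnitude can be reproduced in a few lines. This is a minimal sketch in Planck units (ℏ = c = G = 1); the electron mass in Planck masses is the standard value, and the rest follows from L = ma = 1/2:

```python
# Order-of-magnitude check of Carter's electron parameters, in Planck units.
alpha = 1 / 137.0   # e^2: the fine-structure constant in these units
m = 4.2e-23         # electron mass in Planck masses (~1e-22, as in the text)
L = 0.5             # spin hbar/2
a = L / m           # Kerr rotation parameter

print(f"a   = {a:.1e}")      # ~1e22, as quoted by Carter
print(f"a/m = {a / m:.1e}")  # >> 1: horizons are absent, the ring is naked
```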
Kerr singular ring

The Kerr string-like singularity appears in rotating black hole solutions in place of the point-like singularity of the non-rotating black hole. A simple solution possessing the Kerr singular ring was obtained by Appel as early as 1887 (!) [12]. It can be considered as a Newtonian or Coulomb analogue of the Kerr solution. When the point-like source of the Coulomb solution f = 1/r is shifted to a complex point of space, the Kerr singular ring arises on the real slice of space-time. The complex equation of the singularity, r̃ = 0, represents a ring as the intersection of a plane and a sphere. The complex radial distance r̃ can be expressed in the oblate spheroidal coordinates r and θ: r̃ = r + ia cos θ. The Kerr singular ring is a branch line of the space into two sheets: a "positive" one covered by r ≥ 0 and a "negative" one, an anti-world, covered by r ≤ 0. The sheets are connected by the disk r = 0 spanned by the singular ring. The physical fields change signs and directions on the "negative" sheet. Truncation of the negative sheet allows one to avoid the two-sheetedness. In this case the fields acquire a shock when crossing the disk, and some material sources have to be spread over the disk surface to satisfy the field equations. The structure of the electromagnetic field near the disk then suggests that the "negative" sheet of space can be considered as a mirror image of the real world in a rotating superconducting mirror.

The source of the Kerr-Newman solution, like the Appel solution, can be considered from the complex point of view as a "particle" propagating along a complex world-line x^i(τ) [13,14] parametrized by a complex time τ. The objects described by complex world-lines occupy an intermediate position between particle and string. Like strings, they form two-dimensional surfaces, or world-sheets, in space-time. It was shown that the complex Kerr source may be considered as a complex hyperbolic string which requires an orbifold-like structure of the world-sheet. This induces a related orbifold-like structure of the Kerr geometry [14], which is closely connected with the above-mentioned two-sheetedness.

Kerr congruence and disk-like source

A second remarkable peculiarity of the Kerr solution is the twisting principal null congruence (PNC), which can be considered as a vortex of null radiation. This vortex propagates via the disk from the negative sheet of space onto the positive one, forming a caustic at the singular ring. The PNC plays a fundamental role in the structure of the Kerr geometry. The Kerr metric can be represented in the Kerr-Schild form g_ik = η_ik + 2h k_i k_k, where η is the metric of an auxiliary Minkowski space and h is a scalar function. The vector field k_i(x) is null, k_i k^i = 0, and tangent to the PNC. The Kerr PNC is geodesic and shear-free [15]. Congruences with such properties are described by the Kerr theorem via a complex function Y(x) representing a projective spinor coordinate, Y(x) = Ψ̄₂/Ψ̄₁, and the null vector field k_i(x) can be expressed in the spinor form k_i dx^i ∼ Ψ̄ σ_i Ψ dx^i.

The above complex representation of the source allows one to obtain the Kerr congruence by a retarded-time construction [13,14]. The complex light cone with vertex at a point x₀ of the complex world line, (x_i − x_{0i})(x^i − x_0^i) = 0, can be split into two families of null planes: "left" and "right". In spinor form, the "right" (or "left") null planes are obtained by keeping Ψ constant and varying Ψ̄, or by keeping Ψ̄ constant and varying Ψ. The rays of the twisting Kerr congruence arise as the real slice of the "left" null planes of the complex light cones emanating from the complex world line [13,14].
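For reference, the formulas just discussed can be collected in one place. The explicit form of the function h below is the standard Kerr-Newman expression in the Kerr-Schild representation; it is quoted as a supplement from the general literature rather than from the text above:

```latex
g_{ik} = \eta_{ik} + 2h\,k_i k_k , \qquad k_i k^i = 0 , \qquad
h = \frac{m r - e^2/2}{r^2 + a^2 \cos^2\theta} , \qquad
\tilde r = r + i a \cos\theta ,
```

where r and θ are the oblate spheroidal coordinates; the singular ring r̃ = 0 corresponds to r = 0, cos θ = 0, where h diverges.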
Replacement of the negative sheet by a disk-like source at the surface r = 0 allows one to avoid the two-sheetedness of the Kerr space. However, there is still a small region of causality violation on the positive sheet. Following the suggestion of López [7], this region has to be covered by the source as well. The minimal value of r covering this region is the 'classical radius' r_e = e²/(2m). The resulting disk-like source has a thickness of order r_e, and its degree of oblateness is α⁻¹ ≈ 137.

Stringy suggestions

In 1974, in the frame of Einstein gravity, the model of a microgeon with the Kerr-Newman metric was considered [4], in which the singular ring was used as a waveguide for wave excitations. It was soon recognized [5] that the singular ring in fact represents a string with traveling waves. Later, in dilaton gravity, string solutions with traveling waves attracted considerable attention. Sen's generalization of the Kerr solution to low-energy string theory with axion and dilaton [8] was analyzed in [16]. It was shown that, in spite of the strong deformation of the metric by the dilaton (leading to a change of the metric from type D to type I), the Kerr PNC survives in the Kerr-Sen solution and retains the properties of being geodesic and shear-free. This means that the Kerr theorem and the above complex representation remain valid for the Kerr-Sen solution. It has also been found that the field of the Kerr-Sen solution near the Kerr singular ring is similar to the field around a fundamental heterotic string, which suggests a stringy interpretation of the Kerr singular ring.

Supergeneralization

A description of the spinning particle based only on bosonic fields cannot be complete. On the other hand, fermionic models of spinning particles and superparticles based on Grassmann coordinates have attracted considerable attention. In [10,11] a natural way to combine the Kerr spinning particle and superparticle models was suggested, leading to a non-trivial super-Kerr-Newman black hole solution. The simplest consistent supergeneralization of Einstein gravity is a unification of the gravitational field g_ik with a spin-3/2 Rarita-Schwinger field ψ_i. There exists, however, a problem of triviality of supergravity solutions: any exact solution of Einstein gravity is also a trivial solution of the supergravity field equations with a zero field ψ_i. Starting from such a solution and using supertranslations, one can easily turn the gravity solution into a form containing the spin-3/2 field ψ_i; however, since this spin-3/2 field can be gauged away by the reverse transformations, such supersolutions have to be considered as trivial.

A hint of how to avoid this triviality problem was given by the complex representation of the Kerr geometry. One notes that, from the complex point of view, the Schwarzschild and Kerr geometries are equivalent and connected by a trivial complex shift; the non-trivial twisting structure of the Kerr geometry arises as a result of the real slice being shifted relative to the source [13,14]. Similarly, it is possible to turn a trivial super black hole solution into a non-trivial one if one finds an analogue of the real slice in superspace. The trivial supershift can be represented as a replacement of the complex world line by a superworldline X_0^i(τ) = x_0^i(τ) − iθσ^i ζ̄ + iζσ^i θ̄, parametrized by the Grassmann coordinates ζ, ζ̄, or as a corresponding coordinate supershift. Assuming that the coordinates x^i before the supershift are the usual c-number coordinates, one sees that the coordinates acquire nilpotent Grassmann contributions after the supertranslations. There thus appears a natural splitting of the space-time coordinates into a c-number 'body' part and a nilpotent part, the so-called 'soul'.
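Schematically, this coordinate splitting merely regroups the supershift written above:

```latex
x^i \;\longrightarrow\;
\underbrace{x^i}_{\text{'body' (c-number)}}
\;+\;
\underbrace{\left(- i\theta\sigma^i\bar\zeta + i\zeta\sigma^i\bar\theta\right)}_{\text{'soul' (nilpotent)}} .
```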
The 'body' subspace of superspace, or B-slice, is the submanifold where the nilpotent part is equal to zero; it is a natural analogue of the real slice in the complex case. Reproducing the real-slice procedure of the Kerr geometry in superspace, one obtains a condition of proportionality between the commuting spinors Ψ̄(x) determining the PNC of the Kerr geometry and the anticommuting spinors θ̄ and ζ̄. As a consequence of the B-slice and super-light-cone constraints, one obtains a submanifold of superspace θ = θ(x), θ̄ = θ̄(x). The initial supergauge freedom is now lost, and there appears a non-linear realization of broken supersymmetry, introduced by Volkov and Akulov [17,18] and considered in N=1 supergravity by Deser and Zumino [19]. It is assumed that this construction is similar to the Higgs mechanism of the usual gauge theories, with ζ_α(x), ζ̄_α̇(x) representing the Goldstone fermion, which can be eaten by an appropriate local supertransformation with a corresponding redefinition of the tetrad and the spin-3/2 field ψ_i. However, the complex character of the supertranslations demands an extension of this scheme to N=2 supergravity. In this way, self-consistent super-Kerr-Newman solutions to broken N=2 supergravity coupled to the Goldstone fermion field were obtained [11]. The solution describes a massless Dirac wave field propagating on the Kerr-Newman background along the Kerr congruence. Besides the Kerr singular ring, the solution contains an extra axial singularity and traveling waves propagating along the ring-like and axial singularities. The 'axial' singularity represents a half-infinite line threading the Kerr singular ring and passing onto the 'negative' sheet of the Kerr geometry. The position and character of the axial singularity depend on the index n of the elementary excitation. The case n = −1/2 is exceptional: there are two 'decreasing' singularities situated symmetrically at θ = 0 and θ = π.

Problem of hard core

The obtained supergeneralization is based on a massless Goldstone field. At the present stage of investigation, our knowledge of the origin of the Goldstone fermion is very incomplete. Analyzing the Wess-Zumino model of super-QED and some other schemes of spontaneously broken supersymmetry [18], one sees that they can lead to massless Goldstone fermions, at least in the region of massless fields outside the black hole horizons. However, for the known parameters of spinning particles the angular momentum is very high relative to the mass parameter, and the black hole horizons disappear. The resulting object is "neither black nor a hole", and the disk-like 'hard core' region considered above is naked. The structure of this region represents a very important and extremely complicated problem. Among the possible field models for describing this region one could mention the Landau-Ginzburg model, super-QED, non-abelian gauge models, Seiberg-Witten theory, as well as the recent ideas on confinement caused by extra dimensions in the bulk/boundary (AdS/CFT correspondence) models [21]. Apparently, this problem is very far from resolution at present, and one of the most difficult points may be reconciling the field model with the rotating disk-like bag of the Kerr geometry.

Suggestions for experimental test

The predicted comparatively large size of the disk-like bag looks like a serious contradiction to the traditional view of the structureless, point-like electron.
However, the virtual photons suggested by QED surrounding the electron in a region of Compton size, the zitterbewegung, and the vacuum zero-point fluctuations spreading the position of the electron can be treated as indirect evidence for the existence of a geometrical structure in the Compton region. At least, one can assume that the region of virtual photons has a tendency to be highly ordered, with the formation of the Kerr congruence and the ring-like singularity. The modern progress in the formation of polarized beams of spinning particles suggests possible methods for an experimental test of the main predicted feature of the Kerr spinning particle: its highly oblate form. In particular, one could use the method proposed in [20], based on an estimation of the cross-section differences between transversely and longitudinally polarized states in proton-proton collisions. One proposes that a similar experiment could be more effective for electron-electron collisions. Another possible experimental test could be the analysis of the diffraction of photons on polarized electrons. Apparently, the strong influence of vacuum fluctuations will not allow one to observe the predicted very high oblateness of electrons. Nevertheless, one expects that an essential effect should be observed if the Kerr source model reflects reality.
MinAtar: An Atari-inspired Testbed for More Efficient Reinforcement Learning Experiments

The Arcade Learning Environment (ALE) is a popular platform for evaluating reinforcement learning agents. Much of the appeal comes from the fact that Atari games are varied, showcase aspects of competency we expect from an intelligent agent, and are not biased towards any particular solution approach. The challenge of the ALE includes 1) the representation learning problem of extracting pertinent information from the raw pixels, and 2) the behavioural learning problem of leveraging complex, delayed associations between actions and rewards. Often, in reinforcement learning research, we care more about the latter, but the representation learning problem adds significant computational expense. In response, we introduce MinAtar, short for miniature Atari, a new evaluation platform that captures the general mechanics of specific Atari games while simplifying certain aspects. In particular, we reduce the representational complexity to focus more on behavioural challenges. MinAtar consists of analogues to five Atari games which play out on a 10x10 grid. MinAtar provides a 10x10xn state representation; the n channels correspond to game-specific objects, such as the ball, paddle and bricks in the game Breakout. While significantly simplified, these domains are still rich enough to allow for interesting behaviours. To demonstrate the challenges posed by these domains, we evaluated a smaller version of the DQN architecture. We also tried variants of DQN without experience replay and without a target network, to assess the impact of those two prominent components in the MinAtar environments. In addition, we evaluated a simpler agent that used actor-critic with eligibility traces, online updating, and no experience replay. We hope that by introducing a set of simplified, Atari-like games we can allow researchers to more efficiently investigate the unique behavioural challenges provided by the ALE.

Motivation

The Arcade Learning Environment (ALE; Bellemare, Naddaf, Veness, & Bowling, 2013) has become widely popular as a testbed for reinforcement learning (RL) and other AI algorithms. An important aspect of the ALE's appeal is that the environments are designed to be interesting for human players, and not to be amenable to any particular approach to AI. Because of this design, the platform is largely free of experimenter bias and provides diverse challenges which we associate with the kind of general intelligence seen in humans. The challenges provided by the ALE can be broadly divided into two aspects: 1) the representation learning problem of extracting pertinent information from the raw pixels, and 2) the behavioural learning problem of leveraging complex, delayed associations between actions and rewards. While it is important to have testbeds that provide this kind of broad-spectrum challenge, it is not always what we want as experimenters. Often, the workflow when evaluating a new RL idea is to first experiment with very simple domains, such as Mountain Car or a tabular MDP, and then jump to complex domains, like those provided by the ALE, to validate the intuition. We believe that this jump tends to leave a wide gap in understanding that would be best filled by domains of intermediate complexity. MinAtar is intended to bridge this gap by providing environments designed to capture the spirit of specific Atari 2600 games, while simplifying certain aspects.
One aspect of the ALE that makes it difficult to use as an RL testbed is that much of the computing power expended to train a deep RL agent goes toward learning a semantically meaningful representation from the raw pixel input. When the first deep RL agents were shown to succeed in the ALE, this was an interesting challenge. Presently, however, this challenge is usually addressed with some variant of convolutional neural network, and often the interesting research questions come not from this visual representation learning problem but from the higher-level behavioural challenges involved in the various games. A major goal of MinAtar is, thus, to reduce the complexity of this representation learning problem while maintaining the mechanics of the original games as much as possible. While our simplification also reduces the behavioural complexity of the games, the MinAtar environments are still rich enough to showcase interesting behaviours, similar to those observed in the ALE. We emphasize that MinAtar is not a challenge problem, like Go, StarCraft (Vinyals et al., 2017) or the ALE when it was first introduced. The purpose is to serve as a more efficient way to validate intuition and provide proof of concept for artificial intelligence ideas, which is closer to how the ALE is often used today.

The MinAtar Platform

Aside from replicating the spirit of a set of Atari 2600 games, the design goals of MinAtar can be broken down as follows:

• Reduce spatial dimension: In MinAtar, each game takes place on a 10x10 grid. This is a significant reduction from the Atari 2600 screen size of 160x210. Often, in the ALE, the input to the learning agent is down-sampled; for example, Mnih et al. (2015) downsample to 84x84. MinAtar provides a much smaller input without the need for this step.

• Reduce action space: In MinAtar, the action space consists of moving in one of the 4 cardinal directions, firing, or no-op, for a total of just 6 actions. In the ALE, on the other hand, it is possible to move in 8 directions or stand still, and for each of these choices the player can also either fire or not fire, for a total of 18 actions.

• Provide semantically meaningful input: Instead of raw color channels, each MinAtar environment provides a number of semantically meaningful channels. For example, for the game Breakout, MinAtar provides channels for the ball, paddle and bricks. The total number of such channels is game-dependent. The state provided to the agent consists of a stack of 10x10 grids, one for each channel, giving a total dimensionality of 10x10xn, where n is the number of channels.

• Reduce partial observability: Many games in the ALE involve some benign form of partial observability. For example, the motion direction of objects is often not discernible from a single frame. Techniques like frame stacking (Mnih et al., 2015) reduce such partial observability. MinAtar mitigates the need for such techniques by making the motion direction of objects discernible within a single frame. Depending on the situation, we convey motion direction either by providing a trail channel indicating the last location of certain objects, or by explicitly providing a channel for each possible direction of motion. We do not aim to eliminate partial observability, but merely to mitigate the more trivial instances.

• Simplify certain game mechanics: Reduction to a 10x10 grid means that some of the more nuanced mechanics of certain Atari 2600 games are difficult or impossible to replicate.
Other mechanics we left out for simplicity. For example, in Space Invaders we do not include the destructible defence bunkers or the mystery ship which periodically crosses the top of the screen. We also limit each game to one life, terminating as soon as the agent dies.

• Add stochasticity: The Atari 2600 is deterministic: each game begins in a unique start state, and the outcome is uniquely determined by the action sequence that follows. This deterministic behaviour can be exploited by simply repeating specific sequences of actions, rather than learning policies that generalize. Machado et al. (2017) address this by adding sticky actions, where the environment repeats the last action with probability 0.25 instead of executing the agent's current action. We incorporate sticky actions in MinAtar, but with a smaller probability of 0.1. This is based on the assumption that individual actions have a larger impact in MinAtar than in the ALE, due to the coarser granularity of the movement discretization; each sticky action can therefore have a potentially larger negative impact. In addition, we make the spawn location of certain entities random. For example, in Seaquest the enemy fish, enemy submarines and divers emerge from random locations on the side of the screen.

So far, we have implemented five games for the MinAtar platform. Visualizations of each of these games are shown in Figure 1. MinAtar is available as an open-source Python library under the terms of the GNU General Public License. The source code is available at: https://github.com/kenjyoung/MinAtar. You can find links to videos of trained DQN agents playing the MinAtar games in the README.

Figure 1: Visualization of each MinAtar game (Seaquest, Breakout, Asterix, Freeway, Space Invaders). Colour indicates the active channel at each spatial location; note that the representation provided by the environment consists of binary values for each channel, not RGB values.

Experiments

We provide results for several variants of two main algorithms on each of the five MinAtar environments. We trained each agent for a total of 5 million frames, compared to 50 million for Mnih et al. (2015) (200 million if counted in terms of emulator frames, since they use frame-skipping). The reduction in the number of frames allowed us to run more repeats without inordinate expense: we were able to train 30 different random seeds per agent-environment combination to obtain results with tighter confidence intervals. These results serve to provide a baseline for future work, as well as to illustrate the challenge posed by these new environments.

Deep Q-Network

Our DQN architecture consisted of a single convolutional layer followed by a fully connected hidden layer. The convolutional layer used 16 3x3 convolutions with stride 1, while the fully connected layer had 128 units; 16 and 128 are one quarter of the sizes of the final convolutional layer and the fully connected hidden layer, respectively, of Mnih et al. (2015). (A minimal sketch of this network is given below.) We also reduced the replay buffer size, target network update frequency, epsilon annealing time and replay buffer fill time, each by a factor of ten relative to Mnih et al. (2015), reasoning that our environments take fewer frames to master than the original Atari games. We trained on every frame and did not employ frame skipping, the reasoning being that each frame of our environments is more information-rich. Other hyperparameters, including the step-size parameter, were set to match Mnih et al. (2015).
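The reduced architecture described above is simple enough to state in a few lines. Here is a minimal PyTorch sketch; the use of unpadded ('valid') convolution and the resulting flatten size are our assumptions, since the paper does not state them:

```python
import torch
import torch.nn as nn

class MinAtarDQN(nn.Module):
    """One 16-filter 3x3 conv (stride 1) + one 128-unit hidden layer."""

    def __init__(self, in_channels: int, num_actions: int):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 16, kernel_size=3, stride=1)
        # A 10x10 input under a 3x3 unpadded conv gives an 8x8 feature map.
        self.fc = nn.Linear(16 * 8 * 8, 128)
        self.head = nn.Linear(128, num_actions)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_channels, 10, 10), the 10x10xn MinAtar state
        h = torch.relu(self.conv(x))
        h = torch.relu(self.fc(h.flatten(start_dim=1)))
        return self.head(h)  # one Q-value per action

# Example: a Breakout-like input with 4 channels and the 6 MinAtar actions
q_values = MinAtarDQN(in_channels=4, num_actions=6)(torch.zeros(1, 4, 10, 10))
```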
We also tested variants of the DQN architecture without experience replay and without a target network, to assess the usefulness of these components in the simpler environments. The smaller architecture and input size meant that running on CPU instead of GPU was feasible: for DQN with experience replay running on Seaquest, the total wall-clock time per frame was around 8 milliseconds on a single CPU, compared to 5 milliseconds on GPU. We report these times for Seaquest because it has the largest number of input channels and thus the largest number of network parameters.

Actor-Critic with Eligibility Traces

We also experimented with an online actor-critic with eligibility traces (AC(λ)) agent (Degris, Pilarski, & Sutton, 2012; Sutton & Barto, 2018). This agent used no experience replay and no multiple parallel actors. We used an architecture similar to the one used in our DQN experiments, except that we replaced the ReLU activation functions with the SiLU and dSiLU activation functions introduced by Elfwing, Uchibe, and Doya (2018), whose work showed these activations to be helpful when using online eligibility traces with nonlinear function approximation. Specifically, we applied SiLU in the convolutional layer and dSiLU in the fully connected hidden layer. We set the step-size to 2⁻⁸, the largest value found to yield stable learning across games by Young, Wang, and Taylor (2018), who used online AC(λ) to train a similar architecture for several ALE games. We tested AC(λ) with trace decay parameters of 0.8 and 0, where the latter corresponds to one-step actor-critic with no eligibility trace.

Discussion

The results of our experiments are shown in Figure 2. The first thing to note is that the MinAtar environments clearly show the advantage of using experience replay in DQN. On the other hand, the target network appeared to have little performance impact. To verify that the poor performance of DQN without experience replay was not due to a poorly tuned step-size parameter, we ran DQN without experience replay with various values of the step-size on Seaquest. We chose Seaquest for the step-size sweep because it showed the largest discrepancy between the results with and without experience replay. Specifically, we tried 30 random seeds with each value of the step-size from the set {α₀ · 2^i | i ∈ {1, 0, ..., −4}}, where the original value was α₀ = 0.00025. We swept primarily lower values, reasoning that the lack of batching would lead to higher variance, potentially requiring a lower step-size relative to DQN with experience replay. For all of these step-size choices, none of the average returns over the final 100 training episodes were above 1.0.

DQN significantly outperformed the simpler AC(λ) agent in 3 of the games; in each of these 3 games, AC(λ) barely learned at all. On the other hand, perhaps surprisingly, AC(λ) significantly outperformed DQN at Space Invaders. The two performed similarly at Breakout, though DQN converged faster to its final performance. To verify that AC(λ)'s poor performance was not due to a poorly tuned step-size, we performed a step-size sweep on Seaquest running AC(0.8). Specifically, we tried 30 random seeds with each value of the step-size from the set {α₀ · 2^i | i ∈ {−1, 0, ..., 4}}, where α₀ = 2⁻⁸ is the original AC(λ) step-size. We swept primarily higher step-size values, reasoning that the initial step-size, chosen for stability on ALE games, was potentially unnecessarily low for some MinAtar games. (Both sweep grids are enumerated in the short snippet below.)
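For concreteness, the two sweep grids just described can be written out explicitly; this is purely an enumeration of the sets above:

```python
# Step-size sweeps used in the two sensitivity checks on Seaquest.
dqn_alpha0 = 0.00025                 # original DQN step-size
dqn_sweep = [dqn_alpha0 * 2**i for i in range(1, -5, -1)]  # i = 1, 0, ..., -4

ac_alpha0 = 2**-8                    # original AC(lambda) step-size
ac_sweep = [ac_alpha0 * 2**i for i in range(-1, 5)]        # i = -1, 0, ..., 4

print(dqn_sweep)  # 0.0005 down to ~1.56e-05 (mostly lower values)
print(ac_sweep)   # ~0.00195 up to 0.0625    (mostly higher values)
```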
With the best step-size, α₀ · 2³, we observed a performance improvement for AC(λ), which achieved a final average return of 4.4 ± 0.6 over the final 100 episodes within 5 million training frames. However, this was still far below the performance of DQN with experience replay. Taken together, these results suggest that the MinAtar environments are effective at highlighting the strengths and weaknesses of different approaches.

Conclusion

We introduced MinAtar, a new evaluation platform for reinforcement learning designed to allow for more efficient experiments by providing simpler versions of Atari 2600 games. These environments aim to reduce the representation learning burden in order to focus on the interesting behavioural aspects of the games, which are often of greater interest to RL experimenters. Currently the platform consists of five environments; in the future, we plan to add more. Of particular interest are the games typically considered hard exploration problems, such as Montezuma's Revenge and Pitfall. It would also be interesting to explore how other DQN additions, such as double DQN, perform in the MinAtar environments.
Bio-Evaluation of the Wound Healing Activity of Artemisia judaica L. as Part of the Plant's Use in Traditional Medicine; Phytochemical, Antioxidant, Anti-Inflammatory, and Antibiofilm Properties of the Plant's Essential Oils

Artemisia judaica (ArJ) is a Mediterranean aromatic plant used traditionally to treat gastrointestinal ailments, skin diseases, and atherosclerosis, and as an immuno-stimulant. This study describes the constituents of ArJ essential oil and investigates their wound healing activity. The in vitro antioxidant and antibiofilm activities of ArJ essential oil were investigated, and the in vivo pro-/anti-inflammatory and oxidative/antioxidant markers were compared with standard silver sulfadiazine (SS) in a second-degree skin burn experimental rat model. Gas chromatography with flame ionization detection (GC-FID) analysis of ArJ essential oil revealed the major classes of compounds to be oxygenated monoterpenes (>57%) and cinnamic acid derivatives (18.03%). Antimicrobial tests of ArJ essential oil revealed that Bacillus cereus, Candida albicans, and Aspergillus niger were the most susceptible test organisms. Two second-degree burns (each 1 inch square) were created on the dorsum of rats using an aluminum cylinder heated to 120 °C for 10 s. The wounds were treated with either ArJ or SS ointments for 21 days, while the negative control remained untreated, and biopsies were obtained for histological and biochemical analysis. The ArJ group demonstrated a significant increase in the antioxidant enzymatic activities of superoxide dismutase (SOD) and catalase (CAT), while lipid peroxide (LP) levels remained insignificant compared to the negative control group. Additionally, the ArJ and SS groups demonstrated a significant decrease in inflammatory levels of tumor necrosis factor α (TNF-α) compared to the negative group, while interleukin 1 beta (IL-1β) and IL-6 were comparable to the negative group. At the same time, the anti-inflammatory markers IL-10 and transforming growth factor beta 1 (TGF-β1) increased significantly in the ArJ group compared to the negative control. The ArJ results demonstrated potent wound healing effects, comparable to SS, attributable to antioxidant and anti-inflammatory effects as well as to a high proportion of oxygenated monoterpenes and cinnamate derivatives.

Introduction

Artemisia is a genus of annual, perennial, and biennial herbs in the Asteraceae (Compositae) family [1]. The plants of the genus Artemisia are frequently used in traditional medicine as remedies for human and animal ailments. For instance, Artemisia species have been used in traditional medicine for respiratory disorders, including coughs and phlegm, as a pain killer, worm-expelling agent, diaphoretic and diuretic agent, and for the treatment of wounds, hypertension, and allergies [2]. In addition, some Artemisia plants are traditionally used to treat seizures, an activity confirmed through in vivo animal experiments [3-5]. Artemisia species have been studied in in vitro and in vivo experiments and in clinical trials evaluating their anticancer, antimalarial, antimicrobial, and antiviral activities [6,7]. Furthermore, several side effects and misuses have also been reported for some of the genus' plants. For instance, A. monosperma leaves are not recommended in pregnancy and are used to induce abortion in Jordan [8]; however, this plant, in addition to other Artemisia plants, e.g., A. vulgaris, has been used in folklore medicine for labor induction [8-10].
Besides abortion, vomiting, diarrhea, headache, pruritus, and rashes have been reported among young children and pregnant women who used A. annua to treat malaria [11]. The biological activities of Artemisia plants have been attributed to the presence of essential oils, sesquiterpene lactones, flavonoids, bitter principles, coumarins, and phenolic acids [1,2,12,13]. Several Artemisia species grow wild or are cultivated for use as medication and as herbal tea preparations in the Mediterranean region [9,14,15]. Artemisia judaica L. (ArJ) is widely grown in the Mediterranean region, including Algeria, Libya, Egypt, Jordan, and Saudi Arabia [16-20]. In Saudi Arabia, ArJ grows in the kingdom's northern region, including the border area of the Hail-Qassim regions [21]. ArJ has been reported for several traditional uses, e.g., healing external wounds and treating snake and scorpion bites [22]. In addition, ArJ is traditionally used to treat gastrointestinal disorders, sexual inability, hyperglycemia, heart diseases, inflammatory disorders, arthritis, cancers [1,20], skin diseases, and atherosclerosis, and to enhance vision and immunity [23,24]. The Bedouins in Egypt (Sinai) and Saudi Arabia also use the plant as a herbal tea in treating GIT disorders [16]. Biologically, ArJ demonstrated antidiabetic, antioxidant, hepatoprotective, and anti-inflammatory activities in experimental animals [22,25,26], due to the properties inherent in the chemical structure of the compounds it contains [27]. The plant also exhibited weak antimicrobial activity against Gram-positive and Gram-negative bacteria [28,29], and in vitro studies reported the potential antioxidant and anticancer activities of the plant extract [28,29].

ArJ chemical analysis revealed the presence of flavonoids, e.g., glycosides and aglycones of apigenin, luteolin, and quercetin [22]. Other natural classes, such as phenolics, triterpenes, bitter principles, and sesquiterpene lactones, i.e., judaicin, have also been reported from the plant [22,30]. Additionally, ArJ is an aromatic plant, and its essential oil constituents have been identified from plant species growing in different areas and climatic regions [18,20,23,24,31]; as is well known, environmental conditions, along with anthropogenic factors, primarily affect the composition of the plant [32]. The overall analysis of the essential oil constituents of ArJ indicated that the monoterpene piperitone is the major chemotypic constituent of the plant across different genotypes [1,24]. In addition, other essential constituents of the plant, such as camphor, ethyl cinnamate, and spathulenol, have also been identified in relatively high concentrations in individual plant genotypes [24]. Environmental conditions and the geographical locations of the growing areas have likewise been reported to affect the major chemotypic constituents of ArJ essential oils; Table 1 lists the major constituents of the plant's essential oils from different locations.

Table 1. Major constituents of ArJ essential oils from plant species growing in different areas (columns: Location, Major Constituents, Yield %, Reference).

The methods used for essential oil production from aromatic plants vary and depend mostly on the nature of the volatile constituents, the amount of the essential oils, and the nature of the plant samples [36].
Thus, distillation procedures are primarily used for plants containing a considerable amount of thermostable volatile constituents, whereas volatile-solvent (e.g., diethyl ether) and non-volatile-solvent (e.g., lard) extraction processes are used for highly delicate aromatic plants which contain heat-sensitive essential oils in small quantities [37]. In addition, modern extraction techniques, such as supercritical CO₂ extraction and microwave-assisted extraction, are used for the industrial-scale production of essential oils, with specific advantages, e.g., time- and quantity-based efficiency and environmentally friendly properties [38-40].

Burn injury traumas are caused by friction, cold, heat, radiation, chemical, or electric sources, but hot liquids, solids, and fire contribute most significantly to burn injuries [41]. In Saudi Arabia, 52% of all burns occur in young children, and males are more prone to burns than females (1.42:1). Burn wounds require immediate attention to avoid hypovolemic shock and sepsis [42]. New approaches and drugs are being researched to facilitate faster burn wound healing [43], thereby minimizing adverse reactions, like allergy or irritation due to topical agents, that lengthen the rehabilitation period [44]. In addition to their general availability, herbal medicines have demonstrated a promising role in wound healing compared to silver sulfadiazine (SS) [45-47]. Nevertheless, modern approaches and methodologies are required to validate claims for herbal compounds [48].

The current study is designed to demonstrate the wound healing properties of ArJ essential oils as part of the plant's use in traditional medicine. A phytochemical analysis of the essential oil of the ArJ species growing in the northern Qassim region of Saudi Arabia was therefore conducted. The study also investigated the antioxidant, antimicrobial, and antibiofilm activities of the plant essential oil as associated analyses related to the wound healing potential of the plant.

Plant Materials and Distillation Procedure

The aerial plant parts were collected in the morning during March 2020 from the northern Qassim region of Saudi Arabia and identified as Artemisia judaica L. by the taxonomists of the Department of Plant Production and Protection, College of Agriculture, Qassim University (Buraydah, Saudi Arabia). A sample of the plant with the registration number #090 was kept at the herbarium of the College of Pharmacy, Qassim University (Buraydah, Saudi Arabia). The plant was dried in the shade at room temperature for ten days before distillation. The plant material (200 g) was reduced to a coarse powder, packed into a 2 L conical flask with a stopper, and thoroughly mixed with 700 mL of distilled water. The flask was connected to a Clevenger apparatus and fixed over a heating mantle, and its contents were boiled continuously for 5 h. The distilled essential oil was collected over anhydrous sodium sulfate and stored in an opaque glass vial in a −20 °C freezer.

GC-FID Analysis of the Essential Oil

A gas chromatograph (Perkin Elmer AutoSystem XL, Waltham, MA, USA) equipped with a flame ionization detector (GC-FID) was used to analyze the essential oil of ArJ. The chromatographic separation of the oil samples was achieved on a fused silica capillary column, ZB5 (60 m × 0.32 mm i.d. × 0.25 µm film thickness).
The oven temperature was held initially at 50 °C and programmed from 50 to 240 °C at a rate of 3 °C/min. The ArJ essential oil sample was dissolved in analytical-grade diethyl ether (2.9 mg of the oil in 100 µL of the solvent), and 1 µL of the mixture was injected with a 1/20 split ratio. Helium was used as the carrier gas at a 1.1 mL/min flow rate. The injector and detector temperatures were 220 and 250 °C, respectively.

Gas Chromatography-Mass Spectrometry Analysis of the Volatile Oil

The GC-MS analysis of ArJ essential oil was conducted using an Agilent 8890 GC system attached to a PAL RTC 120 auto-sampler and equipped with an Agilent 5977B GC/MSD mass spectrometer (Agilent Technologies, Santa Clara, CA, USA). An HP-5 capillary column (30 m, 250 µm i.d., 0.25 µm film thickness) was used to separate the target molecules. The initial column temperature (50 °C, held isothermally for 2 min) was programmed up to 220 °C at a rate of 5 °C/min, then at 10 °C/min up to 280 °C, and held constant at 280 °C for 10 min (isothermal). The injector temperature was 230 °C. Helium was used as the carrier gas at a 1 mL/min flow rate. All mass spectra were recorded under the following conditions: the run time was about 65 min; the transfer line was set at 280 °C; and the ionization source and mass analyzer temperatures were set at 230 and 150 °C, respectively. Diluted samples (1% v/v) were injected in split mode (split ratio 1:15).

Identification of the Essential Oil Constituents

The constituents of the oil were identified on the basis of the experimental retention index (RI), calculated with reference to a series of standard n-alkanes (C8-C40), together with the retention indexes reported for the ArJ essential constituents and those reported for the analysis of different essential oils under similar GC conditions. In addition, the National Institute of Standards and Technology library (NIST-11) and the mass fragmentation patterns of the peaks were used to identify the compounds. The relative percentages of the constituents were calculated from the areas under the peaks in the GC-FID chromatogram.

2.5. Antioxidant Activity of ArJ Essential Oil

2.5.1. Total Antioxidant Capacity (TAC)

The method described by Aroua et al. [49] was followed for this experiment. In brief, sulfuric acid (0.6 M) and ammonium molybdate (4 mM) in sodium phosphate buffer (28 mM) were mixed to prepare the molybdate reagent. Then, 3.6 mL of the molybdate reagent was mixed with 0.4 mL of ArJ essential oil (containing 200 µg of the oil) in a stoppered glass test tube. The tube was vortexed and warmed for 30 min at 90 °C in a water bath. After cooling, the absorbance of the developed blue color was recorded at 695 nm using a spectrophotometer against a blank prepared without the essential oil. The TAC of ArJ essential oil was calculated as Trolox equivalents using a standard calibration curve.

2.5.2. DPPH Scavenging Activity (DPPH-SA)

The method was conducted according to Shimada et al. [50]: 1 mL of the diluted ArJ essential oil (containing 200 µg of the oil in methanol) was mixed with 1 mL of DPPH solution (prepared by dissolving 6 mg of DPPH in 50 mL of methanol). The absorbance of the mixture was measured at 517 nm after standing for 30 min at room temperature in a dark place. The DPPH-SA was calculated as Trolox equivalents from three independent measurements.

2.5.3. Ferric Reducing Antioxidant Power (FRAP)

Minor modifications of the method of Benzie and Strain [51] were used to measure the FRAP of ArJ essential oil.
The FRAP working reagent was freshly prepared by adding TPTZ (2,4,6-tris(2-pyridyl)-s-triazine; 10 mM, prepared in 40 mM HCl) to FeCl₃·6H₂O (20 mM) and acetate buffer (300 mM, pH 3.6) in a ratio of 1:1:10. Then, 2 mL of the FRAP reagent was added to 0.1 mL of the ArJ essential oil (containing 200 µg of the oil), the mixtures were incubated for 30 min at room temperature, and the absorbance was recorded at 593 nm. The procedure was conducted in triplicate, and a FRAP-Trolox calibration curve was used to calculate the activity as mg Trolox equivalents per gram of the essential oil.

Metal Chelating Activity Assay (MCA)

The ability of ArJ essential oil to chelate iron, compared to EDTA, was evaluated using the method of Zengin et al. [52]. Briefly, a mixture of the ArJ essential oil (2 mL of ethanol containing 200 µg of the oil) and ferrous chloride (25 µL, 2 mM) was added to 100 µL of ferrozine to initiate the color reaction. The absorbance of the mixture was recorded at 562 nm against a blank (2 mL of the ArJ essential oil plus 200 µL of the ferrous chloride, without ferrozine). A standard calibration curve of EDTA was prepared, and the chelating activity of the ArJ essential oil was calculated in EDTA equivalents.

2.6. Antimicrobial Activity of ArJ Essential Oil

2.6.1. Preliminary Antimicrobial Activity

The preliminary antimicrobial activity of ArJ essential oil was determined by the disc diffusion method [53]. Modified Mueller-Hinton agar (MMHA) and potato dextrose agar (PDA) were used as test media; MMHA plates were prepared according to the protocol described in the literature [54]. Sterile paper discs (6 mm in diameter) were impregnated with 20 µL of pure ArJ essential oil and then used to evaluate the antimicrobial potential of the oil against the selected human pathogens. Levofloxacin (5 µg/disc) and clotrimazole (50 µg/disc) were used as the antibacterial and antifungal control (C) drugs, respectively. The inoculum of each test organism was prepared in sterile tryptic soy broth (TSB), and the turbidity of each suspension was adjusted to the 0.5 McFarland standard, which corresponds to 1.5 × 10⁸ colony forming units (CFU)/mL for bacteria, 1 × 10⁶ to 5 × 10⁶ CFU/mL for yeast, and 4 × 10⁵ to 5 × 10⁶ CFU/mL for mold. Following that, 100 µL suspensions of each adjusted inoculum were poured individually over the surface of the test agar plates and spread uniformly using sterile swabs. The prepared discs of ArJ and the control drugs were then placed on the inoculated plates. The plates were incubated at 35 °C, for 24 h for bacteria and 48 h for fungi. After incubation, the diameters of the inhibition zones were measured on a millimeter (mm) scale. Each test was performed in triplicate, and the results are expressed as mm ± standard deviation (SD).

Minimum Inhibitory Concentration (MIC) and Minimum Biocidal Concentration (MBC)

MIC was determined by the resazurin-based micro-broth dilution method, while MBC was determined following the standard spot inoculation method [53,55]. The inocula of the test bacteria were prepared in TSB following the Clinical and Laboratory Standards Institute (CLSI) guidelines (https://clsi.org/, accessed on 1 December 2021): the OD600 value was adjusted to 0.08-0.12, resulting in ~1 × 10⁸ CFU/mL. The adjusted inocula were then further diluted 1:100 in TSB, resulting in ~1 × 10⁶ CFU/mL.
In contrast, the inocula of the test fungi were prepared in potato dextrose broth (PDB) following the CLSI guidelines: the OD600 value was adjusted to 0.08-0.12, and the resulting stock suspensions contained 1 × 10⁶ to 5 × 10⁶ CFU/mL for yeast and 4 × 10⁵ to 5 × 10⁶ CFU/mL for mold. A working yeast suspension was prepared by a 1:100 dilution followed by a 1:20 dilution of the stock suspension with PDB, resulting in 5.0 × 10² to 2.5 × 10³ cells/mL, while a working mold suspension was prepared by a 1:50 dilution of the stock suspension with PDB, resulting in 0.8 × 10⁴ to 1 × 10⁵ cells/mL.

The initial stock solution of ArJ essential oil was prepared in DMSO (dimethyl sulfoxide) at a concentration of 200 µL/mL. Each well in column 1 of a 96-well plate was dispensed with 200 µL of the ArJ stock solution, while each well of columns 2 to 10 contained 100 µL of tryptic soy broth (TSB) for the antibacterial evaluation or 100 µL of potato dextrose broth (PDB) for the antifungal assessment. A two-fold serial dilution of ArJ essential oil was made from column 1 to column 10 using a multichannel micropipette, resulting in ArJ concentrations ranging from 200 to 0.39 µL/mL in columns 1 to 10 (the dilution arithmetic is spelled out in the short snippet at the end of this subsection). Column 11 held 200 µL of the standardized inoculum suspensions and served as the negative control (NC), and column 12 held 200 µL of sterile broth and served as the sterility control (SC). Then, 100 µL of the adjusted microbial inocula was dispensed into all the wells of columns 1 to 10, resulting in ~5 × 10⁵ CFU/mL for bacteria, ~2.5 × 10² to 1.25 × 10³ CFU/mL for C. albicans, and 0.4 × 10⁴ to 5 × 10⁴ CFU/mL for A. niger. At this stage, the final concentrations of ArJ essential oil were 100 to 0.195 µL/mL in columns 1 to 10. The time taken to prepare and dispense the OD-adjusted microbial inocula did not exceed 15 min. The inoculated plates were incubated at 35 °C, for 24 h for bacteria and 48 h for fungi. Following incubation, 30 µL of sterile resazurin dye (0.015% w/v) was dispensed into each well of columns 1 to 12, and the plates were re-incubated for 1-2 h to observe the color change. After incubation, the columns with the lowest concentrations showing no color change (the blue resazurin color stayed intact) were scored as the MIC.

MBC was determined by directly plating the contents of the wells with concentrations above the MIC on sterile tryptic soy agar (TSA) plates for bacteria and on potato dextrose agar (PDA) plates for fungi. The contents of the wells which did not change from blue to pink were inoculated on these plates and incubated at 35 °C for 24 h for bacteria and 48 h for fungi. The lowest concentration of ArJ that did not produce isolated colonies of the test organisms on the inoculated agar plates was considered the MBC. The results are recorded in µL/mL.
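The plate arithmetic above is easy to verify; the following short snippet simply enumerates the two-fold dilution series described in the protocol:

```python
# Two-fold serial dilution of the essential-oil stock across columns 1-10,
# followed by a 1:1 dilution when an equal volume of inoculum is added.
stock = 200.0  # µL/mL in column 1 before inoculation
pre_inoculation = [stock / 2**i for i in range(10)]   # columns 1..10
post_inoculation = [c / 2 for c in pre_inoculation]   # after 100 µL inoculum

print(pre_inoculation[0], pre_inoculation[-1])    # 200.0 0.390625  (~0.39)
print(post_inoculation[0], post_inoculation[-1])  # 100.0 0.1953125 (~0.195)
```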
Minimum Biofilm Inhibitory Concentration (MBIC) and Minimum Biofilm Eradication Concentration (MBEC)

MBIC Assay

The MBIC is defined as the lowest concentration of the antimicrobial agent (ArJ) preventing biofilm formation by the tested organism. MBIC was determined for the bacteria only, using a 96-well microtiter plate to evaluate the anti-biofilm activity of ArJ [54]. The inocula of the test organisms were prepared in TSB and adjusted to the 0.5 McFarland standard (1-2 × 10⁸ CFU/mL). An aliquot of 100 µL from each adjusted inoculum was dispensed into each test well of a 96-well plate, and then 100 µL of the different concentrations of ArJ was dispensed into the test wells, giving final concentrations for the MBIC assessment of the MIC, 2 × MIC, and 4 × MIC. Wells containing only 200 µL of TSB served as the blank control (BC), whereas those containing bacterial cultures without ArJ served as the negative control (NC). The plates were incubated in a shaking water bath at 35 °C for 24 h at a shaking speed of 100 rpm. After incubation, the supernatants from each well were decanted gently by inverting the plates on a bed of tissue paper, or removed by a pipette, without disturbing the biofilms. The plates were air-dried for 30 min, stained with 0.1% (w/v) crystal violet at room temperature for 30 min, and then washed three times with distilled water. Subsequently, the crystal violet was solubilized by adding 200 µL of 95% ethanol to each test well. The absorbance was recorded in a microplate reader (xMark™ Microplate Absorbance Spectrophotometer, Bio-Rad, Hercules, CA, USA) at 650 nm. The lowest concentration of ArJ at which the absorbance equaled or fell below that of the negative control was considered the MBIC. Each test was performed in triplicate, the mean of three independent tests was taken, and the results are expressed in µL/mL.

MBEC Assay

The MBEC is defined as the minimum concentration of an antimicrobial agent (ArJ) that eradicates the biofilm of the test organism [54]. A 200 µL (1-2 × 10⁸ CFU/mL) inoculum of each test organism was inoculated into each test well of a flat-bottom 96-well microtiter plate. The plates were incubated at 35 °C for 48 h in a shaking water bath at a shaking speed of 100 rpm to allow biofilm formation. After the biofilms had formed, the contents of the test wells were decanted gently by inverting the plates on a bed of tissue paper, or removed by a pipette, without disturbing the biofilms. The various concentrations of ArJ, i.e., the MIC, 2 × MIC, and 4 × MIC, were added to different test wells (200 µL/well), and the inoculated plates were re-incubated at 35 °C for 24 h. After incubation, the contents of each test well were discarded by inverting the plates on a tissue bed. The plates were air-dried for 30 min, 200 µL of sterile TSB was dispensed into each test well, and 30 µL of 0.015% w/v resazurin dye was added to each test well. The plates were re-incubated for 1-2 h, after which the MBEC was recorded by observing the color change from blue to pink: the column with no color change (the blue resazurin color stayed intact) was scored as the MBEC. Biofilm without ArJ served as the negative control (NC). Each test was performed in triplicate, the mean of three independent tests was taken, and the results are expressed in µL/mL.

Preparation of the Ointment Formulation Loaded with ArJ Essential Oil

An ointment formulation of ArJ essential oil at 5% w/w strength was prepared. The simple ointment base was prepared by the fusion method according to the British Pharmacopoeia 1988 [56]. Briefly, 100 g of simple ointment base was prepared by melting hard paraffin (5 g) in a beaker at 61 °C; the other ingredients, i.e., cetostearyl alcohol (5 g), wool fat (5 g), and soft white paraffin (85 g), were added in descending order of melting point. The homogeneous mixture was removed from the heat and stirred until cold. Then, the 5% w/w ArJ essential oil ointment was prepared by incorporating 5 g of the essential oil into 95 g of the simple ointment base in small portions, mixing with trituration using an ointment mortar and pestle. Finally, the ArJ ointment was transferred to a clean container.
For the control ointment, 50 g of the same base ingredients was prepared and treated in the same way, without the essential oil. The prepared ArJ ointment was physically examined and remained consistent, homogeneous, and stable over the one month measured.

In Vivo Wound Healing Animal Experiment

Twenty healthy 3-month-old Sprague Dawley female rats weighing about 150 ± 50 g were maintained in individual cages at 25 ± 2 °C and 65% humidity on a 12:12 light/dark cycle. The animals were fed a standard chow diet with water ad libitum, and the wound healing study was conducted following the guidelines of the Institutional Animal Ethics Committee (Registration #21-04-06). The animal groups comprised intact, negative control, positive control (1% silver sulfadiazine, SS), and ArJ 5% ointment groups.

Skin Burn Induction Model

Briefly, the animals were anesthetized using xylazine 5 mg/kg and ketamine 50 mg/kg, and the rat's dorsum was shaved with a hair trimmer (GEEPAS®, Guangzhou, China) held at a 45° angle to minimize skin injury during shaving, then disinfected using 70% ethanol. An aluminum cylinder (1 inch square contact face, 86 g weight) was heated in a hot water bath at 120 °C for at least 60 min to ensure thermal equilibrium with the water. The exact temperatures of the cylinder and the water were measured before inducing the burns using a dual-probe thermometer (UT320D Mini Contact Type Thermometer, Dual Channel K/J Thermocouple, UNI-T, Dongguan, China). Second-degree burns of 1 inch square were induced on the rat's shaved dorsum by placing the aluminum cylinder on the dorsum for 10 s, allowing it to rest under its own weight to ensure symmetrical burns across all rats [57-59]. The animals were administered 0.9% normal saline by i.p. injection at 10 mL/kg. The treated groups received ArJ 5% ointment or 1% SS cream applied topically on the wound area twice daily for three weeks.

Biopsy

At the end of the experiment on day 21, the animals were euthanized, and a biopsy measuring 1 × 1 cm was collected from the underlying tissue using scissors and tweezers. One part of the biopsy was fixed in 3.7% formalin for paraffin embedding, while the other part was homogenized and the supernatant isolated and stored at −20 °C for biochemical analysis.

Histological Staining

Tissue was processed within 48 h of collection by dehydration with increasing ethanol percentages before being cleared with xylene and embedded in paraffin wax. Tissue sections of 5 microns were cut using a microtome (MEDIMEAS, Haryana, India), allowing simultaneous sectioning of the epidermis and dermis. Sections were stained with hematoxylin and eosin (H&E) and visualized under a light microscope at 40× magnification. Five fields per section were scored for the amounts of fibroblasts, collagen, inflammation, and neovascularization, and the data were scored from 0 to 4, where 0, 1, 2, 3, and 4 represented normal, low, moderate, high, and very high, respectively, as described previously [60].

Determination of Oxidants and Antioxidants

The catalase (CAT, Serial No. 24IF07D5A0) and superoxide dismutase (SOD, Serial No. 745402C55B) activities and the lipid peroxide (LP, malondialdehyde, Serial No. 1F4346D808) levels were determined in the skin wound tissue homogenates by enzyme-linked immunosorbent assay (ELISA) kits (Cloud Clone Corp., Houston, TX, USA), according to the manufacturer's instructions.
The absorbance was measured at 450 nm by a microplate ELISA reader, and the concentrations were calculated using a standard curve.

Wound Area Measurement

Using a standard camera, images of the skin burns of all the animals were captured on the day of burn induction and at different time points (weeks 1, 2, and 3), while the wound measurement was performed before the treatment and 2 weeks after the treatment using the freely available ImageJ software (version 1.8.1, public domain). Due to hair regrowth on the wound area, wound size could not be measured accurately after 2 weeks.

Statistical Analysis

Data were expressed as the mean ± standard error of the mean (SEM) (n = 5). Differences between groups were analyzed using one-way ANOVA, except for wound area measurement at different time points, which was analyzed using two-way ANOVA, followed by a post hoc test using Tukey's multi-group comparison on GraphPad Prism 8.0.2 (GraphPad Software, San Diego, CA, USA). The data were considered significant if p < 0.05 [61]. The superscripts (A–C) describing significance among the groups in the tables were obtained using Minitab 19.1 (Minitab LLC, State College, PA, USA).

Essential Oil Constituents of A. judaica

Several parameters have been reported as influencing factors affecting essential oil production, constituents, and quality; these include the maturity stage of the plant, the oil extraction process, and the drying methods applied to the aromatic plant samples, as well as the environmental conditions where the aromatic plant grows [62–65]. The essential oil of wild ArJ growing in the Northern Qassim region of Saudi Arabia was isolated by the hydro-distillation technique using a Clevenger apparatus from the shade-dried aerial parts of the plant. Three independent distillation experiments were used to calculate an essential oil yield of 1.71 ± 0.3% w/w of the dried plant aerial parts. This yield was higher than that reported for the cultivated species of the plant growing in Saudi Arabia (0.18% v/w) [24], indicating the higher capacity of the wild species of ArJ to biosynthesize essential oils. In addition, the nature of the plant sample, i.e., fresh or dried, and the conditions of the drying process could be factors affecting the oil yield: the reported yield (0.18% w/w) was calculated for fresh plant samples [24], whereas the current yield (1.71 ± 0.3% w/w) resulted from the distillation of a ten-day dried plant sample, which is consistent with the reported yields of essential oil from dried samples of aromatic plants, i.e., rosemary and sage [66,67]. Moreover, the current essential oil recovery (1.71 ± 0.3% w/w) was nearly identical to that recorded for the wild species of ArJ growing in the Southern region of Jordan (1.62%) [20]. The oil samples obtained from each distillation experiment were independently subjected to GC-FID analysis (Supplementary file, Figure S1). The results in Table 1 show the mean relative percentage of the individual compounds plus standard deviations obtained from the three GC-FID runs. Kovats retention indices were calculated against the C8–C40 series of n-alkanes analyzed under identical experimental conditions, and reported retention indices were also used to identify the ArJ essential oil constituents.
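The retention-index identification mentioned above can be made concrete. The linear (van den Dool–Kratz) form of the retention index, commonly used for temperature-programmed GC, interpolates an analyte's retention time between the bracketing n-alkanes; whether the isothermal Kovats or this linear variant was used is not stated here, and the alkane retention times below are hypothetical:

def retention_index(t_x, alkanes):
    """Linear retention index; alkanes maps carbon number to retention time (min)."""
    ns = sorted(alkanes)
    for n_lo, n_hi in zip(ns, ns[1:]):
        if alkanes[n_lo] <= t_x <= alkanes[n_hi]:
            frac = (t_x - alkanes[n_lo]) / (alkanes[n_hi] - alkanes[n_lo])
            return 100.0 * (n_lo + (n_hi - n_lo) * frac)
    raise ValueError("retention time outside the calibrated alkane window")

alkane_times = {10: 7.4, 11: 9.1, 12: 10.9}          # hypothetical C10-C12 times
print(round(retention_index(9.8, alkane_times)))     # -> 1139 for a peak between C11 and C12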
The results shown in Table 2 indicated that oxygenated monoterpenes represented ≈57% of the plant's essential oil constituents, the largest share among all essential oil classes. This high percentage of oxygenated monoterpenes was attributed to the presence of piperitone at a high concentration (31.99% of the total essential oil of the plant). In addition, other oxygenated monoterpenes, e.g., terpinene-4-ol, α-thujone, β-thujone, 1,8-cineole, camphor, and linalool, were present at relatively high concentrations of 6.42, 5.94, 3.61, 2.56, 1.92, and 1.21%, respectively, with a total percentage of 21.66%. The concentration of piperitone (31.99%) and the total oxygenated monoterpene concentration (57%) among the total essential oil constituents (Figure 1) were consistent with the reported chemotypic properties of the plant [24]: piperitone contents of 30–70% have been found in the essential oil of ArJ growing wild in different regions of the Mediterranean countries, such as Egypt, Algeria, and Jordan [20,31,68,69]. Besides the monoterpenes, GC-FID analysis of ArJ also showed a comparatively high percentage of cinnamic acid derivatives (18.03%), represented by the presence of three essential constituents, i.e., (E)-methyl cinnamate (0.35%), cis-ethyl cinnamate (4.02%), and trans-ethyl cinnamate (13.67%). Notably, ethyl cinnamate has been reported as one of the major chemotypes of the plant [24]. Monoterpene hydrocarbons, sesquiterpene hydrocarbons, oxygenated sesquiterpenes, and phenolic essential oils were also represented in the essential oil to a lesser extent, with 5.74, 10.88, 4.66, and 1.87%, respectively (Table 2).

In Vitro Antioxidant Activity of the ArJ Essential Oil

Antioxidants are promising therapeutic agents in wound healing [70]. Most of the reported plants with wound healing activity possess noticeable antioxidant potency, which has been examined by different in vitro and in vivo assays [71]. The essential oils obtained from several plants of the genus Artemisia, e.g., A. diffusa and A. herba-alba, have exhibited potential free radical scavenging, reducing, and metal-chelating properties [72,73]. The TAC, DPPH-SA, FRAP, and MCA measurements were conducted quantitatively for the essential oil of ArJ. The plant's essential oil reduced molybdate(VI) ions to molybdenum(V) in the TAC assay at a level of 59.32 mg of Trolox equivalents per gram of the essential oil. Moreover, the ArJ essential oil exhibited notable reducing characteristics towards the ferric ion, measured by the FRAP assay (22.34 mg of Trolox equivalents per gram of the essential oil). This reducing capacity in the TAC and FRAP assays contributes to the overall activity of this oil as an antioxidant agent [74].
The noticeable reducing characteristic of the ArJ essential oil could be attributed to the presence of camphor (1.92%), ethyl cinnamate (4.02%), and piperitone (31.99%) at relatively high concentrations [75–77]. The results also revealed that the essential oil of ArJ can chelate iron at 26.99 mg of EDTA equivalents per gram of essential oil, which is consistent with the reported ferrous ion-chelating activity of ArJ [35]. As iron has a primary role in the Fenton reaction, which converts the oxidizing agent hydrogen peroxide (H2O2) into the more reactive hydroxyl radical (HO·), iron-chelating agents such as the ArJ essential oil interfere with the progression and exaggeration of oxidative stress [78]. Furthermore, the scavenging activity of ArJ essential oil has been reported [35,69]. In the current study, the essential oil of ArJ also exhibited scavenging activity against the stable free radical DPPH, measured as 10.70 mg of Trolox equivalents per gram of the essential oil. The antioxidant activity of the plant's essential oil also seems attributable to the presence of considerable percentages of oxygenated monoterpenes, cinnamate derivatives, and phenolics in the essential oil, all of which are known for their antioxidant activity [79–81]. The overall results obtained from the quantitative in vitro antioxidant assays confirmed the antioxidant activity of the ArJ essential oil and supported the association between the wound healing potential of the plant and its antioxidant activity.

Preliminary Antimicrobial Activity

The results of the preliminary antibacterial testing demonstrated that all the tested organisms, including Gram-positive and Gram-negative bacteria, are susceptible to ArJ essential oil, except Pseudomonas aeruginosa ATCC 9027, which showed resistance at the given concentration of ArJ essential oil, i.e., 20 µL/disc (Figure 2 and Table 3). The results further demonstrated that Bacillus cereus is a highly susceptible test organism, with an inhibition zone of 12.9 ± 0.10 mm at the given concentration of ArJ essential oil. In contrast, the lowest antibacterial activity was observed against Klebsiella pneumoniae and Shigella flexneri, each with an inhibition zone of 6.2 ± 0.10 mm in diameter (Figure 2 and Table 3). Additionally, the findings indicated that the mean zone of inhibition for Gram-positive bacteria ranges from 7.2 to 12.9 mm, while for Gram-negative bacteria it ranges from 6.2 to 10.0 mm, indicating that Gram-positive bacteria are more susceptible than Gram-negative bacteria to a given dose of ArJ essential oil. The results for preliminary antifungal activity indicate that both tested fungal strains are highly susceptible to ArJ essential oil. The highest antifungal activity was observed against Candida albicans, with an inhibition zone of 25.2 ± 0.20 mm, while Aspergillus niger had an inhibition zone of 15.0 ± 0.20 mm at the given concentration of ArJ essential oil. The control antibiotics inhibited the growth of all the tested organisms at the given concentrations, i.e., 5 µg/disc for levofloxacin and 50 µg/disc for clotrimazole, respectively (Figure 2 and Table 3).
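Returning briefly to the antioxidant assays: the TAC, FRAP, and DPPH values above are expressed as Trolox equivalents per gram of oil, which in practice come from a calibration curve of the standard. A minimal sketch of that conversion, assuming a linear Trolox calibration; all calibration points and sample numbers below are hypothetical:

import numpy as np

trolox_conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0])   # Trolox standards, ug/mL
absorbance  = np.array([0.02, 0.11, 0.21, 0.40, 0.79])    # corresponding assay readings

slope, intercept = np.polyfit(trolox_conc, absorbance, 1)  # fit A = m*c + b

def trolox_equivalents(sample_abs, oil_mg_per_ml):
    """Return mg Trolox equivalents per g of oil present in the reaction mix."""
    conc_ug_per_ml = (sample_abs - intercept) / slope       # invert the calibration
    return conc_ug_per_ml / oil_mg_per_ml                   # (ug/mL)/(mg/mL) = mg/g

print(round(trolox_equivalents(sample_abs=0.33, oil_mg_per_ml=1.4), 1))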
Minimum Inhibitory Concentration (MIC), Minimum Biocidal Concentration (MBC), Minimum Biofilm Inhibitory Concentration (MBIC), and Minimum Biofilm Eradication Concentration (MBEC)

The MIC and MBC results for the tested bacteria revealed that the MIC values ranged from 6.25 to 100 µL/mL, while MBC values ranged from 12.5 to >100 µL/mL (Table 4). The MIC and MBC results for the tested fungi demonstrated that Candida albicans had MIC and MBC values of 3.125 µL/mL and 6.25 µL/mL, respectively, whereas Aspergillus niger had values of 6.25 µL/mL and 12.5 µL/mL, respectively. The MBIC and MBEC results revealed that the MBIC values for the tested bacteria ranged from 6.25 to 100 µL/mL, whereas the MBEC values ranged from 12.5 to 200 µL/mL (Table 4). Our findings for the antimicrobial activity of ArJ essential oil are consistent with previously published data [17,24,82–85]. Benmansour et al. demonstrated that ArJ essential oil had an excellent inhibitory effect against the tested MRSA (methicillin-resistant Staphylococcus aureus), S. aureus, and B. subtilis strains [17], which is consistent with our results. Benderradji et al. showed that petroleum ether and ethyl acetate extracts of A. sahariensis leaves had the highest inhibitory activity against most tested strains; the largest reported inhibition zone was obtained with the chloroform extract of the plant against Pseudomonas [82]. These findings partially corroborate our results, since ArJ essential oil could not kill Pseudomonas, which might be a consequence of the differing phytochemical contents of the essential oil and the extracts, as well as species variations. Elazzouzia et al. demonstrated that the essential oil of A. ifranensis had highly potent antibacterial activity against the tested S. aureus [55], which is, again, consistent with our results. Kazemi et al. demonstrated that the essential oil of the aerial parts of A. kermanensis had highly potent antibacterial activity against B. subtilis, P. aeruginosa, and S. aureus, which is partially consistent with our results [85]. Al-Wahaibi et al. demonstrated that essential oils derived from A. judaica and A. herba-alba had potent antimicrobial potential against the tested organisms, including Aspergillus fumigatus, Syncephalastrum racemosum, Geotrichum candidum, Candida albicans, Streptococcus pneumoniae, Bacillus subtilis, and Escherichia coli, except Pseudomonas aeruginosa; these results are consistent with ours [24]. The results of our study indicate that ArJ essential oil has highly potent antimicrobial activity, suggesting that it could be a promising antimicrobial drug candidate for various human infections, e.g., wound infections, boils, and acne, caused by various life-threatening pathogens, including bacteria and fungi. These results encouraged us to conduct wound healing testing in an animal model to verify the antimicrobial properties of ArJ essential oil in vivo.

In Vivo Skin Burn Wound Healing

In the current study, second-degree burns were induced on female rats based on a recent publication that observed significant wound healing in second- and third-degree wounds [57]. Three-month-old female rats were chosen because of their quicker wound healing and greater wound contraction ability compared to males [86].

Morphological Appearance and Histological Analysis of the Wounds

The observation of wounds over three weeks of treatment revealed significant progression in the healing process among the treated groups.
At the time of burn induction, the skin burns produced were whitish in color and round in shape. After one week of treatment, a crust developed on the wound along with the disappearance of edema in the treated groups (ArJ and SS groups). The wound area started decreasing by the second week of treatment; however, edema formation was still prominent in the untreated burn area. At the end of 3 weeks, the treated wound areas of both the ArJ and SS groups demonstrated recovery, while the untreated zone had not recovered completely (Figure 3). Wound areas were measured using the freely available ImageJ software. No significant differences were observed among the groups after the induction of the skin burns. Two weeks after the treatment, the ArJ group's wound area had decreased significantly (p = 0.04), while the change in the SS group's wound size was insignificant compared to the negative control group (Supplementary file, Figure S2). Wound healing is a complex process that restores injured tissue to its original state [87]; it involves hemostasis, inflammation, proliferation, and remodeling [88] to prevent complex metabolic alterations affecting body organ systems. During hemostasis, blood coagulation occurs, while the inflammation process ensures safety from invasive pathogens [88], thereby facilitating the proliferation step [89] towards remodeling and tissue maturation [89,90]. During a skin burn, cells and tissues are damaged substantially, involving a more complicated healing network than an incision wound [87]. Based on their depth, burn wounds are categorized as first-, second-, and third-degree burns. A first-degree burn is generally red or gray, without blisters and with a normal capillary network, while in a second-degree burn, blisters and partial-thickness damage to the dermis are observed. Second- and third-degree burns are treated similarly [91]. In the current study, second-degree skin burn wounds were induced, which healed over 3 weeks in the ArJ and SS groups. H&E staining was performed for all the animal groups (Figure 4). The H&E staining demonstrated epidermis integrity and the degree of neutrophilic infiltration in the dermis and capillaries of the ArJ and SS groups compared to the untreated wound zone. The visual appearance of the ArJ-treated wound was not enlarged, most probably due to the balm effect of the paraffin base (Figure 4B).
Tissue sections were analyzed qualitatively for the amount of fibroblasts, collagen, inflammation, and neovascularization in the SS and ArJ groups. The data demonstrated increased collagen, fibroblasts, and neovascularization, with decreased inflammation, in the SS and ArJ groups compared with the negative control group (Supplementary file, Figure S3).

Role of Antioxidants and Oxidative Stress Markers in Wound Healing

The ArJ ointment group demonstrated significantly increased antioxidant SOD (p = 0.03) and CAT (p < 0.01) enzymatic activities compared to the negative control group. The SOD and CAT activities were comparable in the intact and negative control groups. The SS-treated group demonstrated a significant difference in CAT activity compared to the negative control group (p = 0.01) and was comparable to the ArJ group, while the differences were insignificant for SOD activity. The contribution of antioxidant activity to wound healing is in accordance with previous studies reporting enhanced wound healing due to potent antioxidant activities [92]. LP increased significantly in the negative control group (p < 0.0001) compared to the intact group, which accords with previously recorded data [93]. No significant differences in LP were observed between either the ArJ or SS treatment group and the negative control, while the ArJ- and SS-treated groups exhibited significant increases (p < 0.0001) compared to the intact group (Table 5; Supplementary file, Tables S1–S3). Statistical significance was assessed using one-way ANOVA, followed by a post hoc test, on GraphPad Prism 8.0.2. CAT = catalase, LP = lipid peroxide, SOD = superoxide dismutase. Mean values that do not share a superscript letter (A,B) in the respective columns of superoxide dismutase (SOD), catalase (CAT), and lipid peroxide (LP) are significantly different (p < 0.05) using Tukey's multi-group comparisons.

Role of Pro- and Anti-Inflammatory Markers in Wound Healing

The values of the pro-inflammatory markers IL-1β and IL-6 were comparable among all the studied groups, similar to previously published articles [94,95]. At the same time, tumor necrosis factor α (TNF-α) increased significantly in the negative control group compared to the intact group (p < 0.0001). TNF-α values decreased significantly after treatment with SS (p < 0.02) and ArJ (p < 0.002) compared to the negative control group. TNF-α, an inflammatory cytokine released by macrophages/monocytes during acute inflammation, plays a vital role in cell signaling, leading to necrosis or apoptosis. TNF-α participates in vasodilatation, edema formation, and leukocyte adhesion to the endothelium through the expression of adhesion molecules. Furthermore, TNF-α regulates blood coagulation, contributes to oxidative stress at sites of inflammation, and indirectly induces fever. The data conform with Gushiken and Pereira, where TNF-α decreased after two weeks of treatment in skin wound tissue compared to the negative control [94,95].
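The superscript letters used in Tables 5 and 6 summarize one-way ANOVA followed by Tukey's multi-group comparison. A minimal sketch of that workflow with made-up homogenate readings (n = 5 per group), not the study's data:

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "intact":   [4.1, 4.3, 3.9, 4.2, 4.0],
    "negative": [2.6, 2.4, 2.8, 2.5, 2.7],
    "SS":       [3.6, 3.4, 3.7, 3.5, 3.8],
    "ArJ":      [3.9, 4.0, 3.8, 4.1, 3.7],
}

f_stat, p_value = f_oneway(*groups.values())          # omnibus test across groups
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:                                    # post hoc only if omnibus is significant
    values = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))

Groups that Tukey's test does not separate share a letter in the table; groups it does separate receive different letters.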
The anti-inflammatory, pro-angiogenic markers IL-10 and transforming growth factor beta 1 (TGF-β1) increased significantly in both the ArJ (p < 0.0001 and p < 0.0001) and SS (p < 0.0001 and p < 0.0001) groups compared to the negative and intact control groups. However, the differences in IL-10 and TGF-β1 levels between the negative control group and the intact group were insignificant (Table 6; Supplementary file, Tables S4–S8). Statistical significance was assessed using one-way ANOVA, followed by a post hoc test, on GraphPad Prism 8.0.2. Mean values that do not share a superscript letter (A–C) in the respective columns of interleukin-1β (IL-1β), IL-6, IL-10, transforming growth factor beta 1 (TGF-β1), and tumor necrosis factor α (TNF-α) are significantly different (p < 0.05) using Tukey's multi-group comparisons. The data confirm previous studies where IL-10 increased significantly in skin wound healing tissue after two weeks of treatment compared to the negative control group [94,95]. Various plant and herbal products are economically cheap to procure and demonstrate modest therapeutic potency with minimal toxicity relative to synthetic drugs [45–47,96]. Eupolin ointment, derived from an aqueous extract of C. odorata leaves, is the first Vietnam FDA-approved product [59]. Although its wound healing mechanism remains unclear, it enhanced blood flow, decreased the inflammatory response, and reduced infection rates, all of which are contributing factors to angiogenesis. Rats are loose-skinned, in contrast to tight human skin, and their wounds contract more quickly than they epithelialize; as such, rat wound healing, although similar, is not entirely equivalent to wound healing in human skin [97,98]. However, rats are widely used animal models due to their genetic and behavioral similarity, ease of handling, and economic viability. Thus, the rat skin burn model serves as a vital knowledge resource. Mortality has been one of the major concerns in patients with deep burns due to infections, and researchers have attempted to minimize wound infection risk and accelerate the healing process [58]. Topical antimicrobial ointments such as SS 1%, which has low toxicity and a potent antibacterial effect, are commonly employed in burn wound therapy management [99–101]. Increased antioxidant levels have demonstrated wound healing potency [102] by protecting tissue from oxidative stress [103,104]. In the current study, ArJ demonstrated antioxidant (augmented SOD and CAT) enzymatic activities, validating the role of antioxidant enzymes, in addition to potentiating wound healing through increased anti-inflammatory marker levels. TGF-β1 is involved in wound healing, angiogenesis, immune regulation, and cancer. On the other hand, TGF-β1, along with the inflammatory marker IL-6, supports T helper 17 (Th17) differentiation, aggravating inflammation [105]. In our study, the levels of TGF-β1 increased significantly, while no statistical difference was observed for IL-1β and IL-6 in the ArJ group compared to the control group. Moreover, IL-10 is a key regulator of the immune system, limiting the inflammatory response that could otherwise cause tissue damage. In one study, IL-10 knockout mice were found to be prone to colitis [106,107], and blocking of IL-10 resulted in severe pathology. On the contrary, increased IL-10 levels can cause chronic infection, and IL-10 blocking paved the way for pathogen clearance [108].
Mucosal secretion of IL-10 and TNF-α was augmented during wound healing, demonstrating the protective effect of IL-10 against inflammation. In contrast, treatment with P. pinnata increased the serum IL-10 concentration while downregulating TNF-α and IL-6 [45]. Our study showed a significant increase in IL-10 concomitant with a significant decrease in TNF-α in the ArJ group compared to the negative control group, thus confirming the potential role of IL-10 in promoting wound healing. The current findings for the in vivo and in vitro antioxidant activity and the anti-inflammatory effect of ArJ could be attributed to the presence of a high percentage of oxygenated monoterpenes (57.2%) in the plant's essential oil [109]. Oxygenated monoterpenes have been found as major constituents in plants used traditionally to accelerate wound healing; the essential oil of Helichrysum italicum, for example, has exhibited a primary role in potential antimicrobial activity and anti-inflammatory effects [110,111]. Furthermore, some of the major ArJ essential oil constituents reported as antioxidant and anti-inflammatory agents, e.g., thujone (both α- and β-thujone, 9.55%), 1,8-cineole (2.56%), camphor (1.92%), and borneol (0.47%), have chiefly contributed to the anti-inflammatory effect of rosemary [112]. The antioxidant and anti-inflammatory effects of 1,8-cineole have also been reported, and its inhibition of the inflammatory markers TNF-α, IL-6, IL-8, LTB4, PGE2, and IL-1β, as well as its down-regulation of the 5-lipoxygenase (LOX) and cyclooxygenase (COX) pathways, are well documented [113,114]. Moreover, methyl cinnamate, a major constituent of ArJ essential oil (4.02%), has demonstrated potent anti-inflammatory activity [115]. All these compounds participate in the antioxidant and anti-inflammatory effects of the ArJ essential oil as well as in its wound healing activity. However, other constituents identified in this article could also play a role in the demonstrated plant activities.

Conclusions

The phytochemical analysis of the ArJ essential oil revealed the dominance of highly active antioxidant volatile compounds, namely oxygenated monoterpenes and cinnamic acid derivatives, among the essential oil constituents of the plant. These classes of compounds were reflected in the potential in vitro and in vivo antioxidant activity of the ArJ essential oil. In the current study, wound treatment with ArJ also demonstrated significantly increased SOD and CAT enzymatic activities, with insignificant LP levels compared to the negative control group. In addition, ArJ reduced the pro-inflammatory marker TNF-α and augmented the pro-angiogenic/anti-inflammatory markers TGF-β1 and IL-10. The antimicrobial and antibiofilm potential of ArJ essential oil against Bacillus cereus, Candida albicans, and Aspergillus niger confirmed in this study supports the effectiveness of ArJ essential oil as a wound healing candidate. These results validate the curative role of ArJ in the treatment of skin wounds, which is attributed to its antioxidant and anti-inflammatory effects, as well as its high proportion of oxygenated monoterpenes and cinnamate derivatives.
Insider Trading: Law of the Republic of Indonesia Number 8 of 1995 on Capital Market from Typewriters to Digital Era

Introduction to the Problem: Symmetric information is an essential factor in the capital market, as it creates an efficient capital market. Insider trading is one of the causes of asymmetric information. The regulations on the capital market determine the criteria for insider trading: insiders are people who hold non-public information on a company and earn financial benefits from that information. Purpose/Objective of the Study: This research aims to determine the insider trading criteria in the Indonesian Capital Market Law Number 8 of 1995. Design/Methodology/Approach: This research uses the normative juridical method. The study utilizes several cases that occurred in different countries as a discussion. Findings: This research concludes that the definition of insider trading covers stakeholders who have interests in, and non-public information on, public companies. The scope of insider trading also extends to family members of stakeholders. Stakeholders include management, employees of related companies, officials, suppliers, shareholders, and their family members. Family members are defined as the spouse, children, and parents. The definition of insider trading should be extended relative to the current regulations, and the related individuals must carry out the obligation to report share ownership.

Introduction

President of the Republic of Indonesia, Joko Widodo, announced that the COVID-19 vaccine is free for all Indonesians (Syakriah, 2020). The Indonesian government has several pharmaceutical companies involved in manufacturing and distributing the COVID-19 vaccine (Sholichah & Johan, 2022). Kimia Farma is a state-owned pharmaceutical company and a publicly listed company on the Indonesia Stock Exchange. Kaesang, a son of the President, tweeted the company's stock code shortly before the free vaccine plan was announced. Are Kaesang's tweets coincidences, or was something known beforehand? Do Kaesang's actions meet the criteria of insider information? (Detik Finance, 2020). Carlyle Group, a private equity firm, fired an executive, Rajiv Louis, for insider trading. The Monetary Authority of Singapore (MAS) fined the executive USD 316,000 for an insider trading transaction in the shares of Bank Danamon, Indonesia (Johan & Ariawan, 2021a). Louis benefited from non-public information on the proposed acquisition of Bank Danamon by the Development Bank of Singapore (DBS) Group, Singapore (Indopremier Sekuritas, 2015). Louis used his wife's account to make the transactions in Bank Danamon, Indonesia. Shinsuke Tsuoka, a former head of the Japanese rental storage company Palma, engaged in insider trading with his friend. They bought shares in Palma before the announcement that the Japan Post Group company would acquire a 20 percent stake in Palma. Tsuoka gave this information to his friend Hirayama, who also bought shares of Palma. Authorities imposed criminal charges on them (Agustina, 2019). The monetary authorities in Singapore and Japan could prove that both transactions were insider trading: the insiders carried out transactions on their own behalf or through someone close to them, and they benefited from the insider trading transactions. Suppose, instead, that they had taken non-public information and posted it on social media, in the public domain. Such non-public information affects investor decisions and stock price movements (Patil & Bagodi, 2021). Can the authorities classify this type of transaction as insider trading?
Very little research has been done on insider trading related to social media and to people close to insiders. This research studies the sharing of the stock code of a state-owned pharmaceutical company on Twitter by the son of the President of the Republic of Indonesia before the President announced the plan to provide free vaccines in Indonesia. This research will contribute to the development of the capital market and its laws, which date from 1995, when social media had not yet developed. Retail trading through applications has increased since the COVID-19 pandemic in 2020 (Dannenberg et al., 2020). This research answers the question: is the definition of insider trading in Law of the Republic of Indonesia Number 8 of 1995 on the Capital Market (PM Act) still appropriate in the era of trading with digital applications? The Capital Market Law regulates the disclosure of non-public information (Bhuana et al., 2021). Material information is information or facts that can affect the price of the securities of a public company or may influence shareholders' or investors' investment decisions, based on Article 1 of Law of the Republic of Indonesia Number 8 of 1995. On the other hand, a public company that does not provide material information may be subject to sanctions in the form of fines and criminal penalties (Agusta, 2020; Johan & Ariawan, 2021b). Openness is one of the principles of good corporate governance in a company (Harahap et al., 2020) and is also regulated by the laws and regulations in the capital market. Information disclosure in the capital market is absolute (Herlina, 2018). The capital market is defined as a meeting place for sellers and buyers to conduct securities transactions. Capital market activities require legal instruments that govern them to run regularly and fairly for all parties involved (Suardana et al., 2020). Based on Article 7 of Law of the Republic of Indonesia Number 8 of 1995 on Capital Market, material information is information regarding mergers, acquisitions, consolidations, or the formation of joint ventures; stock splits or stock dividend distributions; income and dividends of an extraordinary nature; the acquisition or loss of essential contracts; a significant new product or invention; changes in the company's financial year; and changes in control or substantial changes in management, as long as the information can influence the price of securities and the decisions of investors, potential investors, or other parties with an interest in the information or facts (Mochtar & Rahayu, 2021). Based on Article 86 of the Indonesian Capital Market Law, issuers whose registration statements have become effective, or public companies, are required to submit periodic reports to the Financial Services Authority (FSA/OJK) and announce the information to the public. They shall also submit information to the Financial Services Authority and announce to the public any material events that could affect the price of securities, no later than the end of the second working day after the event occurred (Leonard & Heriyanti, 2018). The principle of transparency in the capital market means that issuers and capital market supporting professionals must convey material information regularly, correctly, and truthfully, since this material information affects investors' decisions to invest in the capital market (Wisudawan, 2015).
Openness is the principle of transparency by companies in an initial public offering, and it is an essential factor in investors' decisions to buy or sell securities (Pertiwi, Pratiwi, & Perdana, 2019). The uneven distribution of information on a transaction between parties is called asymmetric information (Johan, 2021a). Asymmetric information is a condition where one party has information that the other party does not know (Chernonog & Avinadav, 2019). A company's management is the party that knows more about the company than the investors in the capital market (Hidayah & Ferawati, 2013). Asymmetric information is a condition where one party has better information than the other party (Bergh et al., 2019). Based on the Indonesian Capital Market Law, material information must be reported within a maximum of 2 working days of the transaction or event (Day D+2). Asymmetric information derives from Fama's theory (Hallunovi, 2018) and arises in a market that is classified as weak form (Johan & Ariawan, 2021c). Asymmetric information theory was also introduced by George Akerlof, who is known for the term "The Market for Lemons": buyers do not know the quality of lemons, while the seller knows more about it. This theory became known as the quality assurance theory (Hayati, 2020). The problem of asymmetric information occurs in various sectors of business life, including between banks and customers (Kabul & Afriwan, 2021). Based on the Securities and Exchange Commission's (SEC) Rule 10b-5, to establish that insider trading transactions occurred under the tipper-tippee theory, the government needs to prove that the information provider benefited from the information provided and that the information recipient knew of the benefits obtained (Woody, 2019). The implementation of corporate governance is still fragile in public companies, primarily due to profit-taking behavior that exploits weaknesses in capital market regulations and their penalties (Sya'bani, 2014). Modern corporate governance standards and principles are vital in developing countries, such as Vietnam, China, India, Indonesia, Myanmar, Bangladesh, and others (Dat et al., 2020). Insider trading is a crime in the capital market that is very difficult to prove, even in a developed country like the United States; it is difficult to establish the practice of this crime (Amelia, 2016). Insider traders earn up to 35% profit over 21 days, and the information is shared via social networks; dissemination of information through social networks increases the efficiency of information dissemination (Ahern, 2017). The disclosure of information on prices and volumes in response to news has implications for securities trading (Rogers et al., 2016). The number of stock transactions conducted by a Chief Executive Officer is highly correlated with the compensation package (Brodmann et al., 2019).

Methodology

This research studies several insider trading cases, or cases suspected of being insider trading, in Indonesia, Singapore, and Japan. The study uses the normative juridical method. The information and legal materials used are secondary legal materials. This study obtained legal materials from various news sources and studies the laws and regulations of each country (Johan, 2021b).
Law of the Republic of Indonesia Number 8 of 1995

According to Article 95 of the Indonesian Capital Market Law, an insider of an Issuer or Public Company who possesses inside information is prohibited from buying or selling the securities of that Issuer or Public Company, or of other companies that conduct transactions with the Issuer or Public Company concerned. Insider means commissioners, directors, or employees of an Issuer or Public Listed Company; major shareholders of the Issuer or Public Listed Company; an individual whose position or profession or business relationship with the Issuer or Public Company allows that person to obtain inside information; or a Party which within the last 6 (six) months is no longer one of the parties referred to previously. Insiders by position are persons holding positions in government institutions, entities, or agencies. A business relationship is defined as a working relationship or partnership in business activities, including, among others, client, supplier, contractor, customer, and creditor relationships. This is illustrated in Figure 1. Inside information is material information held by insiders that is not yet available to the public. Based on Article 1 of the Indonesian Capital Market Law, Material Information or Facts are important and relevant information or facts regarding events, incidents, or facts that can affect the price of securities on the stock exchange and/or the decisions of investors, potential investors, or other parties with an interest in such information or facts.

Figure 1. Company Stakeholders

Based on Article 96 of the Indonesian Capital Market Law, insiders referred to in Article 95 are prohibited from influencing other parties to purchase or sell such securities, or from providing inside information to any party that may reasonably be presumed to use the information to purchase or sell securities. Article 97 of the Indonesian Capital Market Law stipulates that any party who tries to obtain inside information from insiders unlawfully, and then obtains it, is subject to the same prohibitions that apply to insiders. Any party who tries to obtain inside information and then obtains it without violating the law is not subject to the restrictions that apply to insiders, as long as the Issuer or Public Company provides the information without limitation. Parties who violate the provisions of Articles 95, 96, and 97 can be subject to a maximum imprisonment of 10 years and a maximum fine of 15 billion rupiahs. The parties in question can be individuals or companies.

Dissemination of Information by the Family of State Officials or Public Servants

A state official or public servant falls into the insider-by-position category because he or she holds a position in a government institution, entity, or body. State officials must convey information to the public on a policy taken; a president must openly convey policy information to the public, especially during a pandemic. Therefore, what state officials deliver is policy, not non-public information on companies. Information regarding the provision of free vaccines is a government policy, not a company policy, and this information cannot be categorized as non-public company information. Sharing information via Twitter with the content "$KAEF?" cannot be classified as material information. KAEF is the stock code for Kimia Farma, and it is accompanied by a "?" (question mark).
KAEF by itself is not important and relevant information or facts regarding events, incidents, or facts that can affect the price of securities on the Stock Exchange and/or the decisions of investors, potential investors, or other parties interested in such information or facts; the definition of material information is based on Article 1 of the Indonesian Capital Market Law. A question mark indicates a question that requires an answer.

Information Dissemination in the Digital Age

Information dissemination has become massive in the digital era (Ahern, 2017) and can be done simply by posting information on social media, where it will immediately be seen or read by tens of thousands to millions of people. If a social media account has tens of thousands of followers, the account's followers will retweet the information or forward it to other social media, and other accounts will continue to spread or retweet it repeatedly. Information that was previously non-public will become public information. This pattern of information dissemination is shown in Figure 3. The first discoverer of the information benefits from time: discoverers take advantage of the information that is disseminated, the next recipients benefit less, and the latest recipients suffer a loss. This information can be information about shares. It is not conveyed equally, but passes from certain circles to other circles and so on; the information goes viral until it becomes public information, and the last party to receive it is the party that does not benefit. The speed of information dissemination in the digital age is also breakneck: a recipient of information can easily resend it to others, and this spread is known as going viral. The dissemination of such information has become a concern of regulators (BBC Indonesia, 2019), and messaging platforms have limited the forwarding of information (CNBC, 2020).

Redefining Insider Trading

Determining insider trading transactions is not easy. In the case of the child of a state official who disseminated what is suspected to be non-public information, the financial authorities need to investigate the share ownership of the official's child who published the information on Twitter or any other social media. Did the official's son or family member purchase shares and benefit from information related to the parent's announcement? Investigations can be carried out by examining the ownership, sale, and purchase of shares on behalf of the person concerned. The definition of insider trading in Indonesia needs to be recast based on the related information instead of the related people. Related information refers to the dissemination of information that gives rise to insider trading; related people are defined by their relationship to the source of the information. Under the Indonesian Capital Market Law, insider trading is still defined through insiders related to the information. Judging from the cases above, the regulations of Singapore and Japan have embraced the related-information system: in Singapore, an executive was penalized for insider trading conducted through his wife's account, and in Japan, the executive's friend was punished for obtaining information from his friend and profiting from insider trading transactions.
This is illustrated in Figure 4 below. Authorities need to adjust the definitions of insiders and non-public material information in line with the development of digital technology. The massive and rapid flood of information has transformed people's lives. Insiders, or people related to securities transactions, are not only the people associated with the company but also people who obtain information and gain profits without having any relationship with the public company whose securities are traded. A restaurant waiter, a photocopy clerk, a document courier, or anyone else who may obtain non-public information must be included in the insider trading criteria. An insider trader is a party that obtains non-public information and uses it to gain profits or minimize losses. The authorities can determine minimum criteria for share price movements and transaction volume so that transactions can be classified as abnormal and investigated as insider trading; a simple screen along these lines is sketched after this section.

Conclusion

The Indonesian Capital Market Law has been in place since 1995. Technology has moved from the era of typewriters and fax machines to the digital age, and information dissemination has changed from person-to-person to one-to-many. The spread of information is massive and fast, and the information disseminated includes both public and non-public information. Various parties disseminate non-public information, which can provide benefits or reduce losses for the party who obtained it. Information is rapidly disseminated through physical meetings and communication tools such as social media. Unrelated parties can also share and obtain non-public information; these parties take advantage of being owners of the information and should be classified as part of the insider trading group. The Indonesian Capital Market Law still classifies insider trading based on people related to the information. The Capital Market Act needs to change from related people to related information; with this change, the Indonesian Capital Market Law can adapt to the age of information. Amendments to the Capital Market Law need to be made immediately. The amendments will create legal certainty, protect capital market investors, and adapt the regulations to technological developments. This study suggests that the capital market authority amend capital market regulations in line with developments in information technology. The management of publicly listed companies needs to be more careful in providing non-public information, as providers of information may also be subject to sanctions; non-public information must be discussed privately, not in the public arena. This research has several limitations: it is based on economic and legal aspects only. Further research could include other variables, such as politics, the stage of capital market development, and macroeconomic conditions.
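As a concrete illustration of the abnormal-movement screen suggested above, the sketch below flags trading days on which both the return and the volume exceed thresholds set relative to a trailing baseline. The thresholds, window length, and price/volume inputs are hypothetical; a real surveillance rule would be calibrated and applied by the regulator:

import statistics

def flag_abnormal(prices, volumes, window=20, ret_z=3.0, vol_mult=3.0):
    """Return indices of days that are candidates for an insider trading inquiry."""
    flags = []
    for t in range(window + 1, len(prices)):
        rets = [prices[i] / prices[i - 1] - 1 for i in range(t - window, t)]
        ret_t = prices[t] / prices[t - 1] - 1
        sigma = statistics.stdev(rets)                     # baseline return volatility
        base_vol = statistics.mean(volumes[t - window:t])  # baseline trading volume
        if abs(ret_t) > ret_z * sigma and volumes[t] > vol_mult * base_vol:
            flags.append(t)
    return flags

Flagged days preceding a material announcement could then be matched against the share ownership reports of insiders and their family members.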
Dynamics of Intermediate Mass Black Holes in Star Clusters

We have followed the evolution of multi-mass star clusters containing massive central black holes by N-body simulations on the GRAPE6 computers of Tokyo University. We find a strong cluster expansion and significant structural changes of the clusters. Star clusters with IMBHs have power-law density profiles $\rho \sim r^{-\alpha}$ with slope $\alpha = 1.55$ inside the influence sphere of the central black hole. This leads to a constant density profile of bright stars in projection, which rules out the presence of intermediate mass black holes in core collapse clusters. If the star clusters are surrounded by a tidal field, a central IMBH speeds up the destruction of the cluster until a remnant of a few hundred stars remains, which stays bound to the IMBH for a long time. We also discuss the efficiency of different detection mechanisms for finding IMBHs in star clusters.

Introduction

X-ray observations of starburst and interacting galaxies have revealed a class of ultra-luminous X-ray sources (ULX), with luminosities of order L ≈ 10³⁹ to 10⁴¹ erg/sec (Makishima et al. 2000). If the flux is radiated isotropically, this exceeds the Eddington luminosities of stellar mass black holes by orders of magnitude, making ULX good candidates for IMBHs. Many ULX appear to be associated with star clusters (Fabbiano, Schweizer & Mackie 1997); the irregular galaxy M82, for example, hosts an ULX with luminosity L > 10⁴⁰ erg/sec near its center (Kaaret et al. 2001) whose position coincides with that of the young (T ≈ 10 Myrs) star cluster MGG-11. Portegies Zwart et al. (2004a) and McMillan et al. (2004) have performed N-body simulations of several star clusters in M82 and showed that runaway merging of massive stars could have led to the formation of an IMBH with a few hundred to a few thousand solar masses in MGG-11, thereby explaining the presence of the ultraluminous X-ray source. The fact that a considerable fraction of star clusters might have formed intermediate mass black holes (IMBHs) has interesting consequences. For example, IMBHs of a few 100 to a few 1000 M⊙ would explain why the mass-to-light ratios in several globular clusters increase towards the center (Gerssen et al. 2002; Colpi, Mapelli & Possenti 2003), although the data presented so far are also compatible with an unseen concentration of neutron stars and heavy mass white dwarfs. IMBHs in star clusters would also be prime targets of the forthcoming generation of ground- and space-based gravitational wave detectors and could provide the missing link between the stellar mass black holes formed as the end product of stellar evolution and the 10⁶ to 10⁹ M⊙ black holes found in galactic centers (Ebisuzaki et al. 2001). In this paper we explore the dynamical evolution of star clusters containing massive black holes. We study how star clusters evolve during a Hubble time and compare the outcome of our simulations with galactic globular clusters in order to determine which clusters are likely to contain IMBHs. We also study what is left bound to an IMBH after the parent cluster is dissolved and discuss ways to detect an IMBH in a globular cluster.

Details of the Simulations

We simulated the evolution of star clusters containing between N = 16,384 (16K) and 131,072 (128K) stars, using the collisional Aarseth N-body code NBODY4 (Aarseth 1999) on the GRAPE6 computers of Tokyo University. Clusters were treated as isolated and followed King W₀ = 7.0 profiles initially.
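The ULX argument in the introduction turns on the Eddington limit: for hydrogen accretion, L_Edd = 4πGMm_p c/σ_T ≈ 1.26 × 10³⁸ (M/M⊙) erg/sec. Inverting this gives the minimum black hole mass consistent with an isotropically radiated luminosity, which is what makes ULXs IMBH candidates. A short Python sketch of this standard estimate (a worked aside, not part of the simulations):

L_EDD_PER_MSUN = 1.26e38   # erg/s per solar mass, hydrogen accretion

def min_bh_mass(luminosity_erg_s):
    """Smallest mass (in Msun) whose Eddington luminosity reaches the given source."""
    return luminosity_erg_s / L_EDD_PER_MSUN

# The M82 source with L > 1e40 erg/sec would require M > ~80 Msun if isotropic,
# well above the black hole masses produced by ordinary stellar evolution.
print(f"{min_bh_mass(1e40):.0f} Msun")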
The models started with IMBHs of M_BH = 1000 M⊙ that were initially at rest in the cluster centers. When creating the clusters, stellar velocities were chosen such that the initial model was in dynamical equilibrium in the combined potential of the cluster and the central IMBH. Our simulations included stellar evolution, dynamical relaxation and the tidal disruption of stars which get too close to the central black hole. The initial half-mass radius of the clusters was 4.9 pc. So far we have not included stellar collisions or the change of stellar orbits due to gravitational radiation in the code, since these processes are not likely to play an important role for the type of clusters considered in this study. The mass function of stars was given by a Kroupa (2001) IMF with a lower mass limit of 0.1 M⊙, and we modelled stellar evolution by the fitting formulae of Hurley et al. (2000). Two series of simulations were made, one with a mass function extending up to 30 M⊙ and a second series in which the maximum stellar mass was equal to 100 M⊙. In the first series, only few black holes were formed, all of them with masses below 3 M⊙, while in the second series a significant number of black holes with masses up to 45 M⊙ were formed. We assumed a 100% retention rate for black holes in the clusters at the time of formation, so the situation in real globular clusters is probably somewhere between our two cases. More details of the simulations can be found in Baumgardt et al. (2004b).

Figure 1. 3D mass density profile after T = 12 Gyrs for 4 clusters starting with particle numbers between 16,384 ≤ N ≤ 131,072. Solid lines mark the N-body results, dashed lines a single power-law fit to the density profile inside the influence radius of the black hole (shown by a solid circle). For all models we obtain slopes near α = 1.55 for the central stellar cusp.

Figure 2. Projected density profile of bright stars (top) and projected velocity dispersion (bottom) of the cluster starting with N = 131,072 stars. The projected distribution of bright stars has a constant density core, similar to that seen in most globular clusters. Observations of the velocity dispersion could reveal the black hole if a sufficiently large number of stars at radii r/r_h < 0.01 can be observed (bottom panel).

Density profiles

In order to calculate the density profile, we overlaid between 5 (128K) and 20 (16K) snapshots centered at T = 12 Gyrs, creating roughly the same statistical uncertainty for all models. All snapshots were centered on the position of the IMBH. We then fitted the combined density profile inside the influence radius of the black hole with a power-law. As can be seen in Figure 1, we obtain power-law profiles ρ ∝ r^−α inside the influence radius of the black hole with a slope around α = 1.55 for all clusters; there is no dependence of the slope on the particle number. The slope we obtain for multi-mass clusters is slightly flatter than the α = 1.75 slope found for single-mass clusters by Bahcall & Wolf (1976) and Baumgardt et al. (2004a). The reason is that while high-mass stars still follow an α = 1.75 profile, they are not numerous enough to determine the overall profile. The upper panel of Fig. 2 depicts the projected distribution of bright stars for the cluster with N = 128K stars. We define bright stars to be all stars with masses larger than 90% of the turn-off mass which are still main-sequence stars or giants at T = 12 Gyrs. Their density distribution should be representative of the distribution of cluster light.
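The slope measurement described above can be sketched as follows: bin the stars inside the influence radius into radial shells, compute the 3D mass density per shell, and fit log ρ against log r. The positions below are synthetic, drawn from an α = 1.55 cusp rather than taken from the simulations:

import numpy as np

rng = np.random.default_rng(1)
alpha, r_infl, n = 1.55, 1.0, 20000
# For rho ~ r^-alpha, the enclosed mass grows as r^(3-alpha); invert its CDF to sample radii.
r = r_infl * rng.random(n) ** (1.0 / (3.0 - alpha))
masses = np.full(n, 0.5)                       # equal 0.5 Msun stars for simplicity

edges = np.logspace(-2, 0, 15) * r_infl        # radial shells inside the influence radius
shell_mass, _ = np.histogram(r, bins=edges, weights=masses)
shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
rho = shell_mass / shell_vol
r_mid = np.sqrt(edges[1:] * edges[:-1])        # geometric bin centers

slope, _ = np.polyfit(np.log10(r_mid), np.log10(rho), 1)
print(f"fitted slope alpha = {-slope:.2f}")    # recovers ~1.55 by construction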
The projected density distribution of bright stars does not show a central rise and can instead be fitted by a model with a constant density core. The reason is that, due to mass segregation, compact remnants, which are more massive than main-sequence stars, have been enriched in the core while the density of main-sequence stars has decreased in the center. A cluster with a massive central black hole would therefore appear as a standard King profile cluster to an observer, making it virtually indistinguishable from a star cluster before core collapse. Core collapse clusters have power-law density profiles in their centers, which is in contradiction with this profile. Since the central relaxation times of core collapse clusters are much smaller than a Hubble time, any cusp profile would have been transformed into a constant density core if an IMBH were present in any of these clusters, so the presence of IMBHs in core collapse clusters is ruled out. The lower panel of Fig. 2 shows the velocity dispersions, both the measured one and the one inferred from the mass distribution of stars. The inferred velocity dispersions were calculated from the Jeans equation (Binney & Tremaine 1987, eq. 4-54) and different mass distributions under the assumption that the velocity distribution is isotropic (i.e. β = 0). The velocities calculated from the mass distribution of the cluster stars alone give a good fit at radii r/r_h > 0.2, where the mass in stars dominates (except at the largest radii, where the velocity distribution becomes radially anisotropic). At radii r/r_h < 0.2, the contribution of the black hole becomes important; at a radius r/r_h = 0.01, the velocity dispersion is already twice as high as that due to the stars alone. For a globular cluster at a distance of a few kpc, such a radius corresponds to central distances of one or two arcseconds. Of order 20 stars would have to be observed to detect the central rise at this radius with a 95% confidence limit. This seems possible for both radial velocity and proper motion studies with HST. Fig. 3 depicts the semi-major axes of the stars deepest bound to the IMBH for a cluster with N = 128K stars and a mass function that extends up to 100 M⊙. The energy of the deepest bound star decreases quickly in the beginning, when it still has many interactions with passing stars. When the semi-major axis becomes significantly smaller than that of other deeply bound stars, interactions become rare and the energy change slows down considerably. In all simulations, the innermost stars are among the heaviest stars formed in the cluster and would be massive black holes of several 10 M⊙ for an IMBH in a globular cluster. The innermost star will therefore not transfer mass onto the IMBH. All other stars have semi-major axes of R > 10⁶ R⊙, which is too far for mass transfer, even if some stars move on strongly radial orbits. An IMBH in a star cluster can therefore only accrete gas from disrupted stars, or when a star captured through tidal heating is close enough to the IMBH to undergo mass transfer (Hopman et al. 2004).

Gravitational radiation

The dashed line in Fig. 3 marks the radius inside which a 20 M⊙ black hole can merge with a 1000 M⊙ IMBH within a Hubble time. The orbit of the deepest bound star is still a factor of 6 wider than this radius, so gravitational radiation does not significantly change the stellar orbit.

Figure 3. Semi-major axis of the three stars deepest bound to the IMBH as a function of time for the cluster with N = 128K stars and a high upper mass limit. The star closest bound to the IMBH is almost always another black hole, which is among the heaviest stars in the cluster; the other stars are too far away from the IMBH to undergo mass transfer.
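The dashed line in Fig. 3 follows from the gravitational-wave inspiral time. For a circular orbit the Peters (1964) formula gives t_gw = 5c⁵a⁴ / (256 G³ m₁m₂(m₁+m₂)), so the critical semi-major axis for merging within a Hubble time follows by inverting for a. The sketch below assumes circular orbits; eccentricity, which is ignored here, shortens the inspiral considerably:

G, C, MSUN, YR, RSUN = 6.674e-11, 2.998e8, 1.989e30, 3.156e7, 6.957e8  # SI units

def a_merge(m1_msun, m2_msun, t_years):
    """Semi-major axis (m) from which a circular binary merges within t_years."""
    m1, m2 = m1_msun * MSUN, m2_msun * MSUN
    a4 = 256.0 * G**3 * m1 * m2 * (m1 + m2) * (t_years * YR) / (5.0 * C**5)
    return a4 ** 0.25

a_crit = a_merge(1000.0, 20.0, 13e9)
print(f"a_crit ~ {a_crit / RSUN:.0f} Rsun")    # ~200 Rsun for a 1000 + 20 Msun pair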
If the number of cluster stars were higher or the initial model more concentrated, the innermost star would be bound more tightly and the two black holes could merge with each other. In this case the system would become visible for gravitational wave telescopes like LISA during the final stages before merging. Fig. 4 shows the mass of bound stars as a function of time for two N = 16K clusters, one with an IMBH of 1000 M⊙ in its center and one without an IMBH. Both clusters move on circular orbits with radius R = 8 kpc around the galactic center. The bound mass decreases in both clusters due to mass loss from stellar evolution and because during each relaxation time a certain fraction of stars gains the energy necessary for escape through encounters with other cluster stars. The cluster with an IMBH loses its mass even faster since, in addition to the previous processes, tidal disruption of stars by the IMBH also decreases the average energy of the cluster stars, thereby heating the whole system. As a result, stars flow over the tidal boundary much faster. Mass loss slows down considerably when the number of stars has dropped to less than a few hundred, since by then most of the mass is in the central black hole and the relaxation time starts to increase with decreasing cluster mass. As a consequence, a system of about 100 stars, composed mainly of main-sequence stars and white dwarfs, is still bound to the IMBH after a Hubble time. In the solar neighbourhood, the central IMBH could easily be found in such a cluster through kinematic studies, since the mass-to-light ratio is very high and the stellar velocities show a near-perfect Keplerian rise. Near the galactic center, such clusters would spiral into the galactic center through dynamical friction (Portegies Zwart et al. 2004b). If the cluster does not contain a central black hole, it is likely to be disrupted before it gets close enough to the center. However, the presence of the IMBH prevents the complete disruption of the cluster. The innermost stars would be stripped from the IMBH only in the very late stages, which could explain the presence of a group of young, massive main-sequence stars less than 0.1 pc from the galactic center black hole (Hansen & Milosavljević 2003).

Conclusions

We have performed two sets of N-body simulations of multi-mass star clusters containing intermediate-mass black holes. We found that the 3-dimensional mass density follows a ρ ∼ r^−1.55 profile around the central black hole. When viewed in projection, the luminosity profile of clusters with massive black holes has a constant density core. The presence of intermediate-mass black holes in core-collapse globular clusters like M15 is therefore ruled out by our simulations. As was shown in Baumgardt et al. (2003), a more natural explanation for mass-to-light ratios that increase towards the center in such clusters is a dense concentration of neutron stars, white dwarfs and stellar-mass black holes. The detection of a central black hole through proper motion or radial velocity measurements of stars in the central cusp around the black hole is possible with HST for the nearest globular clusters. It might also be possible to find black holes in globular clusters by their gravitational wave emission.
Detection through X-ray emission arising from the IMBH is possible only after the tidal disruption of a star or when a star captured through tidal heating is on a close enough orbit to the IMBH. Intermediate-mass black holes also speed up the dissolution of star clusters if the clusters are embedded in a tidal field.
The effect of Avocado leaf extract (Persea americana Mill.) on the fibroblast cells of post-extraction dental sockets in Wistar rats

Background: Tooth extraction, a common practice among the dental profession, causes trauma to the blood vessels during the wound healing process. The acceleration of wound healing, within which fibroblasts play an important role, is influenced by nutrition. Avocado leaves contain a variety of chemicals, including flavonoids, tannins, catechins, quinones, saponins and steroids/triterpenoids. Avocado leaves also contain glycosides, cyanogenic compounds, alkaloids and phenols, which function as anti-inflammatory, antibacterial and antioxidant agents. This avocado leaf content could be used as an alternative medicine to accelerate the wound healing process in post-extraction tooth sockets. Purpose: To determine the role of avocado leaves (Persea americana Mill) in accelerating fibroblast cell proliferation in post-extraction tooth sockets. Methods: The sample was divided into four groups, a control group and three treatment groups. The treatment groups had avocado leaf extract in 3% CMC Na solution inserted into the tooth sockets of Wistar rats. Both the control and treatment groups had their mandibles decapitated, with all the required specimens being prepared on days 3 and 7 of the experiment. The decapitated mandibles with their extraction sockets were prepared as histopathology (HPA) specimens with Hematoxylin-Eosin (HE) staining. Fibroblast proliferation was analyzed by means of a light microscope at 400x magnification. The obtained data was analyzed using a t-test. Result: The t-test obtained a significance value of 0.001 (p < 0.05) between the control and treatment groups. The number of fibroblast cells increased in the group treated on the third day and decreased in the group treated on the seventh day. Conclusion: Avocado leaf extract (Persea americana Mill.) accelerates the proliferation of fibroblast cells in Wistar rats post-tooth extraction.

INTRODUCTION

Within the dental profession, one of the most common procedures performed is tooth extraction, which may cause trauma to the blood vessels. After trauma occurs to the blood vessels, the hemostasis process, involving blood clotting on the walls of damaged blood vessels in order to prevent bleeding, commences. The process of post-extraction wound healing can occasionally cause infections, possibly even leading to complications. 1-4 Patients require appropriate post-extraction management in order to reduce the possibility of complications and accelerate blood clotting, thereby promoting wound healing after extraction. The wound healing process itself is relatively complex, consisting of various processes and assisted by many cells, one of them being fibroblasts. Fibroblasts are cells found in connective tissue responsible for the phagocytosis of bacteria. TGF-β (transforming growth factor β) and PDGF (platelet-derived growth factor) stimulate fibroblasts to become myofibroblasts located at the edges of the ECM, which promote wound closure in tissues. Fibroblasts will appear in the wound area after three days, with the number of fibroblast cells peaking on the seventh day after trauma. [5][6][7][8] The avocado plant has long been used in traditional remedies, 9 since almost all of its constituent parts possess medicinal properties. The leaves, fruit and seeds all have a high nutrient content.
Avocado leaves contain a variety of chemicals, including flavonoids, tannins, catechins, quinones, saponins, steroids/triterpenoids, glycosides, cyanogenic compounds, alkaloids and phenols. [7][8][9] The aim of this study was to determine the effect of avocado leaf extract on fibroblast proliferation rates and inflammation indicators.

MATERIALS AND METHODS

This study used rodent subjects to evaluate wound healing activity indicated by fibroblast proliferation. Approval by the ethical board was granted (304/HRECC.FODM/XII/2017). This study used a post-test only control group design with 24 male Wistar rat subjects, 150-200 grams in weight and aged 2-3 months, which were allowed to freely consume pellet food for one week. The sample was divided into four groups, a control group (n=6) and the treatment groups (n=6 each). In the control group, the subjects were given a 3% CMC Na solution to synchronize the physiological state of their bodies, which had no negative effect on their tissues or organs. The treatment groups had avocado leaf extract in 0.1 cc of 3% CMC Na solution as a solvent inserted into their tooth sockets. Both the control and treatment groups had their mandibles decapitated, with all the required specimens being prepared on the 3rd and 7th days of the experimental period. Fresh avocado leaves were obtained from and identified at UPT Materia Medica, Kota Batu, East Java. The leaves were washed thoroughly, dried and liquified in a blender with 96% ethanol solvent, placed in a tightly sealed jar for 24 hours and agitated in a digital agitator at 50 rpm. The resulting liquid extract was filtered by being passed through a cloth, placed in an Erlenmeyer flask, subsequently evaporated in a rotary evaporator for 90 minutes and stored in a freezer until required. A general anesthetic was administered to the subjects by means of chloroform inhalation. Tooth extraction was performed on the left mandibular incisor using pliers, after which irrigation was carried out using sterile distilled water to remove the remaining debris. In order to stop post-extraction bleeding, a sterile cotton roll was applied to the resulting socket. The treatment protocol adopted was that advocated by Krinke whereby, following removal of the teeth and discontinuation of bleeding from the sockets, the subjects were treated. 10 The treatment group was selected to have its mandibles decapitated and made into preparations on the 3rd and 7th days. Decapitation of the mandible in the treatment group and preparation on the 3rd and 7th days were performed because fibroblasts appear in the wound area three days after the trauma and peak after seven days. On the 3rd and 7th days, a mandibular retrieval procedure was performed by anesthetizing the subjects in a glass gas chamber filled with 10% chloroform. The members of each group had their mandibles decapitated and the carcasses appropriately disposed of. The decapitated mandibles were made into tissue preparations, before being stained with HE (Haematoxylin-Eosin) and observed. Histopathologic observation was performed by counting the number of fibroblasts under a light microscope at 400x magnification. Data was analyzed by means of a one-way ANOVA test with a 5% significance rate and subsequently with an LSD test to establish whether a significant difference existed. 11,12

RESULTS

The results in Table 1 show that after a 3-day experimental period the number of fibroblasts in the treatment group had increased compared to that in the control group (Figures 1 & 2).
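For the two-group comparison reported above, a minimal sketch of the significance test is given below. The fibroblast counts are hypothetical placeholders, since the paper reports only the resulting p-value, and a Welch t-test is used here as one reasonable choice.

```python
from scipy import stats

# Hypothetical fibroblast counts per field (day 3); the paper reports only
# the resulting significance (p = 0.001), not the raw data.
control   = [21, 18, 24, 19, 22, 20]
treatment = [31, 35, 29, 33, 30, 34]

t, p = stats.ttest_ind(treatment, control, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4f}")   # p < 0.05 -> significant difference
```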
Conversely, after seven days the number of fibroblast cells in the treatment group was lower than that in the control group (Figures 3 & 4). Table 1 shows that the extent of fibroblast proliferation on day 3 was significantly different between the wounds in the treatment group and the control group, while on day 7 no such significant difference was observed between the two groups.

DISCUSSION

Tooth extraction will result in a wound which then undergoes a healing process consisting of a series of complex processes involving a number of cells, cytokines, growth factors and extracellular components that play a role in repairing damage to the hard tissue and soft tissue. 2 The wound healing process is influenced by several factors, including: bacterial infections, damage to the tissue, necrosis, hematoma (tissue bleeding), excessive movement of injured tissue, low blood supply and drug administration. 8,13,14 The injured tissue rapidly experiences an acute inflammatory reaction. The inflammatory phase precedes healing and wound immobilization. The instantaneous acute inflammatory phase is characterized by the exudation of plasma proteins and neutrophils. The chronic inflammatory phase is characterized by the presence of chronic inflammatory cells (macrophages, lymphocytes, and plasma cells). 7 In this study, it was observed that the proliferation phase begins on the third day and can last for several weeks. 2 In the proliferation phase, neutrophil cells digest bacteria, then release intracellular enzymes into the surrounding matrix before expiring. Monocytes will move from the blood capillaries into the ECM, transforming into macrophages, a process mediated by the inflammatory mediator TGF-β. TGF-β activates fibroblast cells and stimulates collagen deposition by increasing collagen synthesis. With the synthesis of collagen by fibroblasts, the formation of the epithelial layer will be enhanced by regulating the balance between it and the granulation tissue. 10,15,16 As a result, the mucous epithelium and collagen layer will form. 2 Acceleration of the wound healing process can be confirmed by the presence of several indicators, one of which is the number of fibroblasts. Fibroblasts are key to the proliferative phase of wound healing, destroying the fibrin clot and forming collagen, elastin, glycosaminoglycans and proteoglycans, induced by TGF-β, to form a new extracellular matrix that closes the wound and supports the re-epithelization process. 10 Thus, as indicated in this study, the more fibroblasts appear in the socket sample, the more rapid the wound healing process might be. 2 This study showed that on the third day an increase in the number of fibroblast cells occurred due to active substances such as flavonoids contained in the avocado leaves (Persea americana Mill) that have an anti-inflammatory effect through inhibition of cyclooxygenase and lipoxygenase. In this manner, they are able to limit the number of inflammatory cells that migrate to the wound area. Flavonoids play an important role in maintaining permeability and increasing capillary vascular resistance. Therefore, flavonoids are beneficial in pathological conditions such as disruption of the permeability of the blood vessel walls. Flavonoids and phenol substances in avocado leaves accelerate wound healing through antioxidant mechanisms, donating hydrogen atoms that bond to unstable free radicals which would otherwise cause damage to cell membranes and impede cell functioning.
The existence of this bond renders free radicals more stable, thereby reducing damage to cell membranes and enabling the proliferation phase to proceed more rapidly. This reduces the duration of the inflammatory reaction, induces earlier TGF-β-mediated proliferation and results in the production of fibroblasts. In addition, avocado leaves also contain tannins, active substances that increase the formation of fibroblast cells and capillary blood vessels, causing growth factor to stimulate the proliferation of fibroblast cells. 1,13,17 Another component of avocado leaves (Persea americana Mill) is saponin, an active substance which increases monocyte proliferation and can augment the number of macrophages that secrete growth factors such as EGF, FGF, PDGF and TGF-β. These, in turn, can stimulate the migration to and proliferation of fibroblasts in the wound area in order to synthesize collagen more rapidly. 1,15 This study showed a decrease in the number of fibroblast cells on the seventh day. Because of the significant increase in fibroblast cell production on day 3, enough fibroblasts were present to synthesize collagen. As a result, on day 7 the number of fibroblast cells decreases as they are transformed into myofibroblasts located on the ECM margins of the closing wound tissue. 10,18 This study showed that avocado leaves (Persea americana Mill) topically applied to the post-extraction socket were capable of increasing the number of fibroblasts present in the wound healing process in Wistar rat tooth sockets on day 3.
Electrodeposition of Adherent Polypyrrole Film on Titanium Surface with Enhanced Anti-corrosion Performance

A method of producing an extremely adhesive polypyrrole film on a titanium (Ti) substrate was investigated. The Ti substrate was chemically pretreated and then modified by polydopamine (PDA); the polypyrrole film synthesized by an electrochemical method on the treated Ti substrate displayed good adhesion and enhanced anti-corrosion performance. The study of the corrosion process was conducted through open circuit potential, Tafel polarization and alternating current impedance tests. The adhesive polypyrrole film coated titanium showed a more positive shift in corrosion potential and a lower corrosion rate, indicating greatly enhanced anti-corrosion performance.

Introduction

Polypyrrole (PPy), one of the typical conducting polymers (CPs), has been widely investigated in metal anti-corrosion applications due to its good air and water stability, high conductivity and ease of synthesis at room temperature [1,2]. However, the poor adhesion between the PPy film and the substrate material seriously inhibits its further application [3]. Although significant progress has been made on the adhesion problem, a mild and effective method is still desired [4,5]. Dopamine (DA), a biological neurotransmitter with a structure similar to the essential adhesive component of mussel protein, is known for its good adhesion and easy synthesis [6,7]. Its good adhesion and its ability to be secondarily modified by other molecules make it a suitable material for improving the adhesion between films and substrates when used as an interlayer [8]. In the present study, we investigate an effective method to produce an adherent PPy film on a Ti substrate. Before electrosynthesis of the PPy film, the Ti substrate was chemically pretreated and modified by a PDA interlayer. The obtained PPy film showed good adhesion and enhanced anti-corrosion performance.

Experimental Section

Pyrrole (Py, 98%), sodium dodecyl benzene sulfonate (SDBS, 95%) and DA were purchased from J & K Chemical Technology. Titanium foil (99.7%, 1.5 × 2.5 cm) was purchased from Alfa Aesar. All other chemicals were of analytical grade and were used as received. The Ti surface was chemically pretreated by dipping the Ti pieces into a solution containing 0.5 M NaOH and 1.0 M H2O2 for 10 minutes at room temperature. A polydopamine film was deposited on the Ti surface via the solution oxidation method reported by Lee et al. [6]. Ti pieces were immersed into a dopamine solution (2 mg/ml DA in 10 mM Tris, pH 8.5) for 24 hours, and then washed with ultrapure water and dried. The electrodeposition of the PPy film was performed from an electrolyte solution containing 0.2 M SDBS and 0.2 M Py monomer via a three-electrode system. A platinum wire electrode, a saturated calomel electrode (SCE), and the Ti electrode were used as the counter electrode, reference electrode, and working electrode, respectively. A constant potential of 0.75 V was applied to the system for 600 s; the obtained PPy film was removed from the electrolyte and rinsed with ultrapure water after electrodeposition. The morphologies of the PDA and PDA-PPy films were analyzed using SU-70 scanning electron microscopy (SEM). The adhesion between the films and the Ti substrate was tested with Scotch™ Magic™ Tape 810 (3M).
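Although the paper does not report the deposition charge, a common back-of-envelope check for such a constant-potential deposition is to integrate the chronoamperometric current and convert charge to deposited PPy mass via Faraday's law. The sketch below does this with a purely hypothetical current trace and an assumed 2.25 electrons per pyrrole unit (2 for polymerization plus roughly 0.25 for doping); none of these numbers come from the paper.

```python
import numpy as np

# Hypothetical current trace from the 600 s deposition at 0.75 V;
# the paper does not report these numbers.
t = np.linspace(0, 600, 601)            # s
i = 1.5e-3 + 2.0e-6 * t                 # A, illustrative only

F = 96485.0                              # C/mol, Faraday constant
M_PY = 65.1                              # g/mol, pyrrole unit
N_E = 2.25                               # e- per unit (2 + ~0.25 doping), assumed

Q = np.trapz(i, t)                       # total charge, C
mass = Q * M_PY / (N_E * F)              # Faraday's law estimate, g
print(f"Q = {Q:.2f} C, deposited PPy ~ {mass*1e3:.2f} mg")
```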
The electrochemical impedance spectra were measured in 3.5% NaCl solution at 25 °C with a CHI 660E Electrochemical Workstation using a 5 mV (rms) AC sinusoid signal over a frequency range from 1000 kHz to 0.01 Hz. The Tafel polarization curves were recorded by potentiodynamic polarization in 3.5% NaCl solution at a rate of 0.333 mV s−1. PPy films were electrodeposited on the untreated Ti substrate, the PDA-modified Ti substrate, and the pretreated plus PDA-modified Ti substrate; the obtained samples are denoted as Ti/PPy, Ti/PDA-PPy and SC2Ti/PDA-PPy, respectively. As shown in Fig. 1, the current of all the electropolymerization curves initially decreases and then increases at a fast rate, corresponding to Ti surface oxidation and to PPy nucleation and growth, respectively [9]. The electropolymerization current of Ti/PPy was apparently larger than that of Ti/PDA-PPy and SC2Ti/PDA-PPy throughout the whole process; we deduce that this current change was due to the pretreatment process and the modification by PDA, which changed the surface state and electrical conductivity of the Ti electrode, making PPy formation more difficult.

Figure 1. Chronoamperometric curves of PPy films.

The morphologies of the PDA and PPy films were characterized by SEM and are shown in Fig. 2. From Fig. 2a and Fig. 2c, we can see that after PDA modification, lots of isolated islands were formed and the polishing scratches of the Ti surface were coated by a uniform thin film. This is consistent with the PDA layer formation process reported by Jiang et al. [10]: nanoaggregates of PDA and a uniform PDA film form simultaneously in the suspension and on the substrate, and the PDA particles deposit onto the substrate after a period of time. Fig. 2b and Fig. 2d show the cauliflower-like morphology of the PPy film, which means the PDA modification or chemical pretreatment of the Ti substrate did not change the morphology of the PPy films.

Figure 2. SEM images of PDA and PPy films. a) and b) are the low-resolution SEM images of PDA and PPy; c) and d) are the high-resolution SEM images of PDA and PPy.

The adhesion of the PPy and PDA-PPy films was examined by a peel-off test after the electrosynthesis; the results are shown in Table 1. For the Scotch tape peel-off test, a piece of tape was first taped onto the PPy film, air bubbles were pressed out to ensure good contact between the tape and the film, and then the tape was peeled off from the bottom upwards at a quick speed. If the film is strongly bonded to the substrate, delamination occurs at the tape/film interface; otherwise, delamination occurs at the film/substrate interface. In the case of PPy on untreated Ti, the PPy films were completely removed from the Ti surface for all 6 samples, indicating that the PPy films were barely bonded to the substrate. In the case of PPy on the PDA-modified Ti substrate, 20% of the PPy film was removed from the Ti surface for 2 samples, with 100% adhesion for the other 4 samples, indicating that the cohesive strength between the PPy film and the Ti substrate was greatly enhanced. DA is a small biomolecule bearing a similar structure to the essential adhesive component of mussel protein, so we suppose that the presence of PDA improved the adhesion to the Ti substrate. The best adhesion between PPy and the Ti substrate was found in the case of PPy on pretreated plus PDA-modified Ti: 100% adhesion was observed in all 6 samples, and it was impossible to remove any PPy from the Ti substrate.
We attribute the good adhesion to the pretreatment process and the PDA modification, which change the topography and chemical composition of the Ti substrate, leading to the formation of a more adhesive PPy film.

Table 1. Scotch tape test of PPy and PDA-PPy films after the electrosynthesis.

Sample | Number of samples | Result
PPy on Ti (without any treatment) | 6 | 100% film off (6 cases)
PDA-PPy on Ti (without any treatment) | 6 | 0% film off (4 cases); 20% film off (2 cases)
PDA-PPy on SC2-treated Ti | 6 | 0% film off (6 cases)

The corrosion resistance of PPy and PDA-PPy coated Ti in 3.5% NaCl solution was analyzed by the open circuit potential, alternating current impedance and Tafel polarization tests; the results are shown in Fig. 3. The open circuit potential (OCP, Fig. 3a) for the PPy coated Ti shifted positively to above 0 V compared to pure Ti (−0.33 V), and the OCP of SC2Ti/PDA-PPy was more positive than that of Ti/PDA-PPy and Ti/PPy. Typical Nyquist plots of pure Ti, Ti/PPy, Ti/PDA-PPy, and SC2Ti/PDA-PPy are shown in Fig. 3b; all the curves showed only one capacitive loop in the measured frequency region. The enhancement of corrosion resistance is reflected by the increasing radius of the Nyquist plot [11]. It was observed that the radius of Ti/PPy was much larger than that of pure Ti, indicating the anti-corrosion performance of Ti/PPy. The radius of SC2Ti/PDA-PPy was larger than that of Ti/PPy and Ti/PDA-PPy, indicating the best corrosion resistance. From the Tafel polarization test (Fig. 3c), the corrosion rates for Ti, Ti/PPy, Ti/PDA-PPy, and SC2Ti/PDA-PPy were 4.29×10−7 m/s, 3.18×10−7 m/s, 1.41×10−7 m/s, and 1.34×10−7 m/s, respectively. The much more positive shift in corrosion potential and the lower corrosion rate indicate the greatly enhanced anti-corrosion performance of SC2Ti/PDA-PPy. The SEM images of Ti/PPy and SC2Ti/PDA-PPy after polarization are shown in Fig. 3d and Fig. 3e. It was observed that the Ti/PPy film was badly damaged while the SC2Ti/PDA-PPy film surface appears unchanged, which further verifies the enhanced anti-corrosion performance of SC2Ti/PDA-PPy.

Conclusions

An adherent SC2Ti/PDA-PPy film was electrosynthesized on a chemically pretreated and PDA-modified Ti substrate, and the adhesion between the PPy film and the substrate was evaluated by a peel-off test. The adherent SC2Ti/PDA-PPy film showed improved protective behavior against the corrosion of Ti compared to pure Ti and Ti/PPy, as reflected by the open circuit potential, the polarization curves and the electrochemical impedance spectroscopy tests. The results were also in agreement with the observed morphology of the Ti/PPy film and the SC2Ti/PDA-PPy film after polarization in 3.5% NaCl solution.
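The corrosion rates quoted above come from Tafel analysis of the polarization curves. As an illustration of the underlying procedure, the sketch below extracts the corrosion potential and corrosion current by intersecting straight lines fitted to the anodic and cathodic Tafel regions of a synthetic curve; the data, window choices and slopes are hypothetical, not the paper's measurements.

```python
import numpy as np

def tafel_fit(E, logi, anodic_win, cathodic_win):
    """Estimate Ecorr and log10(icorr) by intersecting straight lines
    fitted to the anodic and cathodic Tafel regions (E windows in V).
    Purely illustrative; real data need careful window selection."""
    def line(win):
        m = (E >= win[0]) & (E <= win[1])
        return np.polyfit(E[m], logi[m], 1)       # slope, intercept
    (ka, ba), (kc, bc) = line(anodic_win), line(cathodic_win)
    E_corr = (bc - ba) / (ka - kc)                # intersection point
    return E_corr, ka * E_corr + ba               # Ecorr, log10 icorr

# synthetic polarization curve: Ecorr = -0.10 V, icorr = 1e-6 A/cm^2,
# 120 mV/decade Tafel slopes on both branches
E = np.linspace(-0.4, 0.2, 300)
i = 1e-6 * (10**((E + 0.10) / 0.12) - 10**(-(E + 0.10) / 0.12))
logi = np.log10(np.abs(i) + 1e-12)
print(tafel_fit(E, logi, (0.0, 0.2), (-0.4, -0.25)))
```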
Composite lymphoma of T-cell rich, histiocyte-rich diffuse large B-cell lymphoma and nodular lymphocyte predominant Hodgkin lymphoma: a case report

Background: Composite lymphoma is a rare entity where two or more distinct subtypes of lymphoma coexist within a single organ or tissue. Case presentation: We report a new case of a 67-year-old Caucasian male patient, who presented with fatigue, weakness, weight loss, and polyuria. He also had epigastric and left lumbar pain, an enlarged spleen, and an enlarged left axillary lymph node on examination, with no relevant medical or familial history. A biopsy from the node showed an appearance of T-cell rich, histiocyte-rich diffuse large B-cell lymphoma and nodular lymphocyte predominant Hodgkin lymphoma. The patient was initially treated with the adriamycin (doxorubicin), bleomycin, vinblastine, dacarbazine chemotherapy regimen, then switched to the rituximab, cyclophosphamide, doxorubicin, vincristine, prednisone regimen. During the therapy, some regression was noticed, especially in the size of the splenic enlargement; however, the patient died 2 months after completing the regimen. Conclusion: Composite lymphomas should continue to be studied. Also, treatment is still debatable in type, efficacy, and outcomes.

Background

The term "composite lymphoma" (CL) was first used by Custer to describe the coexistence of more than one histological type of lymphoma in a single patient [1]; however, the present term is now limited to the rare coexistence of two or more morphologically and immunophenotypically distinct lymphoma clones occurring within a single organ or tissue [2]. Composite lymphoma incidence is low, varying from 1% to 4.7% [3].

Case presentation

A 67-year-old Caucasian male presented to the department of gastroenterology at Tishreen University hospital, Lattakia, Syria, complaining of fatigue and weakness that began 1 month earlier, accompanied by an unintentional weight loss of about 15 kg over a 15-day period. His medical history included hypertension and a cerebrovascular accident with no residual complications. The patient could not recollect the etiology of this accident. There was no history of familial lymphoma or cancer. The patient had no vomiting, fever, or diarrhea but had polyuria and urinary hesitancy, exertional dyspnea, and orthopnea. Ultrasound study of the abdomen and the pelvis showed a hypoechoic nodule in the liver (3 × 3 cm), a massive spleen enlargement (22 cm) with a few hypoechoic nodules, and a node above the splenic vein, possibly in the pancreas. The kidneys had clear corticomedullary differentiation with a few simple cysts, the largest measuring 45 × 59 mm in the left kidney and 67 × 78 mm in the right one (Fig. 1). Computed tomography (CT) with intravenous (IV) contrast was then performed and showed a homogeneous node measuring 35 mm in diameter in the fourth segment of the liver, homogeneous lobular splenic enlargement, bilateral cysts in the kidneys with heterogeneous fixation of the contrast agent in the renal cortex, and calcifications in the coronary arteries, celiac trunk, and splenic artery. Based on these investigations, lymphoma was suspected. Later, bone marrow aspiration showed normocellular marrow, granular leukocytes in all stages of differentiation, increased eosinophils and neutrophils, plasma cells < 3%, and red blood cell precursors in all stages of differentiation without abnormalities. Megakaryocytes were normal in number with a slight decrease in size. The biopsy of the left axillary lymph node was then studied.
Macroscopically, the lymph node was large (6 × 4 × 3 cm) and gray-tan in color with a soft consistency. Microscopy with subsequent immunostaining revealed foci with complete effacement of the lymph node architecture and diffuse proliferation of cohesive large neoplastic lymphoid cells with large irregular nuclei and prominent nucleoli (Fig. 2). These cells were positive for CD20 and Bcl-2. The background cells were predominantly T lymphocytes (CD3+) and histiocytes, whereas B cells (CD20+) were markedly depleted and Reed-Sternberg-like cells (LP cells) were absent. Other foci in the lymph node showed proliferation of LP cells against a background of mixed inflammatory exudates in the absence of CD20+ large lymphoid cells. LCA (CD45) and CD20 were positive in LP cells, whereas CD30 and CD15 were negative (Fig. 3). This panel supports the diagnosis of composite lymphoma. After the initial diagnosis of lymphoma, the patient was treated with a once-every-2-weeks dose of the adriamycin (doxorubicin), bleomycin, vinblastine, dacarbazine (ABVD) chemotherapy regimen for two sessions; then, he was switched to the rituximab, cyclophosphamide, doxorubicin, vincristine, prednisone (R-CHOP) regimen after the diagnosis was confirmed with immunostaining, and had 11 sessions every 3 weeks. Another CT scan with IV contrast was carried out 2 months after the initiation of the therapy, and it reported the presence of a left axillary nodule measuring 3.5 cm in diameter with smaller nodules in both axillae no more than 12 × 7 mm in size. It also showed thickening in the stomach wall (up to 24 mm), in the cardia, and in the upper half of the body, as well as a mild splenic enlargement (Fig. 4). The patient, unfortunately, died from cardiac arrest 2 months after the completion of the chemotherapy regimen. The patient's family refused to permit an autopsy, which prevented the establishment of an accurate causative relationship between the death and the disease.

Discussion

Composite lymphoma was defined as the combination of more than one lymphoma in the same patient at different sites or in the same location. This definition has developed over time and currently is more accurately described as the presence of two or more types of lymphoma in the same lymph node or extranodal site. Usually one of these types has low-grade follicular histologic characteristics and the other a diffuse architecture [4]. This entity is rare, occurring in about 1-4.7% of all lymphomas, and most of the reports are case reports or small case series. Cases that describe a composite lymphoma consisting of two or more types of NHL are more common than those reporting NHL and HL. Only six cases in the literature show a combination of classical Hodgkin lymphoma and DLBCL in the same site [2,4,5]. The diagnosis of composite lymphoma is based on diagnosing each lymphoma. TCR-HR-DLBCL is diagnosed via the pathologic appearance of DLBCL with less than 10% large neoplastic B cells against a background of a prominent inflammatory infiltrate of small T cells and histiocytes [6,7]. NLP Hodgkin lymphoma affects a lymph node that is nodular, homogeneous and pale, with no sclerosis between nodules (in comparison with classical HL), small B lymphocytes, and a variable number of large atypical cells with lobulated nuclei that resemble Reed-Sternberg cells (LP cells/popcorn cells) [7,8]. The immunohistochemical panel confirmed the diagnosis; however, both types exhibit almost the same panel [CD20+, CD45+, CD15-, CD30-, CD138-, and Bcl-2 variable].
There are other tests that we could not perform on this patient [6]. It is worth mentioning that these two entities are possibly intertwined, as recent data suggest the possibility that NLPHL could progress to, or contain areas of, TCR-HR-DLBCL [7]. In our case, the lymph node biopsy was interpreted as a composite lymphoma of TCR-HR-DLBCL and NLPHL, with the histoarchitecture of the lymph node and the composition of the background cell populations providing the most reliable diagnostic features. The majority of NLPHLs demonstrate a nodular pattern; our case demonstrates a diffuse pattern, with a diffuse T-cell- and histiocyte-rich infiltrate in the background. In conclusion, the unusual morphologic and immunophenotypic features of this challenging case support the diagnosis of composite lymphoma with features of TCR-HR-DLBCL and NLPHL rather than a THRLBCL-like variant of NLPHL. The etiology and pathogenesis of the different types of CL are not clear. Variable definitions and mechanisms have been proposed to explain this entity, of which the theory that the development of one type of lymphoma can induce the development of the other type is the most suitable [2]. Epstein-Barr virus (EBV) is suspected to cause composite lymphoma, as it is implicated in multiple types of lymphoma, especially of B cells; however, proving the relation requires testing of p53 levels, which was not possible to perform. The classification of composite lymphoma is still primitive; the different combinations of lymphomas described in the literature include B-cell lymphoma of any type with HL, T-cell lymphoma with HL, or even two distinct B-cell lymphomas [2]. Complete recovery is achieved in about 60% of patients with advanced disease on R-CHOP. In the series by Ho et al., the first reported patient received six cycles of R-CHOP, initially achieving a complete response and subsequently relapsing with DLBCL 15 months after the completion of therapy. The second patient declined combination chemotherapy and was treated with single-agent rituximab, achieving stable disease [9]. ABVD is an effective choice in patients with Hodgkin lymphoma [10]. As it is a common older regimen, the physician used it after the initial diagnosis of the lymphoma. R-CHOP is found to be more effective than ABVD in non-Hodgkin lymphoma, as well as in advanced NLPHL, especially regarding 10-year recurrence [10][11][12]; therefore, the patient was switched early to R-CHOP after immunostaining confirmed the presence of two types of lymphoma. After our patient received the R-CHOP chemotherapy regimen, he did not show firm evidence of response, with only confirmed regression in the size of the spleen, and he, unfortunately, died shortly after the end of the treatment.

Conclusion

Composite lymphomas should continue to be studied, as the morphology, etiology, and types of lymphomas contributing to the presentation vary. The TCR-HR-DLBCL with NLPHL that we report is a valid variant. Treatment is still debatable in type, efficacy, and the resulting quality of life, but the R-CHOP regimen is a promising choice.
Detection of Hydroxyl Radicals Using Cerium Oxide/Graphene Oxide Composite on Prussian Blue

A composite sensor consisting of two separate inorganic layers, Prussian blue (PB) and a composite of cerium oxide nanoparticles (CeNPs) and graphene oxide (GO), is tested with •OH radicals. The signals from the interaction between the composite layers and •OH radicals are characterized using cyclic voltammetry (CV). The degradation of PB in the presence of H2O2 and •OH radicals is observed and its impact on the sensor efficiency is investigated. The results show that the composite sensor differentiates between solutions with and without •OH radicals by the increase of electrochemical redox current in the presence of •OH radicals. The redox response shows a linear relation with the concentration of •OH radicals, where the limit of detection (LOD) is found at 60 µM (100 µM without the PB layer). When additional composite layers are applied on the composite sensor to prevent the degradation of the PB layer, the PB layer is still observed to be degraded. Furthermore, the sensor conductivity is found to decrease with the additional layers of composite. Although the CeNP/GO/PB composite sensor demonstrates high sensitivity to •OH radicals at low concentrations, it can only be used once due to the degradation of PB.

Introduction

Hydroxyl radicals (•OH radicals) are among the most reactive free radicals of the reactive oxygen species (ROS). In the human body, •OH radicals are produced as a by-product of cellular respiration, primarily in the mitochondria [1,2], of the oxidative burst in phagocytic cells [3,4], and of enzyme reactions [5,6] serving various cellular functions such as restoration of damaged DNA [7], activation of vital proteins [8,9], signaling pathways [10], and responses to external impacts [11]. An imbalance between the production and elimination of •OH radicals occurs due to the overproduction of ROS or oxidants beyond the capability of cells to facilitate an effective antioxidant response [12,13]. An excess of •OH radicals can develop an oxidative stress condition in the human body, leading to interference with the normal function of cells [14] and damage to cellular components including DNA [15,16] and lipids [17,18]. Accelerated aging, cancer, cardiovascular diseases, and neurodegenerative diseases, such as Alzheimer's disease and Parkinson's disease, are a few examples of the negative impacts of oxidative stress [19][20][21]. The detection of •OH radicals as a biomarker, therefore, is a crucial step in the diagnosis of those severe diseases at initial stages.

Synthesis of CeNP/GO Composite

Both CeNPs and GO (50 mg each) were added to 100 mL of deionized water. The mixed solution was then placed in an ultrasonication bath for one hour. Following sonication, the solution was stirred for two hours to form a composite. The homogeneous solution was then transferred to a centrifuge tube and centrifuged at 12,000 rpm for 30 min to separate the precipitated solid from the liquid portion. The composite sample was then collected and dried at 60 °C for 12 h [47]. Once dried, the solid was ground to a fine powder and kept in a desiccator at room temperature. The final CeNP/GO composite was confirmed by SAXS and STEM.

Deposition of PB on a GCE

The deposition of PB on a GCE has been reported in previous literature [48,49].
Briefly, before any PB was immobilized on a working electrode, a GCE was cleaned with 0.1 N sulfuric acid using CV to eliminate impurities on the surface of the electrode. After that, two solutions were prepared to deposit PB on the working electrode. The first solution was made of 2 mM potassium ferricyanide, K3[Fe(CN)6].

Preparation of the CeNP/GO Composite on the PB-Modified GCE

10 mg of the CeNP/GO composite powder were suspended in 10 mL of deionized water. The solution was then sonicated for one hour to obtain a homogeneous solution. The CeNP/GO composite solution was applied to the PB-modified glassy carbon working electrode by delivering 10 µL with a pipette and dried in an oven at 60 °C for one hour. After drying, the CeNP/GO composite was reduced electrochemically by CV over the potential range from 1.7 to −1.7 V at 40 mV/s for 12 cycles to improve the overall composite conductivity [45]. Then, the CeNP/GO composite layer on top of the PB layer was rinsed with deionized water and dried again under nitrogen gas. CV was used to confirm the presence of the composite layer on top of the PB-modified glassy carbon working electrode over the potential range between −0.8 V and 0.8 V at a scan rate of 100 mV/s in the same CV solution used in 2.3.

Detection of •OH Radicals by CeNP/GO/PB on a GCE

To test the composite sensor, •OH radicals were generated using the Fenton reaction. A 10 mM H2O2 solution was mixed with an equal volume of a 10 mM FeSO4·7H2O solution to perform the Fenton reaction. The H2O2 solution was covered with aluminum foil to prevent oxidation from UV light exposure for the duration of the experiment. CV was implemented to test the sensor during the Fenton reaction. The first cycle in CV was run in the H2O2 solution. After that, the test was paused and an equal volume of the FeSO4·7H2O solution was added to the H2O2 solution to begin the Fenton reaction. CV was used continuously to detect the current change of the sensor during the Fenton reaction over the potential range of −0.6 to 0.4 V at 100 mV/s. After the Fenton reaction terminated within 15 min, the sensor was transferred to the same CV solution used in 2.3 and CV was run to check for the degradation of the PB and composite layers on the surface of the electrode. After testing, the sensors were washed with deionized water and dried under nitrogen gas for the next tests. The same test procedure was repeated for a sensor multiple times to investigate the reusability of the sensor. Both the reduction and oxidation responses (i.e., redox responses) in the cyclic voltammogram were used to calculate the redox response (∆A) of the sensor due to the redox reaction between the CeNP/GO composite and •OH radicals. The redox response in terms of the current change (∆A) was calculated using the procedure described in Figure 1, in which ∆A is taken from the difference between the currents at the oxidation and reduction peaks. The CV curve for H2O2 shows no significant redox peaks, which proves that there is no considerable redox reaction between the CeNP/GO modified electrode and H2O2. Figure 2a summarizes the synthesis of the CeNP/GO/PB modified electrode and the detection of •OH radicals in the Fenton solution. The design concept of the sensor is also shown in Figure 2b.
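The ∆A extraction described above amounts to locating the oxidation and reduction peak currents in a voltammogram and taking their difference. A minimal sketch follows; the synthetic trace and peak shapes are illustrative only.

```python
import numpy as np

def redox_response(potential, current):
    """Redox response dA = i(oxidation peak) - i(reduction peak),
    following the peak-difference procedure described for Figure 1."""
    i_ox = current.max()        # anodic (oxidation) peak current
    i_red = current.min()       # cathodic (reduction) peak current
    return i_ox - i_red

# synthetic voltammogram: two Gaussian peaks of opposite sign
E = np.linspace(-0.6, 0.4, 500)
i = 80e-6 * np.exp(-((E - 0.2) / 0.05)**2) - 40e-6 * np.exp(-((E + 0.1) / 0.05)**2)
print(f"dA = {redox_response(E, i) * 1e6:.1f} uA")
```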
Synthesis and Characterization of the CeNP/GO Composite

The composite was synthesized by a low-temperature solution process. The XRD patterns of GO, CeNPs, and the CeNP/GO composite are shown in Figure 3a-c, respectively [50,51]. As for the XRD pattern of the CeNP/GO composite, Figure 3c demonstrates the crystalline structure of CeNPs, which confirms the presence of CeNPs in the composite. It is worth mentioning that the diffraction peaks of the CeNP/GO composite are sharper than those of the CeNPs alone, which is attributed to a highly ordered CeNP crystallinity in the composite. On the other hand, it is observed that the characteristic XRD pattern of GO around 25° is significantly reduced in the CeNP/GO composite, which is thought to be due to the disordered stacking of graphene oxide sheets in the composite. The morphologies of CeNPs and the CeNP/GO composite were investigated using STEM. Figure 3d,e show the bright-field TEM images of CeNPs and the CeNP/GO composite, respectively. In Figure 3d, the CeNPs have an average size from 15 nm to 60 nm with a consistent cubic shape. For the CeNP/GO composite, which is exhibited in Figure 3e, the CeNPs are homogeneously dispersed all over the GO sheets. Thus, it is confirmed that the low-temperature solution process can be successfully used to prepare the CeNP/GO composite.
Characterization of the PB Layer Deposited on a GCE

The CV results for a bare GCE and the PB-modified GCE are shown in Figure 4a,b. Once the electrochemical deposition was performed, two distinct redox peaks appear in the cyclic voltammogram for the PB-modified electrode, as shown in Figure 4b. These two redox peaks, which are found at 0.1 V and 0.6 V, represent the reduced form (Prussian white) and the oxidized form (Berlin green) of PB, respectively. Furthermore, the PB-modified GCE shows a higher conductivity in comparison to the bare GCE. The increase of sensor conductivity is explained by an intrinsic characteristic of PB as an electrocatalyst. PB is well known for its redox catalysis, which increases the rate of electron transfer in a redox reaction between an electrode surface and the electrolyte in a solution [52,53]. The addition of a PB layer on the electrode surface as an interlayer between the electrode and the CeNP/GO composite layer can facilitate the electron transfer, resulting in an increase in the sensor conductivity [54,55]. Additionally, SEM was used to investigate the morphology of the deposited PB layer on a GCE. Figure 4c,d are SEM images of a bare GCE and the PB-modified GCE, respectively. Figure 4c shows an uneven surface of the glassy carbon electrode. After the electrochemical deposition of PB, a homogeneous PB layer across the electrode surface was formed, as shown in Figure 4d. Thus, it is confirmed from the CV and SEM results that the electrochemical deposition is successfully used to deposit a PB layer on the electrode surface.
Characterization of CeNP/GO/PB on a GCE

The composite layer was deposited on the electrode surface using the drop-casting method. Chemisorption is responsible for the attachment of the CeNP/GO composite to the PB-modified electrode. CV was employed to verify the deposition of the CeNP/GO composite on top of the PB-modified electrode. As shown in Figure 5, the two redox peaks of PB turn into one redox peak for the CeNP/GO composite modified sensor. Furthermore, the electrode conductivity increases after applying the CeNP/GO composite layer on top of the PB-modified electrode, which is attributed to the highly conductive GO in the composite. The potential separation (∆Ep) of the oxidation and reduction peaks also decreases for the composite modified sensor. The shift of redox peaks either to positive or negative potential indicates the reversibility of the redox reaction at the electrode surface as a peak-to-peak separation (∆Ep). The ∆Ep's of a bare electrode and the composite on the PB-modified electrode are 980 mV and 170 mV, respectively. This result indicates that PB in the composite tremendously enhances the electron transfer for the redox reaction at the surface of the electrode, which results in the significant reduction of ∆Ep. Furthermore, SEM images were used to confirm the presence of the CeNP/GO composite layer on top of the PB-modified electrode. As demonstrated in Figure 5d,e, the surface morphology of the PB-modified GCE is completely different from the image taken after depositing the CeNP/GO composite on the PB layer. Figure 5e shows the homogeneous dispersion of the CeNP/GO composite on top of the PB-modified GCE. Therefore, it is concluded that the CeNP/GO composite layer was successfully deposited on the PB-modified electrode, and it showed a higher conductivity and required a lower potential to operate than the bare and PB-modified electrodes.
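The peak-to-peak separation ∆Ep quoted above is read off a voltammogram in the same way; below is a minimal sketch, with a synthetic one-couple trace tuned to give roughly the reported 170 mV.

```python
import numpy as np

def peak_separation(E, i):
    """Peak-to-peak separation dEp = E(anodic peak) - E(cathodic peak),
    a standard reversibility measure for a CV trace (illustrative only)."""
    E_pa = E[np.argmax(i)]      # potential of anodic (oxidation) peak
    E_pc = E[np.argmin(i)]      # potential of cathodic (reduction) peak
    return abs(E_pa - E_pc)

# synthetic one-couple voltammogram with dEp ~ 170 mV
E = np.linspace(-0.8, 0.8, 800)
i = np.exp(-((E - 0.15) / 0.08)**2) - np.exp(-((E + 0.02) / 0.08)**2)
print(f"dEp = {peak_separation(E, i) * 1e3:.0f} mV")
```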
Electrochemical Reduction of the CeNP/GO Composite

As mentioned earlier, electrochemical reduction can improve the intrinsic conductivity of GO. Figure 6 shows the cyclic voltammogram for the CeNP/GO composite modified electrode before and after the electrochemical reduction. It is found that the conductivity of the CeNP/GO composite modified electrode significantly increases after treatment with the electrochemical reduction. The increase in the conductivity of the CeNP/GO composite modified electrode is due to the elimination of oxygen groups on GO by electrochemical reduction.

CV for •OH Radical Detection

As mentioned before, a CeNP has dual oxidation states, Ce3+ and Ce4+, on the surface of the particle. Several works have verified that the Ce3+ oxidation state on the surface of a CeNP is responsible for the oxidation reaction with high selectivity toward •OH radicals [40,41]. Our hypothesis is that CeNPs possessing the Ce3+ oxidation state can be used as the sensing element for •OH radicals via this oxidation reaction. Figure 7 shows the cyclic voltammograms of three different layers of the CeNP/GO composite sensor with (7a, b, and c) and without the PB deposition (7d, e, and f) in the presence of H2O2 and •OH radicals. Regardless of the PB layer and additional composite layer(s), the CeNP/GO composite sensor shows an increase of the oxidation current peak around 0.2 V in the presence of •OH radicals; in contrast, there is no oxidation current peak from the bare electrode. The composite shows greater reactivity with •OH than with H2O2: Figure 7a shows, for example, that the redox response (∆A) for •OH radicals is 87 ± 6.2 µA while the ∆A for H2O2 is 37 ± 0.5 µA. Therefore, this proves our hypothesis that CeNPs can be used as a sensing element and that the Ce3+ oxidation state on the surface of the CeNP is the reactive site for •OH radicals. The CeNP/GO composite was catalyzed with PB to improve the conductivity and sensitivity of the sensor toward low detection limits. The redox response (∆A) of three different layers of the composite with and without PB to •OH radicals is presented in Figure 8. As expected, the PB modified composite sensor delivers a significant increase in the ∆A to •OH radicals compared to the composite sensor without the PB modification. Therefore, this experimental result confirms that the PB layer can be used as an electrocatalyst in this composite sensor configuration. It was found, however, that the PB layer degraded after contacting H2O2 or •OH radicals. In order to prevent the degradation of the PB layer, additional layers of the CeNP/GO composite were deposited on top of the PB layer.
The CeNP/GO composite was catalyzed with PB to improve the conductivity and sensitivity of the sensor and to lower its detection limit. The redox responses (∆A) of one, two, and three composite layers with and without PB toward •OH radicals are presented in Figure 8. As expected, the PB modified composite sensor delivers a significantly larger ∆A toward •OH radicals than the composite sensor without the PB modification. This experimental result confirms that the PB layer acts as an electrocatalyst in this composite sensor configuration. It was found, however, that the PB layer degraded after contact with H2O2 or •OH radicals. To prevent this degradation, additional layers of the CeNP/GO composite were deposited on top of the PB layer, on the expectation that the extra composite layers would shield the PB. As shown in Figure 8, however, adding composite layers reduces the ∆A of the sensor in the presence of •OH radicals. This could be because the additional layer(s) enhance agglomeration of the nanoparticles, reducing the number of active sites and hence the ∆A. Moreover, the increased thickness of the additional composite layer(s) lengthens the distance that electrons must travel from active sites at the composite surface to the PB layer, further reducing the ∆A.
Composite Sensor Response to Different •OH Radical Concentrations

Single-layer composite modified sensors with and without the PB deposition were used to detect •OH radicals over the concentration range from 0.1 to 10 mM, as shown in Figure 9. Both modified composite sensors show linear relationships between ∆A and the •OH radical concentration, with R-square (R2) values of 0.93 and 0.89 with and without the PB deposition, respectively. The higher R2 of the composite sensor with the PB deposition can be attributed to the electrocatalytic property of PB, which improves both conductivity and sensitivity, as hypothesized above. Furthermore, the CeNP/GO composite modified sensor with the PB deposition shows a higher ∆A at all tested •OH radical concentrations than the sensor without a PB layer (Figures 7 and 8). The limits of detection (LOD) of the composite sensor, calculated as (3.3 × SD)/b [56], where SD is the standard deviation and b is the slope of the regression line, are 60 and 100 µM with and without the PB modification, respectively; a sketch of this calculation follows below. The electrocatalytic effect of PB is the main factor behind the better sensor performance in terms of both ∆A and LOD. The LOD of this CeNP/GO composite sensor with the PB deposition is comparable to those of other sensors, which fall in the range of 1–100 µM [37,57–59].
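The LOD formula quoted above is easy to apply in practice. The sketch below fits a linear calibration and evaluates (3.3 × SD)/b; the calibration points and blank replicates are illustrative values, not the measured data.

```python
import numpy as np

# Hypothetical calibration: dA (uA) vs [.OH] (mM), plus blank replicates.
conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0])      # mM
dA = np.array([4.0, 18.0, 35.0, 160.0, 310.0])   # uA, illustrative
blank = np.array([0.9, 1.2, 0.7, 1.1, 1.0])      # uA, blank replicates

b, a = np.polyfit(conc, dA, 1)   # slope b (uA/mM) and intercept a
sd = np.std(blank, ddof=1)       # standard deviation of the blank
lod_mM = 3.3 * sd / b            # LOD = (3.3 x SD) / b
print(f"slope = {b:.1f} uA/mM, LOD = {lod_mM * 1e3:.0f} uM")
```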
Effects of PB Degradation on Sensor Performance

PB turns out to be an important layer for improving the sensor conductivity and sensitivity. As mentioned before, however, PB is degraded by the oxidizing species H2O2 and •OH radicals. Since PB serves as the electrocatalyst that accelerates electron transfer for the redox reactions, its degradation directly impacts the ∆A of this composite sensor. Cyclic voltammograms of one, two, and three composite layers with the PB deposition, before and after running the Fenton reaction, are shown in Figure 10. The ∆A of all composites with single, double, and triple layers decreases after performing •OH radical detection, regardless of layer thickness. To confirm that the reduction of ∆A in Figure 10 results from PB degradation, SEM images of the PB layers before and after the Fenton reaction are shown in Figure 11. Figure 11a shows the homogeneous structure of the PB layer, whereas Figure 11b shows a damaged, rough PB surface after exposure to •OH radicals in the Fenton reaction. In Figure 12, the percent decreases of the sensor conductivities are estimated as 22.1%, 19.4%, and 23.2% for the single, double, and triple composite sensors with the PB deposition, respectively. In contrast, the composite sensors without the PB deposition show decreases of only 7.2%, 7.8%, and 8.8% for the single, double, and triple composite layers. Thus, all three composite sensors with the PB deposition degrade roughly three times more than those without it. From the experimental results in Figures 10–12, it is concluded that the decrease of ∆A mainly results from the degradation of the PB layer on the composite sensor, and that the different composite thicknesses (single, double, and triple layers) offer no protection of PB against degradation.
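The percent decreases above are straightforward before/after comparisons. The sketch below reproduces the reported percentages; the absolute before/after responses are hypothetical back-solved values, with only the single-layer 87 µA response taken from the text.

```python
def percent_decrease(dA_before, dA_after):
    """Percent loss of redox response after exposure to the Fenton reaction."""
    return 100.0 * (dA_before - dA_after) / dA_before

# Illustrative before/after responses (uA) for layers with PB deposition.
for label, before, after in [("single", 87.0, 67.8),
                             ("double", 70.0, 56.4),
                             ("triple", 60.0, 46.1)]:
    print(f"{label}: {percent_decrease(before, after):.1f} % decrease")
```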
Conclusions

The CeNP/GO composite deposited on the PB modified GCE was successfully synthesized by electrochemical deposition and the drop casting method. The single-layer CeNP/GO composite sensor is sensitive to •OH radicals, producing a current increase of 87 ± 6.2 µA in CV upon contact with •OH radicals, versus an increase of 37 ± 0.5 µA with H2O2. The composite sensors with and without the PB modification show linear relationships between the redox response and •OH radical concentrations from 0.1 to 10 mM, with LODs of 60 and 100 µM, respectively. The PB layer is a crucial electrocatalyst for improving the sensor efficiency in terms of both the redox response and the LOD. Unfortunately, the PB layer degrades when exposed to •OH radicals or H2O2, and the double and triple composite layers do not prevent this degradation; moreover, they produce lower current responses than the single composite layer. The optimum sensor configuration for •OH radical detection is therefore the PB modified electrode with one layer of CeNP/GO composite. This work presents promising results on the integration of PB and CeNPs in an electrochemical sensor for the detection of •OH radicals, and it also confirms the degradation of PB by •OH radicals.
2020-06-11T09:04:25.990Z
2020-06-01T00:00:00.000
{ "year": 2020, "sha1": "2a1bd08e84e54a23ea50c56406dcb208e12846e2", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-4991/10/6/1136/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7a62115466e2af252b1a7d2c21f2cc654fb49f81", "s2fieldsofstudy": [ "Chemistry", "Materials Science" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
267770037
pes2o/s2orc
v3-fos-license
Galactic Diffuse Emission from Radio to Ultra-high-energy γ-Rays in Light of Up-to-date Cosmic-Ray Measurements

Cosmic rays (CRs) travel throughout the Galaxy, leaving traces from radio to ultra-high-energy γ-rays due to interactions with the interstellar gas, radiation field, and magnetic field. Therefore, it is necessary to utilize multiwavelength investigations of the Galactic diffuse emission to shed light on the physics of CR production and propagation. In this work, we present a spatially dependent propagation scenario that takes account of a local source contribution, while making allowance for an additional CR component freshly accelerated near the sources. In this picture, after reproducing the particle measurements at the solar system, we calculated the intensity and compared the spectral energy distribution to observations from Fermi-LAT and LHAASO-KM2A in the γ-ray band, and from WMAP and Planck among other radio surveys at lower energies. With multiband data considered in conjunction, the former comparison exhibits sufficiently good consistency in favor of our model, while the latter calls for improvement in data subtraction and processing. From this standpoint, there remains potential for advanced observations at energies from milli-eV to MeV toward the Galactic plane, in order to evaluate our model further and more comprehensively in the future.

Introduction

Interactive and ubiquitous, cosmic rays (CRs) play an impactful role in a variety of celestial events. Nonetheless, the origin of CRs remains a century-long mystery. CRs below the knee (around a few PeV) are expected to be of Galactic origin. Away from their energetic accelerators, particles propagate in the magnetic halo of the Milky Way and interact with the interstellar gas, radiation field (ISRF), and magnetic field (IMF), generating secondary particles and photons. Unlike the particle measurements, which lose most of their initial directional information, the secondary photon emission preserves the spatial distribution of the progenitor CRs and thus turns out to be a unique and irreplaceable probe of CR propagation.

Wide-band diffuse emission from radio frequencies to ultra-high-energy (UHE) γ rays is produced via interactions between CRs and the interstellar gas, the ISRF, and the IMF. In general, for a typical random magnetic field of a few µG, the synchrotron radiation of CR electrons and positrons (CREs) results in radio emission from MHz to THz. The bremsstrahlung of CREs in the ISM generates high-energy emission from X rays to γ rays. The inverse Compton scattering (ICS) between CREs and the ISRF, together with the inelastic hadronic interactions between CR nuclei and the ISM, gives diffuse γ rays in a wide energy range. Multiwavelength observations of the diffuse emission can therefore be used to constrain the source distribution and propagation of Galactic CRs.

With forefront space-borne and ground-based instruments, γ-ray observations have advanced into higher energy domains, enabling the study of the Galactic diffuse emission (GDE) in multiwavelength approaches. Radio (Haslam et al. 1981a; Remazeilles et al. 2015) and microwave (Planck Collaboration et al. 2016a,b) surveys of early years, in conjunction with recent measurements above hundreds of MeV (Ackermann et al. 2012) and at even higher energies of multi-TeV (Lemiere et al. 2015; Smith & VERITAS Collaboration 2015), can be comprehensively investigated to renovate and reconstruct the current theoretical framework of CRs.
While some previous studies have taken data-driven, phenomenological methodologies, others have proposed refined models beyond our traditional understanding of CRs, postulating exotic origins (Bringmann & Salati 2007) and/or novel interaction mechanisms (Calore et al. 2015), but only a few of them show the capability to interpret the observed GDE spectrum across different wavelengths and other unforeseen anomalous phenomena simultaneously (Strong et al. 2010; Carlson et al. 2016; Orlando 2016). A straightforward comparison of high-level models, each derived within a certain energy range corresponding to a certain series of astronomical entities and processes, can be misleading, due to the interdependence among the involved physical quantities and the uncertainties introduced by different assumptions. It is therefore a safer and more refreshing perspective to fit all available data at different energy ranges inside a unified configuration when constructing or assessing models, which partially motivates this work. Meanwhile, multiwavelength observations have been used in many recent studies of both point and extended sources, which rely heavily on accurate modeling of the GDE.

Featuring a spatially dependent propagation (SDP) scenario with extra CR origins beyond the standard paradigm, our model, which has been developed based principally on the up-to-date measurements of secondary-to-primary ratios (Zhang et al. 2023a), can be tested more finely in this way. In recent years, it has been argued extensively that the excesses in the observed CR spectra could be naturally explained by a rather young supernova remnant (SNR) located near us (Guo et al. 2016; Liu et al. 2018), whose contribution is also considered in this work. The suggested CR confinement around the accelerators also influences the GDE intensity across different frequencies and will be described more specifically hereafter.

The rest of this paper is organized as follows. Section 2 describes the model setting and obtains the model parameters. Section 3 presents and discusses the results of wide-band diffuse emission. Finally, Section 4 concludes this work.

Model

During the active phases of a variety of astrophysical objects, particles are accelerated up to very high energies (VHE) and then undergo a long voyage across the Galaxy before some of them enter the solar system. The whole process can be described from three major aspects: the injection from sources, the propagation in the Milky Way, and the production of secondaries.

Source injection

In this work, SNRs are considered the dominant sources of CRs and a continuous distribution (Case & Bhattacharya 1996) is adopted. The spatial distribution can be parameterized as

f(r, z) = (r/r_⊙)^a exp[−b (r − r_⊙)/r_⊙] exp(−|z|/z_s),   (1)

where r_⊙ = 8.5 kpc is the distance of the solar system to the Galactic center. The other parameters are adopted as a = 1.69, b = 3.33, and z_s = 0.2 kpc (Case & Bhattacharya 1996); a sketch of this distribution is given below. Besides this constituent, defined as the background component, the contribution from individual nearby sources is also considerable, as evidenced by the peculiarities in the energy spectra (Yoon et al. 2017; Atkin et al. 2018; Adriani et al. 2019; An et al. 2019) and the large-scale anisotropies (Bartoli et al. 2015a; Aartsen et al. 2016; Amenomori et al. 2017; Abeysekara et al. 2019).
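The explicit form of Eq. (1) was lost in extraction; the version above assumes the standard Case & Bhattacharya parameterization, reconstructed from the quoted parameters. Under that assumption, a minimal numerical sketch:

```python
import numpy as np

R_SUN = 8.5                  # kpc, galactocentric distance of the Sun
A, B, ZS = 1.69, 3.33, 0.2   # parameters quoted in the text

def snr_density(r, z):
    """Unnormalized SNR density, assuming the standard Case & Bhattacharya form
    f(r,z) = (r/r_sun)^a * exp(-b (r - r_sun)/r_sun) * exp(-|z|/z_s)."""
    return (r / R_SUN) ** A * np.exp(-B * (r - R_SUN) / R_SUN) * np.exp(-np.abs(z) / ZS)

r = np.linspace(0.1, 20.0, 200)   # kpc
print("radial profile peaks at r =", r[np.argmax(snr_density(r, 0.0))], "kpc")
```

With these parameters the in-plane profile peaks at r = a r_⊙ / b ≈ 4.3 kpc, i.e., the source density is concentrated well inside the solar circle.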
To reproduce these anomalies, a local source is incorporated into our model. Though many astrophysical objects exhibit the ability to accelerate particles up to VHE, SNRs are generally considered the dominant ones. The injection spectra are assumed to follow a broken power-law rigidity distribution:

Q(R) = Q_0 (R/R_br)^(−ν1) exp(−R/R_c) for R ≤ R_br;  Q(R) = Q_0 (R/R_br)^(−ν2) exp(−R/R_c) for R > R_br,   (2)

where Q_0 is the normalization factor, ν1 (ν2) is the spectral index at rigidities lower (higher) than the break rigidity R_br, and R_c is the cutoff rigidity. The detailed parameters of the injection spectra are listed in Tables 1 and 2 for the background and the local source, respectively.

CR propagation

As mentioned above, we adopt an SDP model to describe the propagation of CRs in the Milky Way. The SDP model is primarily motivated by γ-ray observations of pulsar halos, which suggest very slow diffusion in the regions surrounding those pulsars (e.g., Abeysekara et al. 2017; Aharonian et al. 2021) compared with the average diffusion coefficient inferred from CR secondary-to-primary ratios. It is a natural expectation that slow diffusion regions are abundant in the Galactic plane due to the population of such middle-aged pulsars. Therefore, a two-zone diffusion (slow disk plus fast halo) scenario is reasonable for describing the propagation of CRs. It was also shown that this two-zone diffusion model can suppress the amplitudes of the dipole anisotropies and account for the spatial variations of CR intensities and spectra seen in Fermi observations (Guo & Yuan 2018).

Following the previous work (Guo & Yuan 2018), we assume an anti-correlation between the diffusion coefficient and the source distribution. The diffusive halo is divided into two regions: the slow-diffusion inner halo (IH) and the fast-diffusion outer halo (OH). At the outer halo border (r = r_h, z = ±z_h), the free-escape condition, ψ(r_h, z, p) = ψ(r, ±z_h, p) = 0, is imposed. With z_h, ξz_h, and (1 − ξ)z_h being the half-thickness of the entire halo, the IH, and the OH, respectively, the diffusion coefficient is given by

D(r, z, R) = D_0 F(r, z) β (R/R_0)^(δ_0 F(r, z)),   (3)

where β is the velocity of the CR particle in units of the light speed c, and

F(r, z) = g(r, z) + [1 − g(r, z)] (z/(ξ z_h))^n for |z| ≤ ξ z_h;  F(r, z) = 1 otherwise,   (4)

where g(r, z) = N_m/[1 + f(r, z)]. The propagation parameters are summarized in Table 3. For a more complete and detailed description of the SDP model, we refer the readers to Zhang et al. (2023a).

Table 3. Propagation parameters: 0.56, 0.55, 0.1, 4.0, 0.05, 4.0, 5.0, 6.0.

As CRs enter the heliosphere, their energy spectra are modified by the solar magnetic field. This so-called solar modulation effect is accounted for using the prevalent force-field approximation (Gleeson & Axford 1968), sketched below. In this work, without considering the charge-sign dependent modulation effect, a constant modulation potential of ∼550 ± 150 MV is assumed, which, together with the other parameters, suffices to fit the observed CR spectra properly. It should be noted, however, that this simplified treatment introduces uncertainties into the calculated diffuse emission, particularly in the low-frequency (radio) and low-energy (γ-ray) parts.
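The force-field approximation maps the local interstellar spectrum (LIS) to a modulated top-of-atmosphere spectrum through a single potential Φ. The sketch below applies the standard Gleeson & Axford mapping for protons with Φ = 550 MV; the LIS power law is purely illustrative.

```python
import numpy as np

M_P = 0.938  # GeV, proton rest mass

def force_field(Ek, J_lis, phi):
    """Force-field approximation (Gleeson & Axford 1968) for |Z| = 1:
    J_TOA(Ek) = J_LIS(Ek + phi) * Ek (Ek + 2 m_p) / [(Ek + phi)(Ek + phi + 2 m_p)],
    with kinetic energies and phi in GeV."""
    E_lis = Ek + phi
    return J_lis(E_lis) * Ek * (Ek + 2 * M_P) / (E_lis * (E_lis + 2 * M_P))

# Illustrative LIS: a pure power law in kinetic energy (arbitrary normalization).
J_lis = lambda E: 1.8e4 * E ** -2.7
Ek = np.logspace(-1, 2, 7)                 # GeV
print(force_field(Ek, J_lis, phi=0.550))   # Phi = 550 MV
```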
Secondary production

As CRs travel through the Milky Way, they leave substantial imprints in the form of secondary nuclei, leptons (positrons and electrons), and photon emission from radio to γ rays, which provide important probes of their propagation. Relatively heavy secondary nuclei, such as lithium, beryllium, and boron, are produced through fragmentation of the primaries upon collisions with interstellar gas, in which case each nucleon is assumed to inherit the energy per nucleon of its parent. The production of these secondary particles can be described as

Q_j = Σ_i (n_H σ_{i+H→j} + n_He σ_{i+He→j}) v_i ψ_i,   (5)

where ψ_i is the density of the primaries, v_i is the velocity of the parent particle, n_{H,He} is the number density of hydrogen/helium in the ISM, and σ_{i+H/He→j} denotes the cross section of the fragmentation process i → j.

Besides the fragmentation of heavy nuclei, inelastic collisions of light nuclei with the ISM also produce secondary particles, such as antiprotons, electrons, positrons, and γ rays. The source term is the convolution of the primary spectra and the relevant differential cross sections:

Q_j(p) = Σ_i ∫ dp_i [n_H dσ_{i+H→j}(p_i, p)/dp + n_He dσ_{i+He→j}(p_i, p)/dp] v_i ψ_i(p_i),   (6)

where ψ_i(p_i) is the solution of the propagation equation of the primaries in the Milky Way.

The interactions near the acceleration sites are also considered; in that case, we assume that these particles have not experienced substantial propagation before escaping from the source regions and take ψ_i = Q_{pri,i} × τ, where τ is the effective confinement time of the particles around the source, estimated to be 0.2 Myr in our model by fitting the observed CR spectra. Approximately, the average grammage accumulated by all escaped primary particles from t = 0 to t = τ is X = ρcτ ≈ m_p n_H c τ = 0.32 g cm⁻², assuming a constant density n_H = 1 cm⁻³ in the proximity of the sources for such injections; a numerical check of this estimate is given below. The charged secondary particles then experience the same propagation as primary CRs in the Milky Way.

The γ-ray emissivity from pp collisions can also be calculated with Eq. (6). For the production cross section, we use a recently developed interpolation routine based on Monte Carlo simulations, AAfrag, which employs the updated QGSJET-II-04m model tuned with the LHC data on hadronic interactions (Kachelrieß et al. 2019). For proton energies below 4 GeV and helium energies below 5 GeV, which AAfrag does not cover, the older cross-section model of Dermer (1986) is used.

The bremsstrahlung emissivity of CREs is

q(ϵ) = c Σ_s n_s ∫ dE_e ψ_e(E_e) (4α r_0²/ϵ) {[1 + (1 − ϵ/E_e)²] ϕ_{1,s} − (2/3)(1 − ϵ/E_e) ϕ_{2,s}},   (7)

where α is the fine-structure constant, r_0 is the classical electron radius, c is the speed of light, n_s is the number density of gas of species s, E_e is the energy of the CREs, ψ_e is the differential density of CREs, ϵ is the energy of the emitted photons, and ϕ_{1,s} and ϕ_{2,s} are shielding factors, which can be found in Blumenthal & Gould (1970).

The gas distributions used in this work are the ones embedded in GALPROP: the HI distribution from Gordon & Burton (1976), renormalized to the results given by Dickey & Lockman (1990); the molecular gas distribution from Bronfman et al. (1988), with a conversion factor from CO to H2 of ∼1.9 × 10²⁰ molecules cm⁻² (K km s⁻¹)⁻¹ (Strong & Mattox 1996); and the ionized hydrogen distribution from Cordes et al. (1991). For the calculation of the diffuse emission, the gas column densities along each line of sight are further renormalized to match the data from surveys of HI (Kalberla et al. 2005) and molecular gas (Dame et al. 2001).
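The near-source grammage quoted above is easy to verify numerically; the sketch below redoes the arithmetic in cgs units with the stated n_H = 1 cm⁻³ and τ = 0.2 Myr.

```python
# Arithmetic check of the near-source grammage X = m_p * n_H * c * tau.
m_p = 1.6726e-24       # g
n_H = 1.0              # cm^-3
c = 2.9979e10          # cm s^-1
tau = 0.2 * 3.156e13   # s (0.2 Myr)

X = m_p * n_H * c * tau
print(f"X = {X:.2f} g cm^-2")   # ~0.32, as quoted in the text
```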
In addition to the interactions described above, CREs also emit γ rays via the ICS process on the ISRF. The ICS γ-ray emissivity is (Moskalenko & Strong 2000)

q(ϵ_2) = (3σ_T c/4) ∫dϵ_1 ∫dγ [n_γ(ϵ_1) n_e(γ)/(ϵ_1 γ²)] F(q′, Γ),   (8)

where γ is the Lorentz factor of the parent electrons, ϵ_1 and ϵ_2 are the energies of the seed and scattered photons, n_γ(ϵ_1) and n_e(γ) are the differential spectra of seed photons and parent electrons, and

F(q′, Γ) = 2q′ ln q′ + (1 + 2q′)(1 − q′) + (Γ q′)²(1 − q′)/[2(1 + Γ q′)],   (9)

is the yield function of ICS photons for electron-photon interactions of fixed energies, for an isotropic distribution of photons, where Γ = 4ϵ_1 γ, q′ = ϵ_2/[4ϵ_1 γ²(1 − ϵ_2/γ)], and 1/4γ² < q′ ≤ 1.

VHE γ rays experience attenuation when traveling through the ISRF due to the pair production effect (Zhang et al. 2006; Moskalenko et al. 2006). We calculate the absorption fraction of the three-dimensional emissivity of γ rays and then integrate along the line of sight to get the fluxes (Zhang et al. 2023b).

The Milky Way is filled with a randomly oriented magnetic field, in which electrons lose energy and produce wide-band emission through the synchrotron process. After averaging over the pitch angle for an isotropic electron distribution, the synchrotron emissivity of a single electron, integrated over all directions, is (Ghisellini et al. 1988)

P(ν, γ) = (3√3/π) (σ_T c U_B/ν_B) x² {K_{4/3}(x) K_{1/3}(x) − (3/5) x [K²_{4/3}(x) − K²_{1/3}(x)]},   (10)

where ν is the radiation frequency, γ is the electron Lorentz factor, U_B = B²/8π is the magnetic energy density, ν_B = eB/(2π m_e c), x = ν/(3γ²ν_B), and K_{4/3}, K_{1/3} are modified Bessel functions. The Galactic magnetic field strength is modeled as B = B_0 e^(−R/30 kpc − |z|/4 kpc). The observable synchrotron intensity is then obtained by integrating the emissivity along the line of sight:

I(ν) = (1/4π) ∫ds ∫dγ n_e(γ) P(ν, γ).   (11)

To conclude, the major ingredients of the model are an SDP propagation scheme, a nearby-source component that mainly accounts for the spectral bumps of the primary CRs, and secondary production around the acceleration sources, which explains the B/C hardening and the diffuse γ-ray excesses. Note that the spectral breaks of primary and secondary particles are likely a coincidence. With higher-energy measurements of the primary spectra by DAMPE (An et al. 2019; Alemanno et al. 2021) and CALET (Adriani et al. 2019, 2020, 2023), it was shown that the break rigidity is about 500 GV for protons, about 650 GV for helium, and 400–500 GV for carbon and oxygen nuclei. The break rigidity of B/C and B/O measured by DAMPE is about 200 GV (Alemanno et al. 2022). Therefore, the break energies of primary and secondary nuclei may indeed have different origins.

Galactic diffuse emission

Starting from the model setting described in Sec. 2, we adjust the model parameters to match the measured spectra of CR protons, positrons, CREs, and the boron-to-carbon ratio, as shown in Figure A1 in Appendix A. The diffuse emission from neV to PeV energies is then calculated and discussed in detail in what follows.

Fig. 1. The model-predicted synchrotron emission, compared with data compiled or processed in Orlando (2018), based on original measurements: radio surveys (green stars) (Guzmán et al. 2011; Landecker & Wielebinski 1970; Haslam et al. 1981b, 1982; Reich et al. 1999, 2001), the Planck synchrotron map (teal circles) (Planck Collaboration et al. 2016a), and WMAP (magenta arrows) (Bennett et al. 2013). In the left panel, the black solid line represents the overall intensity with a modulation potential of 550 MV. Orange and green mark the radiation from background and fresh interactions, respectively; the dotted, dashed, and long-dashed lines correspond to contributions by CREs in the energy ranges [0.1 GeV, 1 GeV], [1 GeV, 10 GeV], and [10 GeV, 100 GeV]. In the right panel, the total synchrotron fluxes for different solar modulation potentials are shown. Note that the magnetic field strength for each case is slightly re-scaled to better fit the wide-band data.
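Equations (10) and (11) lend themselves to a compact numerical sketch. The snippet below evaluates the pitch-angle-averaged single-electron kernel, assuming the Ghisellini et al. (1988) form reconstructed above and the conventional Larmor frequency ν_B = eB/(2π m_e c); the example electron energy and field strength are illustrative.

```python
import numpy as np
from scipy.special import kv

# Physical constants (cgs).
E_CH, ME, C, SIGMA_T = 4.8032e-10, 9.1094e-28, 2.9979e10, 6.6524e-25

def synch_power(nu, gamma, B):
    """Pitch-angle-averaged synchrotron power per electron (erg s^-1 Hz^-1),
    a sketch of Eq. (10) as reconstructed in the text."""
    nu_B = E_CH * B / (2.0 * np.pi * ME * C)   # Larmor frequency (Hz)
    x = nu / (3.0 * gamma**2 * nu_B)
    U_B = B**2 / (8.0 * np.pi)                 # magnetic energy density
    kernel = (kv(4/3, x) * kv(1/3, x)
              - 0.6 * x * (kv(4/3, x)**2 - kv(1/3, x)**2))
    return (3.0 * np.sqrt(3.0) / np.pi) * SIGMA_T * C * U_B / nu_B * x**2 * kernel

# Example: a ~10 GeV electron (gamma ~ 2e4) in a 3 microgauss random field.
nu = np.logspace(7, 11, 5)   # Hz
print(synch_power(nu, gamma=2.0e4, B=3.0e-6))
```

The observable intensity of Eq. (11) then follows by summing this kernel over the electron spectrum and integrating along the line of sight, with B(R, z) taken from the exponential field model quoted above.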
Diffuse radio emission

The calculated synchrotron fluxes are shown in the left panel of Figure 1, compared with the data taken from Orlando (2018), for a sky region slightly above the Galactic plane (10° < |b| < 20°). The model prediction is roughly consistent with the observations. At high frequencies the model flux is slightly lower than the Planck data. This may be resolved by a slight re-scaling of the magnetic field strength and by lowering the CRE fluxes below ∼1 GeV (see the discussion of the solar modulation uncertainty in the next paragraph). Note that the WMAP results appear higher than the Planck flux at a similar frequency (23 GHz). As discussed in Orlando (2018), there may be degeneracies among different components in these frequency bands, e.g., synchrotron, free-free, thermal dust, and anomalous microwave emission, and the WMAP synchrotron intensities may be over-estimated. We therefore use the WMAP results as upper limits (shown by arrows).

To better see the mapping between CREs and the synchrotron emission, we show the contributions from CREs in three energy bands: 0.1–1 GeV, 1–10 GeV, and 10–100 GeV. The emission below 1 GHz mainly comes from CREs with energies below 1 GeV, where the fluxes are uncertain due to solar modulation. In the right panel of Figure 1, we show the synchrotron fluxes for three values of the modulation potential: 550 MV as the benchmark, 400 MV, and 700 MV. For each modulation parameter, the source parameters are tuned to reproduce the measured CR spectra (see Table 1), and the magnetic field strength is slightly re-scaled to better fit the radio data. For Φ = 400 MV, the low-energy CRE fluxes are lower (Figure A1), and the resulting synchrotron spectrum is harder, which better matches the data over a wide frequency band.

Diffuse γ-ray emission

The comparisons between the model calculations and the observed γ-ray data in different sky regions are shown in Figure 2. The Fermi-LAT measurements in two latitude belts, 10° < |b| < 20° and 8° < |b| < 90°, for the whole longitude range (Ackermann et al. 2012), the ARGO-YBJ (Bartoli et al. 2015b) and Tibet-ASγ (Amenomori et al. 2021) measurements in the inner Galactic plane region (25° < l < 100°, |b| < 5°), the Milagro measurement in a smaller region of the inner Galactic plane (30° < l < 65°; Abdo et al. 2008), and the CASA-MIA upper limits in a region covering mostly the outer plane (50° < l < 200°; Borione et al. 1998) are employed for comparison. In general, the model reproduces the observations rather well. Only around ∼GeV energies are the model fluxes slightly high; we keep in mind that the uncertainty of the solar modulation may take effect in this energy range.
The agreement is satisfactory to an extent: at intermediate latitudes, the contributions from Galactic CRs and extragalactic backgrounds are comparable, while in the Galactic disk the emission is dominated by the Galactic background CR nuclei, with a small portion of electronic origin. The good consistency between the predictions and the γ-ray observations across the sky further demonstrates the reliability of our model. In addition, the Galactic background CRs and the fresh ones make the major contributions below and above tens of GeV, respectively. This feature is already reflected in the hardening of the CR spectra and secondary-to-primary ratios shown in Figure A1 in Appendix A. It suffices to say that our model has been tested in the γ-ray band from tens of MeV to hundreds of TeV.

Fig. 2. Diffuse emissions from radio to PeV γ rays. The black solid line is the total radiation. For the remaining lines, orange, pink, green, and teal denote the components from synchrotron, bremsstrahlung, ICS, and π0 decay; the dotted and dashed styles distinguish whether they are generated from background or fresh interactions; the magenta and gray solid lines are the contributions of the IGRB and of resolved sources detected by Fermi-LAT. Data are from Fermi-LAT (Ackermann et al. 2012), ARGO-YBJ (Bartoli et al. 2015b), Tibet ASγ (Amenomori et al. 2021), CASA-MIA (Borione et al. 1998), and Milagro (Abdo et al. 2008).

Fig. 3. Diffuse γ-ray emission of the inner region (left) and the outer region (right) of the Galaxy. Areas containing resolved sources detected by LHAASO-KM2A are masked; the masked region follows Fig. 1 of Cao et al. (2023). The red, black, and blue solid lines represent the results predicted by models with modulation potentials of 400 MV, 550 MV, and 700 MV.

Spatial distributions

The energy spectra from radio to PeV γ-ray emission have been calculated and compared with the measurements, with satisfactory consistency between the calculations and the γ-ray observations over the sky. Moreover, with the aim of a more comprehensive and detailed study of CR propagation, we should delve deeper into the spatial distribution of the emission, especially in different energy bands, to study the evolution of the CR composition. Figure 4 shows the diffuse emission skymaps at energies of 1.5 × 10⁻⁹ MeV, 1.4 × 10³ MeV, and 1.4 × 10⁶ MeV, where two distinct features are noticeable. First, the distribution appears rather smooth in the radio band, yet somewhat uneven in the γ-ray range. Second, only above TeV energies does the fresh component dominate. To explore these features a step further, the one-dimensional spatial distributions along Galactic latitude and longitude are presented in Figure 5, showing attributes similar to those in Figure 4. In addition, since the injected source distribution is assumed identical for the CREs and CR nuclei in our calculation, the heterogeneity of the γ rays must originate from the distribution of the ISM. Hopefully, the Fermi-LAT and LHAASO experiments will provide cleanly subtracted diffuse skymaps in the γ-ray range, and space-borne experiments will give similar measurements in the radio and X-ray bands in the future.

Comparison with other models

Similar to ours, modifications of the diffusion coefficient have been employed in other studies, e.g., a linear dependence on the distance to the Galactic center (Gaggero et al. 2015).
Alternatively, though minimized in our model, the influence of processes such as reacceleration and convection can also be highlighted (e.g., Orlando 2018; Qiao et al. 2022). Unresolved γ-ray sources have also been invoked to explain the high-energy GDE (Vecchiotti et al. 2022; Schwefer et al. 2023), but the corresponding radiation mechanism remains to be further understood. Some models take account of variables less recognized in this context, e.g., a spatially dependent X_CO (Gaggero et al. 2015), with uncertain factors such as Galactic chemical evolution. Predictions from some of the models mentioned above are drawn together with those from this work, as shown in Figure 6. At radio frequencies, we compare with the baseline DRE model of Orlando (2018) instead of the best-fit DRElowV model, which employed some arbitrary tuning of the Alfvén velocity for particular particle species. Our model gives relatively lower fluxes at low frequencies and can better match the data, primarily because the reacceleration in our case is not as strong as in the DRE model of Orlando (2018). Note that the simplified magnetic field model we adopt in this work may affect the results, as explored in Orlando & Strong (2013). As for the diffuse γ-ray emission, previous works tend to give higher fluxes above 100 TeV, to fit the ASγ data (Amenomori et al. 2021). Updated fitting to the LHAASO measurements may give slightly lower fluxes in the UHE band (Cao et al. 2023). Our model prediction differs from those works mainly in the TeV region, featuring a bump-like structure that could be tested with future measurements by LHAASO (Li et al. 2023). The one-dimensional distributions along Galactic latitude are compared with the results of the homogeneous diffusion model of Zhang et al. (2023b); the SDP model in this work gives a faster decrease from the disk to the pole regions, owing to slower diffusion in the Galactic disk.

Fig. 6. Comparison with other models (Orlando 2018; Lipari & Vernetto 2018; De La Torre Luque et al. 2023; Schwefer et al. 2023; Zhang et al. 2023b). Left: fluxes of synchrotron radiation; middle: fluxes of diffuse γ-ray emission; right: Galactic latitude distribution of the diffuse γ-ray emission at 1.5 × 10⁷ MeV.

Fig. A1. Data are from (Cummings et al. 2016), AMS-02 (Aguilar et al. 2015, 2018, 2019a,b), CREAM (Yoon et al. 2017), DAMPE (Ambrosi et al. 2017; Alemanno et al. 2022; An et al. 2019), and CALET (Adriani et al. 2022).
2024-02-22T06:45:08.419Z
2024-02-21T00:00:00.000
{ "year": 2024, "sha1": "20e4ebd891d93a019c8f62c1680effbf49f73f3e", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.3847/1538-4357/ad2a4e/pdf", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "20e4ebd891d93a019c8f62c1680effbf49f73f3e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
32496005
pes2o/s2orc
v3-fos-license
Determination of CO, H2O and H2 coverage by XANES and EXAFS on Pt and Au during water gas shift reaction

The turn-over-rate (TOR) for the water gas shift (WGS) reaction at 200 °C (7% CO, 9% CO2, 22% H2O, 37% H2, balance Ar) of 1.4 nm Au/Al2O3 is approximately 20 times higher than that of 1.6 nm Pt/Al2O3. Operando EXAFS experiments at both the Au and Pt L3 edges reveal that under reaction conditions the catalysts are fully metallic. In the absence of adsorbates, the metal-metal bond distances of the Pt and Au catalysts are 0.07 Å and 0.13 Å smaller than those of bulk Pt and Au foils, respectively. Adsorption of H2 or CO on the Pt catalysts leads to significantly longer Pt-Pt bond distances, while there is little change in the Au-Au bond distance with adsorbates. Adsorption of CO, H2 and H2O leads to changes in the XANES spectra that can be used to determine the surface coverage of each adsorbate under reaction conditions. During WGS, the coverages of CO, H2O, and H2 are obtained by linear combination fitting of the difference XANES, or ∆XANES, spectra. Pt catalysts adsorb CO, H2, and H2O more strongly than Au, in agreement with the lower CO reaction order and higher reaction temperatures.

Introduction

The water gas shift (WGS) reaction (eqn (1)) is an industrially important reaction for H2 production and CO removal. [1,2] It is mildly exothermic (∆H = −40.6 kJ mol⁻¹), and thus thermodynamically favored at lower temperatures. However, due to kinetic limitations, the reaction is typically conducted at temperatures between 200 and 450 °C. In commercial operation, WGS is typically a two-stage process, with a high-temperature stage (320–450 °C) employing iron oxide-based catalysts and a low-temperature stage (180–250 °C) employing copper-based catalysts.

CO + H2O → CO2 + H2   (1)

However, such two-stage process designs are not viable for small-scale applications. It has been predicted that until 2030, approximately 10% of the world's annual energy consumption will originate from the WGS reaction. [1] Thus, the development of new, higher-activity low-temperature WGS catalysts is of scientific and practical interest. Although commercial Cu/ZnO/Al2O3 WGS catalysts operate at low temperature, they suffer from poor stability under constant start-up/shut-down conditions and display poor sulfur tolerance. [3,4] In the past decade, Pt- and Au-based WGS catalysts have received intense attention as alternatives to Cu. [5–14] The former are not pyrophoric and require no exceptional pretreatment protocol. In general, Au catalysts operate at a lower temperature than Pt. Moreover, kinetic studies have shown different reaction orders on Au than on Pt. The reasons for the high rate per mole of Pt and Au compared to other metals, and the differences in their kinetics, are not fully understood.

In X-ray Absorption Near Edge Spectroscopy (XANES), the Pt and Au L3 edges correspond to the 2p → 5d electronic transition, and therefore the intensity and position of the XANES spectra are directly related to the 5d electronic structure. Most often XANES is used to determine the metal oxidation state, the fractions of metallic and oxidized metal, the formation of alloys, etc. [16,17] So far, no studies have reported analysis of the difference XANES spectra for identification of the surface coverage of adsorbates during the WGS reaction in order to elucidate the factors that govern turn-over-rates (TOR) and reaction orders.
In the present study, operando EXAFS and XANES have been used to identify the active sites of Pt nanoparticles supported on SiO2 and Al2O3 and of Au/Al2O3 WGS catalysts. Adsorption of CO, H2, and H2O induces changes in the shape and position of the Pt and Au L3 XANES spectra. The XANES difference, or ∆XANES, method is used to identify adsorbates and quantify their individual coverages under steady-state WGS reaction conditions (CO + H2O + H2). The CO surface coverages determined by FTIR agree well with those obtained from the ∆XANES analysis. The different reaction orders between Pt and Au are explained by the difference in CO surface coverage. In addition, it is suggested that the weak adsorption of CO leads to exposed sites on Au at low temperature, resulting in catalytic activity at lower temperature than on Pt. Finally, the differences in the adsorption properties of Pt and Au suggest potential opportunities for the development of new, low-temperature WGS catalysts.

Experimental

2.1 Catalyst preparation

4.3% Pt/SiO2: 50 g of Davisil silica (Sigma-Aldrich, 280 m² g⁻¹, 1.1 cm³ g⁻¹) was slurried in 250 mL of deionized (DI) H2O with about 2 mL of concentrated NH4OH; the pH was approximately 10. 5.0 g of Pt(NH3)4(NO3)2 (PtTA) was dissolved in 250 mL of DI H2O and 2 mL of concentrated NH4OH was added. The PtTA solution was rapidly added to the stirring silica. After 15 min, the silica was settled and the solution was decanted. The Pt/silica was slurried in 200 mL of cold DI H2O for 15 min and the solution was decanted. Following a second wash and decanting of the solution, the wet solid was filtered and washed on the filter with 200 mL of DI H2O. The catalyst was dried overnight at 100 °C and calcined at 225 °C in flowing air for 5 h. The elemental composition was determined to be 4.3 wt% Pt by ICP analysis. By means of the double-isotherm method, volumetric hydrogen and CO chemisorption indicated dispersions of 0.95 H/Pt and 0.55 CO/Pt.

2.6% Pt/Al2O3: A solution of 2.3 g of Pt(NH3)4(NO3)2 (PtTA) in 23 mL of DI H2O was added to 45 g of γ-alumina (200 m² g⁻¹, 0.5 cm³ g⁻¹). The catalyst was dried overnight at 110 °C and calcined in flowing air at 450 °C for 5 h. The elemental composition was determined to be 2.6 wt% Pt by ICP analysis. The H2 and CO volumetric chemisorption values were 1.0 H/Pt and 0.85 CO/Pt.

0.7% Au/Al2O3: The 0.71% Au/alumina catalyst (BC17) was provided by the World Gold Council.

2.2 XAS reactor description

2.2.1 Operando XAS reactor. The temperature of the operando, fixed-bed plug flow reactor was controlled in an Al heating block. A type K thermocouple was mounted inside the reactor at the top of the catalyst bed to measure the reaction temperature. The design details of the reactor and its validation as a true plug flow reactor can be found elsewhere. [20] The flow rates of gases (CO, H2, Ar) to the catalyst bed were controlled using mass flow controllers. The concentration of water was controlled by saturation of the reaction gas at different temperatures, thereby varying its partial pressure. The lines from the water saturator to the reactor were heat-traced to prevent water condensation. The exit water vapor was condensed in an ice bath prior to injection of the products into the on-line gas chromatograph (GC).
Approximately 10 mg of the Pt/Al2O3 catalyst was placed on a 3 mm plug of SiO2 powder (Davisil 644, 280 m² g⁻¹) supported on Pyrex wool in a quartz tube reactor (O.D. 0.25 in., I.D. 0.123 in.) to achieve a level catalyst bed and a clear visual distinction between the Pyrex wool, the silica, and the catalyst. The catalyst bed height was approximately 6 mm.

The catalysts were pre-reduced at 200 °C and 280 °C for Au and Pt, respectively. The concentrations of the individual reactants for the WGS reaction were 6.8% CO in Ar, 20% H2 in Ar, or 12.3% H2O in Ar. The total gas flow rate over the Pt/Al2O3 catalyst in each experiment was constant at 10 mL min⁻¹, yielding a flow-rate-to-catalyst-mass ratio of 0.91 mL min⁻¹ mg_cat⁻¹. The total gas flow rate over the Au/Al2O3 catalyst in each experiment was 15 mL min⁻¹, giving a flow-rate-to-catalyst-mass ratio of 0.27 mL min⁻¹ mg_cat⁻¹. The kinetic rates for WGS were determined by on-line gas chromatography at 120 and 200 °C for Au and at 200 and 280 °C for Pt, and agree with those determined in a laboratory reactor using larger amounts of catalyst.

XAFS measurements

X-ray absorption measurements were conducted on the insertion-device beamline of the Materials Research Collaborative Access Team (MRCAT, 10-ID) at the Advanced Photon Source (APS), Argonne National Laboratory. The ionization chambers were optimized for maximum current with linear response (ca. 10¹⁰ photons detected per second) using a mixture of N2 and He in the incident X-ray detector and a mixture of ca. 20% Ar in N2 in the transmission X-ray detector. A third detector in series collected a reference spectrum (Au or Pt foil) simultaneously with each measurement for energy calibration. The catalyst supports were spray-dried microspheres of about 100–200 mesh, which allowed the samples to be loosely packed without bed plugging. The reactor composition, diameter, and wall thickness were chosen to give a total absorbance (µx) at the Pt L3 (11.56 keV) or Au L3 (11.92 keV) edge between 1 and 2, and edge steps (∆µx) between about 0.3 and 0.5. Three spectra were obtained in quick-scan mode in about 4 min and were averaged for data analysis. For both reactors, the EXAFS and XANES spectra of the catalysts with adsorbed CO, H2, H2O, or the WGS gas mixture were obtained at room temperature, 120 °C, 200 °C, and, for Pt, 280 °C. The gases were purified to remove traces of oxidants (air) by passing through a Matheson PUR-Gas Triple Purifier Cartridge.

XAFS data analysis

Phase shifts and backscattering amplitudes were obtained from the Au and Pt foils for Au-Au and Pt-Pt scattering, respectively. Standard procedures based on the WINXAS 3.1 software were used to fit the XAS data. The EXAFS coordination parameters were obtained by a least-squares fit in q- and r-space of the isolated nearest-neighbor, k²-weighted Fourier transform data. The quality of the fits was equally good with both k¹ and k³ weightings. The EXAFS data and fits were obtained for the reduced catalysts at 200 °C for Au, at 280 °C for Pt, and at room temperature. A linear temperature dependence of the Debye-Waller factor (DWF) was assumed in order to calculate the values at intermediate temperatures. [21]
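As a small illustration of the linear-in-temperature DWF assumption, the sketch below interpolates σ² between the two temperatures at which it was actually fit; the σ² values themselves are hypothetical.

```python
import numpy as np

# Temperatures where the DWF was fit (RT and the high-temperature point).
T_fit = np.array([25.0, 280.0])           # deg C
sigma2_fit = np.array([3.5e-3, 6.0e-3])   # Angstrom^2, illustrative fitted DWFs

def dwf(T):
    """Debye-Waller factor at intermediate temperature, assuming linearity in T."""
    return np.interp(T, T_fit, sigma2_fit)

print(f"sigma2(120 C) = {dwf(120.0):.2e} A^2, sigma2(200 C) = {dwf(200.0):.2e} A^2")
```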
Difference (∆) XANES spectra

The normalized, energy-calibrated Pt L3 and Au L3 edge XANES spectra were obtained by standard methods. The XANES spectra were fit with a linear combination of the catalyst in He (no adsorbates) and with the individual gases. For CO and H2 the reference XANES spectra correspond to the coverage at room temperature, while for H2O the reference XANES spectra correspond to the relative coverage at room temperature on Au, but at 280 °C on Pt, i.e., the temperature of the WGS reaction.

The ∆XANES spectra were obtained by subtracting the XANES spectrum of the supported catalyst in He from that with the different reaction gases at various temperatures. For single gases (CO, H2, and H2O), the difference XANES spectra at elevated temperatures were fit using the room-temperature ∆XANES as references. This gives the relative fraction of the adsorbed gas at that temperature compared to the amount adsorbed at the reference temperature. For mixed gas compositions, the ∆XANES spectra were fit as a linear combination of the ∆XANES spectra of each single adsorbate; a sketch of such a fit is given below.

Laboratory testing of the WGS reaction

The catalytic activities of the Pt/Al2O3, Pt/SiO2, and Au/Al2O3 catalysts were determined using a plug-flow laboratory reactor, which has been described elsewhere. [22] For each experiment, 250 to 300 mg of the catalyst was pre-treated in situ by drying at 100 °C in a flowing inert atmosphere, followed by heating to 200 °C for Au and 300 °C for Pt at 50 mL min⁻¹ in 25% H2/75% Ar with a ramp rate of 5 °C min⁻¹. In this fashion, the Pt/Al2O3, Pt/SiO2, and Au/Al2O3 catalysts were reduced for two hours. After reduction, the catalysts were exposed to a standard WGS composition of 6.8% CO, 8.5% CO2, 21.9% H2O, 37.4% H2, and balance Ar. For all kinetic experiments, the total pressure was kept at ambient pressure with a total inlet flow rate of 75.4 mL min⁻¹. The Pt/Al2O3, Pt/SiO2, and Au/Al2O3 catalysts were stabilized at 300 °C, 260 °C, and 200 °C, respectively, under the standard gas composition for 15 to 20 h. Water in the reaction gas was condensed in an ice bath, and the dry exit stream from the reactor was periodically injected into an Agilent 6890 GC. The dry inlet gases were analyzed before each injection to determine the response factors of the detectors and ensure precise measurements. The GC is equipped with a thermal conductivity detector and a Carboxen 1000 column operating with helium as the carrier gas.

After the stabilization process, the reactor temperature was adjusted to lower the CO conversion below 10% and maintain differential conditions during the kinetic measurements. The rate of CO consumption was used to calculate the WGS rate under differential conditions. Reaction orders for the reactant and product gases were determined by varying each gas concentration independently; the four concentrations were varied over the ranges 4–21% CO, 5–25% CO2, 11–34% H2O, and 14–55% H2. To determine the apparent activation energy, the temperature was varied over a range of 30 °C with the catalysts exposed to the standard gas concentrations.
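A minimal sketch of the ∆XANES linear-combination fit described above: synthetic spectra stand in for the measured single-adsorbate references, and ordinary least squares recovers the fractional coverages. All shapes and coverages are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
E = np.linspace(-20, 40, 300)   # eV relative to the L3 edge

# Hypothetical reference dXANES shapes (CO, H2, H2O) at full reference coverage.
ref = np.stack([1.0 * np.exp(-((E - 5) / 6) ** 2),
                0.4 * np.exp(-((E - 12) / 8) ** 2),
                0.6 * np.exp(-((E - 2) / 10) ** 2)], axis=1)

true = np.array([0.7, 0.2, 0.3])   # "true" fractional coverages for the mixture
mixed = ref @ true + 0.005 * rng.standard_normal(E.size)   # noisy mixed dXANES

cov, *_ = np.linalg.lstsq(ref, mixed, rcond=None)   # linear-combination fit
print("fitted coverages (CO, H2, H2O):", np.round(cov, 2))
```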
Diffuse reflectance FTIR

Infrared spectra were obtained with a Nicolet Magna 550 FTIR augmented with a Thermo Spectra Tech Collector II diffuse reflectance (DR) attachment and equipped with a high-temperature, high-pressure environmental chamber. All DR spectra were collected in situ at a total flow rate of 50 mL min⁻¹ with approximately 10 mg of finely ground catalyst. The samples were each exposed to two different gas compositions: 6.8% CO, and a WGS reaction mixture of 6.8% CO, 37.3% H2, 4% H2O, and 8.6% CO2. The carrier gas, Ar, was fed through a Matheson PUR-Gas Triple Purifier Cartridge to remove trace O2, hydrocarbons, and moisture. DR spectra were collected at three temperatures, always in order of decreasing temperature: 200 °C, 120 °C, and RT for Au/Al2O3, and 280 °C, 120 °C, and RT for the Pt samples. The maximum temperatures of 200 °C and 280 °C were chosen to match the temperatures used to collect laboratory kinetic data for Au and for the two Pt samples, respectively. At each temperature, ten minutes were allowed for equilibration. Before each catalyst was exposed to the adsorbates, it underwent surface cleaning by oxidation with 10% O2 for 10 min at 200 °C and 300 °C for the Au and Pt samples, respectively, followed by reduction in 25% H2 for 30 min. Background spectra (256 scans) were collected in Ar at each adsorption temperature as the catalyst was cooled to RT from the reduction temperature. The catalyst was then heated to the highest adsorption temperature, the adsorbing gases were introduced, and a DR spectrum was collected with respect to the Ar background. The catalyst was cooled to the next adsorption temperature without changing the gas-phase conditions and a new spectrum was collected. Background spectra for the WGS reaction mixture were obtained while flowing Ar through a bubble saturator with H2O at RT; collecting backgrounds with H2O allows subtraction of the gas-phase H2O bands. Spectra were averaged over 32 scans at 4 cm⁻¹ resolution.

Peak fitting and data analysis were completed in CasaXPS v. 2.3.14, slightly modified for IR data compatibility. As adsorbed hydrogen is not visible in IR, and as adsorbed H2O is not distinguishable from gas-phase H2O, the IR characterization was limited to adsorbed CO. CO has been used extensively as a catalyst probe molecule, including during in situ WGS. [5,23–32] The spectra were background-subtracted using a straight-line background from 2144 cm⁻¹ (the valley between the P and R branches of the gas-phase CO bands) to approximately 1700 cm⁻¹. As the gas-phase CO bands overlap chemisorbed CO, a model peak for the P branch of gas-phase CO was created from a spectrum of the metal-free Al2O3 support under WGS conditions. The model peak was highly constrained in shape (FWHM) and position, allowing fits of the very small peaks (linearly adsorbed CO on Au) that would otherwise be obscured by the significantly larger gas-phase CO spectrum. For clarity, the CO gas peak was subtracted from the catalyst spectra. All fits assumed Gaussian peaks at positions known for CO adsorbed on Pt or Au. The fits were optimized via a Levenberg-Marquardt algorithm in the CasaXPS software.
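The Gaussian band fitting can be illustrated with a generic Levenberg-Marquardt least-squares routine (scipy's curve_fit defaults to Levenberg-Marquardt for unconstrained problems). The sketch below fits a single synthetic band near 2070 cm⁻¹, a typical linear CO-on-Pt position; the band parameters are illustrative, not the measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, area, center, fwhm):
    """Gaussian band parameterized by area, center, and FWHM."""
    sigma = fwhm / 2.3548
    return area / (sigma * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x - center) / sigma) ** 2)

# Hypothetical background-subtracted spectrum with a band near 2070 cm^-1.
wn = np.linspace(1900, 2144, 400)
rng = np.random.default_rng(1)
spec = gauss(wn, 4.0, 2070.0, 40.0) + 0.002 * rng.standard_normal(wn.size)

popt, _ = curve_fit(gauss, wn, spec, p0=[1.0, 2060.0, 30.0])
print(f"center = {popt[1]:.1f} cm^-1, FWHM = {popt[2]:.1f} cm^-1")
```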
Transmission electron microscopy

The Pt catalyst samples were dispersed in ethanol, sonicated for 10 min, dispersed on 200 mesh carbon-coated copper grids, and dried for 15 min at room temperature. Z-contrast imaging was performed using a JEOL JEM-2010F FasTEM electron microscope operated at 200 kV with an extracting voltage of 4500 V. Since most of the particles were circular in shape, an electronic grid was placed around each nanoparticle image in order to determine its diameter. The particle size distribution was obtained by measuring the diameters of approximately 1000 particles.

Catalyst characterization

It was previously reported that adsorption of Pt(NH3)4(NO3)2 on SiO2 at strongly basic pH, followed by calcination at 225 °C, gives 1–2 nm metallic particles at Pt loadings up to 2%. [33] For the higher loading of this Pt/SiO2 catalyst, similarly small sizes were obtained. Impregnation of PtTA on Al2O3 followed by calcination at 450 °C also gives small metallic nanoparticles upon reduction. Generally, Al2O3-supported catalysts are less prone to sintering than those supported on SiO2. Assuming an H-to-surface-Pt atomic ratio of 1, the dispersions estimated from the hydrogen chemisorption experiments of both catalysts are near 1.0, i.e., every atom is at the surface of the particles.

Dark-field STEM images are shown in Fig. 1(a) and (b) for Pt/Al2O3 and Pt/SiO2, respectively. For Pt/SiO2, the particle size distribution (not shown) indicates an average size of 1.7 nm, with about 5% of the particles larger than 2.5 nm. Similarly, the average particle size of Pt/Al2O3 was 1.5 nm, with less than 5% of particles larger than 2 nm.

Additional information on the Pt particle sizes was obtained by EXAFS spectroscopy at room temperature after the catalysts were reduced in 4% H2/He at 300 °C for 30 min. The isolated first-shell EXAFS spectra were obtained by a Fourier transform of the k²-weighted data from 2.75 to 12.2 Å⁻¹, followed by an inverse Fourier transform from 1.6 to 3.2 Å. The fit parameters were determined by fitting both the real and imaginary parts of the Fourier transform of the isolated k²-weighted EXAFS spectra and are summarized in Table 1. Assuming spherical nanoparticles, [34,35] the average sizes, determined from previous correlations of the coordination number, N_Pt-Pt, with dispersion, [36] give estimates of 1.5 nm for both Pt catalysts, in agreement with the STEM and hydrogen chemisorption size estimates.

For these small Pt nanoparticles without adsorbates, there is a contraction in the Pt-Pt bond distances (Table 1).

The 1% Au/Al2O3 catalyst was a commercial sample provided by the World Gold Council. Au is well known to adsorb little H2 or CO, ruling out the possibility of using chemisorption to estimate the dispersion and particle size. Determination of the Au particle size distribution by TEM was unsuccessful due to the poor contrast between the Au nanoparticles and the alumina support. Therefore, the particle size was determined by EXAFS spectroscopy. As shown in Table 1, an estimate from N_Au-Au also gives an average size of about 1.5 nm. In addition, the contraction of the Au bond distance to 2.77 Å is consistent with particles of this size. [36] Although there are small increases of about 0.02 and 0.03 Å upon adsorption of H2 and CO, respectively, the change is much smaller than that on Pt, and the Au-Au bond distance remains significantly shorter than in Au foil, e.g., 2.88 Å.
WGS reaction kinetics
Results of the kinetic measurements are summarized in Table 2. An activation energy of approximately 10 kJ mol−1 was observed for the Au/Al2O3 catalyst. Although this result is considerably lower than that for the Pt WGS catalysts, e.g., 70-80 kJ mol−1, it is typical of previously reported activation energies for Au/Al2O3. 37-39 The rate for Au WGS catalysts has been shown to depend strongly on the particle size. 40,41 Consequently, it is difficult to make a direct comparison between our measured rates and those of others. 42-44 Small variations are based on factors such as the concentration of feed gases, temperature, or composition of catalysts. For example, in the study by Grenoble et al., 43 the CO and H2O reaction orders were −0.21 and 0.75 for platinum and 0.74 and 0.13 for Au/Al2O3. However, in that study, hydrogen inhibition of the reaction was not taken into account.

Here we report the turn-over-rate (TOR) of Au/Al2O3 and those of Pt/Al2O3 and Pt/SiO2 with very similar particle size (and that of a commercial Cu catalyst with much larger size). The TOR is calculated as the moles of CO reacted per mole of surface metal (Pt, Au, or Cu) per second. At 200 °C the Au/Al2O3 has a TOR about 20 times that of Pt/Al2O3 and similar to that of a commercial Cu/ZnO/Al2O3 catalyst. In addition, Pt/SiO2 has a TOR about six times higher than that of Pt/Al2O3, which indicates the well-known support effect for the WGS reaction.

The H2O and CO2 reaction orders are very similar for the catalysts reported here. These catalysts, however, do display a variation in the H2 reaction order, especially for the commercial Cu catalyst. The largest difference between Au and Pt is the CO reaction order. In general, Au catalysts have a CO reaction order near unity, while those for Pt are closer to zero.

XANES spectra of adsorbates on Pt WGS catalysts
Typically, the L3 XANES spectra are used to determine the oxidation state, or the fraction of metallic and oxidized Pt. Smaller changes in the L3 edge XANES spectra, however, also occur with chemisorption of gases, e.g., H2 and CO. 15,16,45 Thus, such changes induced by reactant and product chemisorption can be used to determine adsorbate identity. In addition, the change in Pt L3 XANES intensity was shown to be linearly dependent on the amount of adsorbed H2. 18 Thus, the coverage of gases under WGS reaction conditions can also be determined.

3.3.1 CO adsorption. Adsorption of CO on Pt leads to significant changes in the position, intensity and shape of the Pt L3 XANES spectrum. Fig. 2(a) shows a comparison of the L3 edge XANES of 4.3% Pt/SiO2 in He at 280 °C and in 6.8% CO from room temperature to 280 °C. Upon adsorption there is a shift in the edge position to higher energy. In addition, there is an increase in intensity up to about 20 eV above the edge. With increasing temperature up to 280 °C, there is only a small decrease in intensity, suggesting that little CO desorbs at high temperature.

If one subtracts the XANES in He from that with adsorbed CO, the difference (or ΔXANES) shows how the shape and intensity of the edge change with adsorption of CO. Fig. 2(b) shows the ΔXANES spectra of CO adsorption at room temperature, 120 °C, 200 °C, and 280 °C for Pt/SiO2. A very similar series of ΔXANES spectra (not shown) is observed for Pt/Al2O3. It can be seen that the shape of the ΔXANES is very similar at all temperatures and that the spectra differ primarily in intensity. As the temperature increases, the magnitude of the ΔXANES decreases, corresponding to less adsorbed CO. Using the ΔXANES spectrum at RT as the reference, the relative fraction of CO at each temperature can be determined. The results of the ΔXANES fits are given in Table 3. Similar results for single adsorbates are obtained from the usual linear combination fit using the XANES spectra with He and with RT CO. Since CO saturates the surface of Pt nanoparticles at RT, the relative coverage also corresponds to the fractional surface coverage. Upon heating to 280 °C, the surface CO coverage is about 70% of that at RT. Very similar CO coverages are obtained on Pt/Al2O3.

3.3.2 H2 adsorption. Changes in the Pt L3 XANES with adsorbed hydrogen have been reported previously. 45-49 Fig. 3(a) shows a comparison of the Pt L3 XANES spectra of 4.3% Pt/SiO2 in He at 280 °C and in 20% H2 at room temperature, 120 °C, and 280 °C. Fig. 3(b) shows the corresponding ΔXANES spectra for Pt/SiO2. Similar changes occur on Pt/Al2O3 with H2 adsorption (not shown). As with CO, as the temperature increases, the magnitude of the ΔXANES spectra decreases. Using the ΔXANES spectrum at RT as the reference, the fraction of H2 at each temperature can be calculated.
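The relative-coverage fit just described (scaling the RT reference ΔXANES onto the higher-temperature spectrum) amounts to a one-parameter least-squares problem with a closed-form solution. The sketch below shows that calculation on synthetic spectra; the energy grid and line shapes are invented for illustration.

    import numpy as np

    def relative_coverage(delta_T, delta_ref):
        # least-squares scale factor mapping the RT reference ΔXANES onto the
        # spectrum at temperature T; with a saturated surface at RT this scale
        # factor is the fractional coverage
        delta_T = np.asarray(delta_T, dtype=float)
        delta_ref = np.asarray(delta_ref, dtype=float)
        return float(np.dot(delta_T, delta_ref) / np.dot(delta_ref, delta_ref))

    # synthetic spectra on a common energy grid near the Pt L3 edge (eV)
    energy = np.linspace(11550, 11620, 200)
    ref_rt = np.exp(-((energy - 11570.0) / 8.0) ** 2)   # stand-in RT reference
    obs_280 = 0.70 * ref_rt + 0.01 * np.random.randn(energy.size)

    print(f"relative coverage at 280 C: {relative_coverage(obs_280, ref_rt):.2f}")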
The fitting results from the ΔXANES spectra are given in Table 3. At 280 °C, the relative coverage decreases to about 27% on Pt/SiO2 and to 18% on Pt/Al2O3. Since at room temperature each surface Pt has one adsorbed H atom, the fractional fits at high temperature correspond to the H2 surface coverage. Compared to CO, H2 desorbs more easily, which is in agreement with its lower heat of adsorption.

3.3.3 H2O adsorption. Compared to CO and H2, adsorption of H2O gives smaller changes in the Pt L3 XANES. Fig. 4(a) compares the XANES spectra of Pt/SiO2 in He at 200 °C and with adsorbed H2O at two H2O vapor pressures (also at 200 °C). Since H2O condenses and adsorbs on the supports, H2O adsorption was conducted at elevated temperature. In addition to smaller changes in intensity, there is a slight shift to lower energy upon H2O adsorption. Fig. 4(b) shows the L3 ΔXANES of H2O adsorption at different H2O vapor pressures for Pt/SiO2. Consistent with the higher H2O vapor pressure, there is an increase in the ΔXANES intensity. Since the Pt L3 XANES intensity and position depend on the d orbital occupancy, these changes indicate that H2O is chemically bonded, i.e., some electron density is transferred between Pt and H2O, at this temperature. Since it was not possible to quantify the amount of chemisorbed H2O on the Pt nanoparticles separately from that physisorbed on the support, the relative coverage at 280 °C (given in parentheses in Table 3, relative to that at 280 °C in the absence of other WGS adsorbates) was determined for comparison with the H2O coverage under WGS reaction (discussed below).

XANES spectra of adsorbates on 1.5 nm Au WGS catalysts
Consistent with the lower adsorption capacity of Au compared to Pt, the changes in the XANES spectra with adsorbed gases are much smaller. 50,51 Because of these small changes, careful calibration of the reference foil and energy correction of the data files are required to avoid small artifacts in the ΔXANES spectra.

3.4.1 CO adsorption. Fig. 5(a) shows the Au L3 edge XANES spectra of Au/Al2O3 in He at 120 °C, in 6.8% CO at room temperature, in 6.8% CO followed by a He purge at room temperature, and in 6.8% CO at 120 °C. Compared to adsorption on metallic Pt nanoparticles, the changes due to CO adsorption are very small and there is no shift in the edge position. 50 Fig. 5(b) shows the ΔXANES spectra on Au/Al2O3 at room temperature in 6.8% CO, at room temperature with a He purge (0% gas-phase CO), and at 120 °C in 6.8% CO. In the ΔXANES it is evident that the changes in the XANES shape occur above the edge, i.e., at higher than 11.92 keV. With increasing temperature, there is less adsorbed CO, as expected. In addition, removal of gas-phase CO by purging with He also leads to a decrease in adsorbed CO, indicating that even at room temperature some CO is weakly bound.

The fits of relative CO coverage (compared to that at RT) from the ΔXANES spectra are given in Table 4. Very similar fits were obtained by using a linear combination of Au without adsorbate (He only) and Au with adsorbate at RT. At RT, approximately 40% of the adsorbed CO desorbs with a He purge, and at 120 °C the relative CO coverage (in flowing CO) is about 35% of that at RT. At 200 °C (spectra not shown), there is about 5% adsorbed CO on these Au nanoparticles.

3.4.2 H2 and H2O adsorption. The Au L3 XANES spectra of Au/Al2O3 in He at 120 °C, in 20% H2 at room temperature, and in 20% H2 at 120 °C are given in Fig. 6(a). Similar to the adsorption of CO, the changes in the XANES spectra upon H2 adsorption are very small, and there is no shift of the edge position. 51 The ΔXANES spectra for adsorbed H2 at room temperature and at 120 °C are shown in Fig. 6(b). The changes in the XANES spectra also occur above the edge. The fits of the relative coverages (compared to that at RT) are given in Table 4. Similar to CO, H2 binds weakly to Au, with a relative coverage of about 50% at 120 °C and 15% at 200 °C. Adsorption of H2O at RT and 120 °C on Au/Al2O3 also gives small changes in the XANES spectra [Fig. 7(a)]. The ΔXANES spectra are shown in Fig. 7(b), and the relative coverages are given in Table 4. The relative surface coverage decreases rapidly with increasing temperature, and little is adsorbed at 200 °C. Although the changes in the Au XANES with adsorbates are much smaller and indicate weaker adsorption energies than on metallic Pt, the XANES shows that at WGS reaction temperatures CO, H2 and H2O all chemisorb on the Au nanoparticle surface. In addition, despite the very low adsorbate coverage on Au at 200 °C, the WGS TOR is significantly higher than that on Pt, e.g., Table 2.
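The foil-based energy calibration mentioned above for the Au data is commonly implemented by locating the edge as the maximum of the derivative of the simultaneously measured foil spectrum and shifting the sample axis so that it lands on the tabulated edge energy (11,919 eV for the Au L3 edge is an assumed reference value). The sketch below illustrates the general procedure on synthetic data; it is not the authors' exact routine.

    import numpy as np

    AU_L3_EDGE_EV = 11919.0  # tabulated Au L3 edge energy (assumed reference value)

    def edge_position(energy, mu):
        # take E0 as the maximum of the derivative of the absorption edge
        return energy[np.argmax(np.gradient(mu, energy))]

    def calibrate(energy_sample, energy_foil, mu_foil):
        # shift the sample axis so the reference-foil edge lands at the tabulated value
        shift = AU_L3_EDGE_EV - edge_position(energy_foil, mu_foil)
        return energy_sample + shift

    # synthetic foil scan whose edge is miscalibrated by +0.4 eV
    e = np.linspace(11880, 11960, 800)
    mu_foil = 1.0 / (1.0 + np.exp(-(e - 11919.4) / 1.5))
    e_corrected = calibrate(e, e, mu_foil)
    print(f"applied energy shift: {e_corrected[0] - e[0]:+.2f} eV")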
FTIR of adsorbed CO
Since the infrared spectra of adsorbed CO are very similar for Pt/Al2O3 and Pt/SiO2, 23 only the results for Pt/SiO2 are presented here. Fig. 8 shows the diffuse reflectance IR spectra over the range from 2145 to 1700 cm−1 for 6.8% CO on 4.3% Pt/SiO2 at different temperatures. At RT, the data show three clear peaks: a sharp shoulder at 2088 cm−1, a strong peak at 2070 cm−1 with an asymmetric tail, and a broad low-intensity peak at 1763 cm−1. The largest peak, at 2070 cm−1, is assigned to linear-bonded CO, while the asymmetric tail likely represents a distribution of sites that bond CO in a bridged-bonded conformation. While bridged CO is generally expected between 1900 and 1800 cm−1, Sheppard and Nguyen propose that bridged-bonded CO species occur in the region from about 2000 to 1800 cm−1. 52 The low-frequency peak is assigned as three-fold bonded CO. 23-25,30-32,53 The origin of the sharp shoulder is still under debate in the literature; it is thought to be due to CO adsorbed in small islands or arrays, 31 or to CO adsorbed on Ptδ+. 29,30 As the temperature increases to 280 °C, the sharp shoulder shifts to 2078 cm−1, the linear peak shifts to 2060 cm−1, the bridged species becomes more prominent, and the three-fold bonded CO peak shifts to 1762 cm−1. It is well known that these shifts are due to a decrease in CO surface coverage, leading to less dipole-dipole coupling and a shift to lower frequency. 28,30 Under WGS conditions at 280 °C (Fig. 8b), there is a small decrease in the linear-bonded CO and small shifts in the sharp shoulder and in the intensity of the three-fold CO peak. The CO coverage measured by FTIR does not change significantly between CO alone and the WGS (CO + H2O + H2) reaction mixture.

Fig. 9(a) shows the DRIFTS spectra of CO on Au/Al2O3 from 2145 to 1950 cm−1 at different temperatures, with the gas-phase CO peak subtracted from the catalyst spectra. Since the peak intensities of CO on Au/Al2O3 are a factor of 10 smaller than those on Pt/SiO2, the gas-phase CO bands obscure the peaks of CO adsorbed on the Au nanoparticles. The subtraction process was used only in the presentation of the data, not in the data analysis. The difference spectra, however, allow for better determination of the peak shape, position and intensity. The linear CO adsorption peak, located at 2097 cm−1, is similar to previous studies on metallic Au, 26,27,54 and its position does not change with temperature. Fig. 9(b) shows the DRIFTS of adsorbed CO on Au under WGS reaction conditions. At both 200 and 120 °C the peak has red-shifted to 2094 cm−1, while at RT the peak has blue-shifted to 2100 cm−1. The blue-shift may result from competitive adsorption of H2O on the metal, as discussed below. At 200 °C the peak areas with 6.8% CO only and during WGS reaction are similarly small, each showing a coverage of about 5% of that for CO only at RT, while at 120 °C the relative coverage changes to 50% and 40% for CO only and under WGS reaction conditions, respectively. For the WGS gas mixture at RT, the relative coverage is also roughly 40%, a significant decrease from the higher CO-only coverage. The relative coverages determined by IR spectroscopy are very similar to those determined by the XANES and ΔXANES fits in Table 4.
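On Au, where the coverage is low enough that intensity effects can be neglected, the relative coverage follows directly from the ratio of integrated band areas, as used above. A minimal sketch with invented band shapes:

    import numpy as np

    def relative_coverage_ir(wn, band, wn_ref, band_ref):
        # ratio of integrated band areas; valid when coverage-dependent
        # intensity effects are negligible, as in the low-coverage Au case
        return np.trapz(band, wn) / np.trapz(band_ref, wn_ref)

    # hypothetical linear-CO bands on Au: RT reference and a 120 C spectrum
    wn = np.linspace(2050, 2140, 300)
    band_rt = np.exp(-((wn - 2097.0) / 6.0) ** 2)
    band_120 = 0.5 * np.exp(-((wn - 2094.0) / 6.0) ** 2)
    print(f"relative coverage at 120 C: "
          f"{relative_coverage_ir(wn, band_120, wn, band_rt):.2f}")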
Structure of the Pt and Au WGS catalysts
It has been previously reported that the electronic structures of the atoms in metal nanoparticles differ from the bulk due to the rehybridization of the spd orbitals. 55 The rehybridization results in an increase in the local electron density between metal atoms, which in turn leads to a higher bond order and subsequently a contraction of the metal-metal bond distances. 15,36,51,55,56 In the present study, the EXAFS experiments reveal that the reduced 1.6 nm supported Pt catalysts have a Pt-Pt bond distance of 2.69 Å (0.08 Å shorter than bulk), and the 1.4 nm supported Au catalyst has a Au-Au bond distance of 2.75 Å (0.13 Å shorter than bulk). This contraction, however, occurs only in the absence of adsorbates. Upon H2 or CO adsorption, the bond distance increases, as shown in Tables 1 and 3.

From fitting the XANES (or ΔXANES) with adsorbed hydrogen at different temperatures (RT, 120, 200 and 280 °C), the surface coverage was determined (Table 3). Over this temperature range, the EXAFS indicates that there is little change in the Pt-Pt coordination number, i.e., in particle size. While the size does not change, the Pt bond distance increases with decreasing temperature. Fig. 10 shows that the Pt-Pt bond distance is linearly correlated with the hydrogen coverage. In addition, extrapolation to zero coverage gives an adsorbate-free bond distance of 2.69 Å, which is the measured bond distance in He at RT and 280 °C (Table 1). It is thought that when H2 is adsorbed on Pt nanoparticles, the Pt-Pt valence electron density is shared by the Pt-H bond, making the Pt-Pt bonds weaker and the bond distance longer. 15,51,56

Fig. 10 Dependence of the Pt-Pt bond distance on H2 coverage; the surface coverage was determined in 20% flowing H2 at RT, 120, 200 and 280 °C.

In Table 3, it can be seen that the Pt-Pt bond distance also increases upon adsorption of CO. Because CO has a higher heat of adsorption, the surface coverage is higher than that of H2 at the same temperature. At the same surface coverage, however, the change in the Pt-Pt bond distance is similar for CO and H2. For example, at 70% surface coverage (120 °C for H2 and 280 °C for CO) on Pt/SiO2, the Pt-Pt bond distance is 2.73 Å for both adsorbates. At 280 °C under WGS reaction conditions, the Pt-Pt bond distance is larger than that in He but shorter than that of Pt foil. Thus, the Pt-Pt bond distance in small Pt nanoparticles depends on the fractional coverage of adsorbates, at least for CO and H2.
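The extrapolation to zero coverage behind Fig. 10 is an ordinary linear fit. The sketch below reproduces it with invented (coverage, distance) pairs chosen to resemble the reported trend; the printed intercept corresponds to the adsorbate-free bond distance.

    import numpy as np

    # invented (H coverage, Pt-Pt distance) pairs resembling the reported trend
    theta_h = np.array([1.00, 0.70, 0.45, 0.27])   # fractional coverage
    r_ptpt = np.array([2.75, 2.73, 2.71, 2.70])    # Angstrom, illustrative only

    slope, intercept = np.polyfit(theta_h, r_ptpt, 1)
    print(f"adsorbate-free Pt-Pt distance (theta = 0): {intercept:.2f} A")
    print(f"bond expansion per unit coverage: {slope:.3f} A")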
On Au nanoparticles the surface coverage of adsorbates is low. Thus, there is little change in the metallic bond distance in the presence or absence of adsorbates. The metallic bond distance is similar in He, H2, CO and under WGS reaction, and is significantly shorter than that of Au foil.

The EXAFS of both Pt and Au indicate that the active phase is metallic, and for each adsorbate, as well as under WGS reaction conditions, there is no indication of oxidized metal, as has been suggested for Au on CeO2. 11,13

The effect of adsorbates on the L3 XANES spectra
The Pt and Au L3 XANES correspond to the dipole-allowed 2p → 5d electronic transition and are most often used to determine the oxidation state of the metal. As shown in this and other studies, 15-18,45-51 the intensity and position of the L3 XANES spectra are affected by the surface coverage of adsorbates. Although one can determine the metal oxidation states from the K-edge XANES, similar changes in the XANES shape and intensity induced by adsorbates are not observed there. 57 For Pt, the changes in XANES position and intensity are sufficiently different to be used to identify the type of adsorbate. As shown in Fig. 11, the ΔXANES spectrum of H2O adsorption at 280 °C for Pt/SiO2 has a shape distinct from that generated by CO and H2 adsorption.

In an effort to understand these differences in shape, we have performed calculations using the CASTEP code to simulate the XANES spectra. 58 Briefly, the CASTEP simulation calculates the XANES (or EELS) spectra from matrix elements defined by an initial core state using a one-atom all-electron calculation and a final state based upon an "on-the-fly" pseudopotential calculation. A 1 × 1 unit cell was utilized in order to reduce the computational effort. Calculations were performed without the inclusion of a core hole, due to the electron shielding of platinum. The calculated spectra are all shifted in energy by a constant value (11,564 eV) so that the calculated edge onset is consistent with the experimental results. Gaussian broadening of 3.0 eV is applied to mimic the instrument broadening. We approximate the surface coverage by assuming one monolayer of CO in the 3-fold hollow sites of Pt(111) (this is the favored site using the PW-91 functional, due to the well-known over-coordination problem of GGA functionals 59). Since molecular water does not adsorb strongly on metal surfaces and has a calculated adsorption energy of 0.30 eV, 60 we model the effect of water adsorption as one monolayer of OH at atop sites on Pt(111). In both cases, we have chosen coverages that are substantially higher than the experimentally determined values. However, our models, although not quantitative, allow us to assess the effects of water and CO adsorption from a qualitative standpoint, allowing for the identification of trends.

Fig. 12(a) shows the simulated Pt L edge XANES (the calculations do not include spin-orbit coupling, so we cannot differentiate between the L2 and L3 edges) for a platinum atom at the surface of Pt(111), as well as for Pt with a monolayer of adsorbed CO and OH. The absorption spectrum shifts by 0.6 eV to lower energy with adsorbed OH and by 1.3 eV to higher energy with adsorbed CO. Fig. 12(b) shows the simulated ΔXANES spectra for CO and OH. It is clear that the simulated spectra replicate many of the features of the experimental spectra, showing the positive peak for OH and the negative peak for CO associated with the edge shift. Recent work from Schweitzer et al. 61 suggests that the position of the edge is directly related to the density of states at the Fermi level (and not to charge transfer effects). Fig. 13 shows the partial d-electron density of states for surface atoms of Pt with and without the OH and CO adsorbates. Upon adsorption of OH and CO, two effects are observed. First, the d-electron density is significantly depleted in the presence of an adsorbate. Second, the center of the d-band is shifted away from the Fermi level in the presence of the adsorbate. The d-band center shifts from −2.64 eV for Pt(111) to −4.17 eV for CO/Pt(111) and to −3.39 eV for OH/Pt(111). However, important differences exist as well. While the d-electron density is strongly depleted near the Fermi edge when CO is adsorbed, there is still significant density near the edge when OH is adsorbed. This implies that d-orbitals of different symmetry are involved in the bonding of CO versus OH to the Pt(111) surface. 62 In addition, in Fig. 12(a) there is a sharp increase in the XANES intensity above the edge when either CO or OH is adsorbed. We hypothesize that the increase in intensity is related to the presence of empty (presumably antibonding) states above the Fermi level that are created from the hybridization of d-states of the metal with adsorbate states. As proposed by Hoffmann, 63 both bonding and antibonding states will be created due to orbital overlap between filled states of the metal and both filled and empty states of the adsorbate. These hybrid states are now allowed transitions since they possess some d-symmetry.
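The post-processing applied to the calculated spectra (a constant energy shift plus 3.0 eV FWHM Gaussian broadening) can be written compactly as a convolution. The sketch below is one such implementation on a synthetic raw spectrum; it mirrors the described processing but is not the CASTEP code itself.

    import numpy as np

    def broaden_and_shift(energy, intensity, fwhm_ev=3.0, shift_ev=11564.0):
        # Gaussian convolution for instrument broadening plus a constant
        # energy shift to align the calculated onset with experiment
        sigma = fwhm_ev / (2.0 * np.sqrt(2.0 * np.log(2.0)))
        de = energy[1] - energy[0]
        kx = np.arange(-5.0 * sigma, 5.0 * sigma + de, de)
        kernel = np.exp(-kx**2 / (2.0 * sigma**2))
        kernel /= kernel.sum()
        return energy + shift_ev, np.convolve(intensity, kernel, mode="same")

    # synthetic raw spectrum on a relative energy grid (eV above the onset)
    e_rel = np.linspace(-10.0, 40.0, 1000)
    raw = (e_rel > 0).astype(float) * np.exp(-np.clip(e_rel, 0, None) / 15.0)
    e_abs, smooth = broaden_and_shift(e_rel, raw)
    print(f"spectrum now spans {e_abs[0]:.0f} to {e_abs[-1]:.0f} eV")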
Changes in the XANES spectra during WGS reaction
The Pt L3 XANES spectra at 280 °C under WGS reaction and under CO are given in Fig. 14(a). The similarity of these spectra indicates that the major surface species under WGS reaction is CO, at about the same coverage as with CO only. By subtracting the XANES spectrum under CO from that under WGS conditions, a small residual XANES feature is obtained, shown in Fig. 14(b). Although this peak is very small, it has the same shape as the ΔXANES of H2O. Using the ΔXANES of H2O at 280 °C as the reference, the fit indicates that there is approximately 10% as much adsorbed H2O under WGS as with H2O only. The absolute coverage is unknown, since it was not possible to quantitatively determine the amount of adsorbed H2O on the Pt nanoparticles at 280 °C. Much of the reduced H2O surface coverage is due to the high CO coverage, which is about 70%, essentially the same as for CO alone.

While the ΔXANES spectra on Pt are sufficiently different to identify the adsorbed species in a mixture, the changes on Au are much smaller, and the positions and shapes are more similar. As a result, identification of the adsorbed species is less reliable. Fig. 15 shows the ΔXANES for adsorbed CO, H2 and H2O at RT on Au/Al2O3. While there are subtle differences in shape between CO and H2 or H2O, the ΔXANES spectra of H2 and H2O are identical. Fig. 16 compares the ΔXANES of the WGS gases with that of CO only at RT. The small difference indicates a second contribution. A fit of the WGS ΔXANES indicates that there is about 65% relative CO coverage and 35% relative H2O (or H2) coverage (Table 4). The relative CO and H2O coverages decrease at 120 °C, and there is only a trace amount of adsorbed CO at 200 °C. Small amounts of H2O (or H2) are likely present at 200 °C, but are too small to detect. Despite the low surface coverage of reactants on Au, this catalyst has a significantly higher TOR than Pt.
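The decomposition of the WGS ΔXANES into single-adsorbate contributions, as in the 65% CO / 35% H2O fit reported above, is a two-component linear least-squares problem. A sketch with invented reference shapes:

    import numpy as np

    def decompose(delta_wgs, ref_co, ref_h2o):
        # linear least-squares fit of the WGS ΔXANES as a combination of the
        # single-adsorbate reference spectra; coefficients = relative coverages
        A = np.column_stack([ref_co, ref_h2o])
        coeffs, *_ = np.linalg.lstsq(A, delta_wgs, rcond=None)
        return coeffs

    # invented reference shapes on a common energy grid (eV)
    e = np.linspace(11550, 11620, 200)
    ref_co = (np.exp(-((e - 11572.0) / 7.0) ** 2)
              - 0.3 * np.exp(-((e - 11560.0) / 4.0) ** 2))
    ref_h2o = np.exp(-((e - 11566.0) / 9.0) ** 2)
    delta_wgs = 0.65 * ref_co + 0.35 * ref_h2o + 0.01 * np.random.randn(e.size)

    theta_co, theta_h2o = decompose(delta_wgs, ref_co, ref_h2o)
    print(f"relative CO coverage: {theta_co:.2f}, "
          f"relative H2O coverage: {theta_h2o:.2f}")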
Comparison of the amount of adsorbed CO by ΔXANES and FTIR
FTIR can be used to quantify the CO coverage on supported catalysts for which the integrated absorption coefficient (IAC) has been determined. However, the application of IACs is not without difficulties. Hollins and Pritchard 28 have summarized some of the corrections that are required in order to be quantitative. For example, there is a transfer of peak intensity from low frequency to higher frequency, a blue-shift due to increased dipole coupling as coverage increases, and an overall intensity loss at high surface coverage. 64 As a result, significant errors in estimates of the amount of adsorbed CO can occur at high coverage. For our samples, the intensity loss at high coverage is especially significant on Pt/Al2O3, but occurs to a lesser extent on Pt/SiO2. While this makes quantification difficult, these effects allow for some qualitative conclusions. For example, the dipole shift to higher wavenumber at lower adsorption temperature in Fig. 8 is an indication of an increase in CO coverage, despite the slight decrease in integrated area observed by us and by others in the literature. 30

On the Au sample, coverage-dependent effects are not observed, due to the low coverage at every temperature, and integrated areas can be used for quantification of the relative surface coverages. Using the FTIR CO coverage under 6.8% CO as a reference, we compute the Au CO coverages at different temperatures and gas mixtures to be very similar to those determined by XANES (or ΔXANES). At 200 °C on Au, IR, XANES and chemisorption agree that the CO coverage is low. Also, at lower temperatures there is competitive adsorption of CO with H2O, i.e., the CO coverage is lower with the WGS gas mixture than with CO only. The competitive adsorption of CO with H2O at low temperature is very different on Au versus Pt.

Although the amounts of adsorbed CO are similar by IR and XANES, DRIFTS affords the additional advantage of distinguishing between different CO adsorption conformations, and hence different CO adsorption sites. The advantage of the ΔXANES analysis is that one can identify and quantify IR-inactive adsorbates (such as H2) and strongly IR-absorbing molecules (such as H2O). Furthermore, the changes in the XANES imply bond formation at the catalytic site.

4.5 Implications for WGS catalysts
4.5.1 Identification of the active site in Pt and Au WGS catalysts. Although the WGS reaction conditions are strongly reducing, i.e., CO and H2, the active site for the WGS reaction has been suggested to be ionic Au. 11,13 While there is no evidence for Au oxide or ionic Au in the EXAFS of this catalyst, the increase in XANES intensity with adsorbates might reflect oxidation of a small number of metallic Au atoms under reaction conditions. A comparison of the ΔXANES for the catalyst oxidized in air and for the catalyst with adsorbed H2O at RT is shown in Fig. 17 and indicates that the shape and position for oxidized Au and for Au with adsorbed H2O are substantially different. In addition, as shown in Table 4, the amount of adsorbed H2O decreases with increasing temperature and is nearly completely desorbed at 200 °C. By contrast, the oxygen coverage on ionic Au does not change with temperature except at temperatures greater than about 350 °C. 65 Thus, the shape of the XANES spectra and the differences in chemical properties indicate that the changes in the Au XANES are due to adsorbate-metal bond formation rather than to oxidation of the metal nanoparticle surface under WGS conditions. 14

Fig. 18 shows a similar comparison of the ΔXANES of oxidized Pt and of Pt with adsorbed H2O. The oxidized Pt ΔXANES was obtained by air oxidation, at RT, of 9 nm Pt nanoparticles supported on SBA-15. Oxidation leads to surface PtO, i.e., Pt2+, with a metallic core. Similar to the effect of H2O on Au, the change in the ΔXANES intensity for H2O on Pt increases with increasing partial pressure and decreases with increasing temperature. By contrast, the amount of oxidized Pt does not change reversibly with O2 partial pressure and cannot be desorbed in He until temperatures greater than about 500 °C. 33 The conclusion from both EXAFS and XANES under operando conditions, i.e., at high reaction rate, is that metallic Au and Pt are the active sites for WGS in these catalysts.
4.5.2 The influence of adsorption on the observed WGS kinetics. The results from this study also indicate possible reasons why Au has a higher apparent TOR at low reaction temperature, a higher CO reaction order and, generally, operates at a lower temperature than Pt. Under WGS reaction conditions, the CO coverage on Pt is high, about 70% at 280 °C. Increasing the partial pressure of CO has a minimal effect on the surface coverage and rate; thus, the reaction order is near zero. Similarly, as one lowers the reaction temperature on Pt, the CO surface coverage increases to near saturation at room temperature. At low reaction temperatures, therefore, there are few sites available to adsorb H2O, and higher reaction temperatures are required to create exposed sites for H2O adsorption. Finally, since the CO surface coverage is high, especially at low temperature, increasing the CO partial pressure would most likely lead to a decrease in rate by complete saturation of the Pt surface. It is suggested that strong CO adsorption and high surface coverage lead to few exposed catalytic sites and contribute to the inhibition of the TOR at low temperature, the low reaction order, and the higher reaction temperatures required on Pt.

On Au, the heat of CO adsorption is much lower, leading to significantly lower surface coverage, more exposed catalytic sites, and a CO reaction order close to 1. Since the Au surface is not saturated, catalytic activity is possible at lower temperatures than for Pt. In addition, increasing the CO partial pressure would increase the surface coverage and reaction rate. At higher pressure, it may even be possible to conduct the WGS reaction at a lower temperature than was used here (1 atm and 200 °C).

There is a continual need to develop WGS catalysts with higher rates per unit volume that operate at lower temperature, where thermodynamics favors H2 production and lower levels of CO. The results from this study suggest that adsorption is an important factor in determining the apparent TOR, the CO reaction order, and the operating temperature of the reaction. It is expected that improved catalysts will be those having lower heats of CO adsorption than Pt. To be active, the next generation of catalysts will have partial CO coverage at low, or even room, temperature.
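The argument above — strong CO binding saturating Pt versus weak binding on Au — can be made concrete with a toy competitive Langmuir-Hinshelwood model. The rate expression and every equilibrium constant below are illustrative assumptions chosen only to reproduce the qualitative trend in apparent CO order; this is not the kinetic model of the study.

    import numpy as np

    def rate(p_co, p_h2o, k_co, k_h2o):
        # toy competitive Langmuir-Hinshelwood rate, r ~ theta_CO * theta_H2O
        denom = 1.0 + k_co * p_co + k_h2o * p_h2o
        return (k_co * p_co / denom) * (k_h2o * p_h2o / denom)

    def apparent_co_order(p_co, p_h2o, k_co, k_h2o, dp=1e-4):
        # numerical d(ln r) / d(ln P_CO) around the operating point
        r_lo = rate(p_co * (1.0 - dp), p_h2o, k_co, k_h2o)
        r_hi = rate(p_co * (1.0 + dp), p_h2o, k_co, k_h2o)
        return (np.log(r_hi) - np.log(r_lo)) / (np.log(1.0 + dp) - np.log(1.0 - dp))

    p_co, p_h2o = 0.068, 0.22  # atm, feed-like partial pressures
    print("strong CO binding (Pt-like): CO order =",
          round(apparent_co_order(p_co, p_h2o, k_co=30.0, k_h2o=5.0), 2))
    print("weak CO binding (Au-like):   CO order =",
          round(apparent_co_order(p_co, p_h2o, k_co=0.5, k_h2o=0.5), 2))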
Conclusions
Low-temperature Pt and Au WGS catalysts have been investigated by EXAFS and XANES spectroscopy in order to identify the structure of the active site and the origin of the high activity. Using a plug-flow, kinetic-EXAFS reactor, the EXAFS, XANES and kinetic measurements were made simultaneously. The EXAFS of both Pt and Au indicate that the active phase consists of fully reduced, metallic nanoparticles. At the L3 edge, surface-adsorbed species alter the position and shape of the XANES spectra. For Pt, subtraction of the adsorbate-free absorption edge from that with adsorbate gives a ΔXANES spectrum with a unique shape for each adsorbate. Thus, the Pt ΔXANES can be fit to identify the type and coverage of adsorbed species under reaction conditions. During WGS reaction at 280 °C, the CO surface coverage on Pt is approximately 70%, similar to that with CO only. While the primary surface species is CO, a small amount of adsorbed H2O is also observed. On Au, the changes in the XANES spectra are much smaller, consistent with a low coverage of reactants and products. At low temperatures, CO and H2O are adsorbed, but at reaction temperatures near 200 °C there is little adsorbed CO, H2 or H2O. Nevertheless, the Au TOR is significantly higher than that for Pt. The high TOR of Au is suggested to result from the weak adsorption of CO and the availability of free catalytic sites, even at low temperature, while on Pt the high CO surface coverage leaves few exposed active sites, resulting in low rates. The implication is that the enthalpy of CO adsorption is an important factor in the development of low-temperature WGS activity.
Table 1 EXAFS fit parameters for the reduced supported Pt and Au catalysts under different treatment conditions at room temperature.

Table 2 Summary of the catalytic performance of the supported Pt and Au catalysts and a commercial CuZn catalyst. (a) Temperature at which the reaction orders were determined. (b) Dispersion measured from EXAFS. (c) Dispersion measured by CO chemisorption. (d) CO chemisorption after N2O oxidation. 66 (e) Rates calculated at 200 °C, 7% CO, 22% H2O, 8.5% CO2, 37% H2.

Table 3 Temperature-dependent EXAFS and XANES fit parameters for the Pt WGS catalysts with adsorbed gases.

Table 4 Temperature-dependent EXAFS and XANES fit parameters for the Au/Al2O3 catalyst with adsorbed WGS gases.
Establishment and Characterization of a Patient-Derived Xenograft Model of Non-Small-Cell Lung Cancer Derived from Malignant Pleural Effusions

Purpose: Non-small-cell lung cancer (NSCLC) comprises approximately 80% of all lung malignancies. The 5-year survival rate of patients with advanced lung cancer who have lost their chance of surgery is approximately 15%. Suitable animal models are important for screening individualized treatment plans for patients with lung cancer, evaluating the pre-clinical efficacy of new drugs, and conducting basic research. Patients and Methods: In this study, we collected malignant pleural effusion (MPE) samples from 31 patients with NSCLC, successfully constructed 11 NSCLC patient-derived xenografts (PDXs), and analyzed the factors affecting their successful establishment. Primary PDX tumors were characterized using histological analysis, immunohistochemistry, short tandem repeat (STR) profiling, and cytogenetic analysis. Results: The PDXs preserved the histopathology and protein expression patterns of the parental tumors. STR analysis revealed that the PDX tissue and the tumor tissue were of the same individual origin. Statistical analysis showed that the survival time of patients reflected the malignant degree of the MPEs to a certain extent, thus affecting the establishment of PDXs. However, the age, gender, and clinical and biochemical indicators of the patients did not affect the establishment of the PDX models. Conclusion: These data suggest that the established NSCLC PDXs preserved the molecular characteristics of the primary lung cancers and can serve as a new tool to elucidate the pathogenesis of tumors, explore new treatment methods, and support the research and development of new tumor drugs.

Introduction
In February 2018, the National Cancer Center reported lung cancer as the leading cancer in China in terms of morbidity and mortality, with approximately 782,000 cases and 626,000 deaths annually. 1 Non-small-cell lung cancer (NSCLC) accounts for approximately 80% of all lung malignancies. Chemotherapy-based comprehensive treatments, including radiotherapy and targeted therapy, are usually adopted for advanced lung cancer when the opportunity for surgery has been lost. The 5-year survival rate of these patients is approximately 15%, and the expected survival time is short. 2-4 Malignant pleural effusions (MPEs) are a common complication of advanced lung cancer and a manifestation of locally advanced disease. Chemotherapy, targeted therapy, and immunotherapy are important treatments for advanced NSCLC. 5,6 At present, most tumors are treated following the standardized schemes in industry guidelines. However, given the heterogeneity of tumors, different tumor types, or different patients with the same tumor type, exhibit various sensitivities to drugs, and the treatment effects vary greatly. 7,8 Suitable animal models are required to screen individualized treatment plans for patients with lung cancer, evaluate the pre-clinical efficacy of new drugs, and conduct basic research.

Patient-derived xenografts (PDXs) are transplanted tumor models formed by implanting tissue blocks, primary cells, or circulating tumor cells derived from tumor patients into immunodeficient mice. 9 PDXs are more predictive of clinical outcomes than cell line-derived xenografts. 10-14 Tissue fragments from tumorectomy or biopsy are usually used to construct PDXs, but they have limited availability or cell viability and cannot be fully preserved. 15-18
Limited studies have shown that tumor cells from the pleural effusion of patients with NSCLC can be easily separated, expanded, and cultured efficiently. MPEs may be an excellent source of tumor-initiating cells because these cells can effectively propagate in vitro and in vivo and can reproduce the natural heterogeneity of tumors. 19-21 In this study, we used MPEs as the source of tumor cells for PDX establishment. We collected 31 MPE samples of NSCLC and successfully constructed 11 NSCLC PDXs. The age, gender, clinical and biochemical tests, survival time, and other factors of the patients with MPEs were used to analyze the variables affecting the tumor formation rate. The results of this study not only provide a reference for the future establishment of NSCLC PDXs with MPEs as the tumor cell source but also lay a foundation for the development of PDX models of malignant tumors accompanied by pleural effusion and ascites. In addition, the established PDXs can be used to clarify tumor pathogenesis, explore new treatment methods, understand the drug resistance mechanisms of NSCLC, test drug sensitivity, and develop new tumor drugs.

Patient Characteristics
From September 2017 to January 2019, patients with NSCLC treated in Wuhan Pulmonary Hospital were included in the study, and patient data (Supplementary Table S1), including gender, age, disease stage, clinical examination and biochemical data, survival period, and immunohistochemistry, were collected prospectively. This study was approved by the Clinical Research Ethics Board of Wuhan Pulmonary Hospital (WPH201710), and all patients provided written informed consent. All protocols adhered to the tenets of the Declaration of Helsinki. All animal studies were approved by the Institutional Animal Care and Use Committee of Nanchang Royo Biotech Co., Ltd (RYE2017090501). Standard animal care and laboratory practices followed the "Guidelines for the Care and Use of Laboratory Animals" (National Research Council, 8th edition, 2011).

Collection and Pretreatment of MPEs
In accordance with the TNM pathological classification of patients with lung cancer, we selected NSCLC patients with TNM grade IV disease and MPEs for enrollment. MPEs were collected from patients with advanced lung cancer by thoracic puncture under strictly sterile conditions, and all MPEs were confirmed as NSCLC by exfoliative cytology. The collected MPE samples were transported to Nanchang Royo Biotech Co., Ltd. at 2 °C-8 °C within 72 h. A 10 µL aliquot of the cell suspension in phosphate-buffered saline (PBS; PB2004Y, China) was mixed with 10 µL of 0.4% trypan blue dye to assess cell number and viability. The samples were diluted with PBS according to sample viscosity. The MPE samples were centrifuged at 300 × g for 10 min to enrich the cells. The precipitate was collected and resuspended in PBS, and mononuclear cells were isolated using a lymphocyte separation solution (LTS10771, TBD Science, Tianjin, China) in accordance with the instructions, together with PBMC isolation tubes (601001, TBD Science, Tianjin, China). The interphase was collected, resuspended in PBS, and then centrifuged at 400 × g for 10 min to collect the cells. The collected cells were counted, and their viability was tested. The cell suspension was then transported to the animal room at low temperature (2 °C-8 °C) for inoculation.
Establishment of PDXs
All animals used in this study were 6-8-week-old female BALB/c nude mice (GemPharmatech Co., Ltd., China). The mice were housed in the animal facility of Nanchang Royo Biotech Co., Ltd. under specific pathogen-free conditions, fed a commercial mouse diet, and kept on a 12 h light-dark cycle. A 1:1 mixture of cell suspension and Matrigel (Corning 354234) was injected subcutaneously into the right forelimb region of each mouse in a 200 µL volume containing 1 × 10^7 cells. The tumor volume was measured with a vernier caliper twice a week and calculated as V = A × B²/2, where V is the tumor volume, A is the longest tumor diameter, and B is the shortest tumor diameter. When the tumor volume of an NSCLC PDX reached 1000-1500 mm³, the tumor tissue was surgically stripped, and the mouse was euthanized. The surgically removed tumor tissue was cut into 2 mm × 2 mm × 2 mm pieces with a surgical blade and inoculated aseptically into the subcutaneous tissue of the right forelimb of new immunodeficient mice. The tumors were passaged to P4 using the same method for subsequent experimental study (Figure 1). Animals showing signs of skin ulcers, hunched posture, weight loss, vocalization, irritability, or lack of grooming were carefully monitored and euthanized. Euthanasia was performed with carbon dioxide.

Histological and Immunohistochemical (IHC) Analyses
Tissue samples used for hematoxylin-eosin (H&E) staining and immunohistochemistry were derived from lymph node metastases and PDXs of lung cancer patients with MPE (No. 11 and No. 27). The tumor tissues and PDXs were fixed in formalin and embedded in paraffin. Slides (4 µm) were prepared and stained with H&E for pathological evaluation. The tissue sections were incubated with Cytokeratin 7 (CK7), P40, and Thyroid Transcription Factor-1 (TTF-1) antibodies (Zhong Shan-Golden Bridge Biological Technology Co., Ltd., Beijing, China) at 4 °C overnight. A horseradish peroxidase-conjugated IgG secondary antibody was applied at 37 °C for 15 min, and staining was detected by the diaminobenzidine reaction.
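A minimal sketch of the tumor volume formula used above, with hypothetical caliper readings:

    def tumor_volume(a_mm, b_mm):
        # V = A x B^2 / 2, with A the longest and B the shortest diameter (mm)
        return a_mm * b_mm ** 2 / 2.0

    # hypothetical caliper readings; passage occurs at 1000-1500 mm^3
    length, width = 14.0, 12.0
    volume = tumor_volume(length, width)
    print(f"tumor volume: {volume:.0f} mm^3 -> ready for passage: "
          f"{1000 <= volume <= 1500}")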
Short Tandem Repeat (STR) Profiling and Cytogenetic Analysis
We used the QIAamp DNA FFPE Tissue Kit (#56404, Qiagen, China) to isolate DNA from the formalin-fixed, paraffin-embedded tissues. The slides were dewaxed with xylene and then washed with ethanol to remove the xylene. The samples were lysed overnight under denaturing conditions with proteinase K and then incubated at 90 °C to reverse the formalin cross-linking. In a suitable buffer, the DNA was bound to a QIAamp MinElute column, residual contaminants were washed away, and finally the DNA was eluted. A multiplex polymerase chain reaction (PCR) amplification system (CELL STR System, Beijing HKgene Technology Co., Ltd.) was used to analyze 20 STR loci and 1 sex locus. The PCR products were analyzed with an ABI 3130xl DNA Analyzer (Applied Biosystems), and the results were analyzed with GeneMapper ID-X v1.2 (Applied Biosystems) software.

Statistical Analysis
The association of PDX establishment with covariates was statistically analyzed. Successful construction of a PDX was set as the terminal observation event, and factors such as gender, survival time, and clinical examination and biochemical data were considered covariates that might affect the tumor formation rate. Categorical variables were analyzed by the chi-square test or Fisher's exact test, whereas continuous variables were first tested for conformity to the normal distribution to determine whether the t-test or the Mann-Whitney U-test should be used. The log-rank test was used to compare survival curves. Cox proportional hazards regression was used for univariate and multivariate survival analyses. All statistical analyses were performed using SPSS version 23.0. All P values were two-sided, and statistical significance was set at P < 0.05.

Establishment of PDXs of NSCLC
In this study, we collected 31 MPE samples from patients with TNM grade IV NSCLC with MPEs and successfully constructed 11 NSCLC PDXs, for a tumor formation rate of 35.5%. The PDXs were established in nude mice and successfully passaged for further analyses. We selected the tumor tissues of two patients and their corresponding PDX models for histological and IHC analyses; patient No. 11 is presented in the Results section, and patient No. 27 in the Supplementary Materials (Supplementary Figures S1 and S2).

PDXs Preserve the Histopathology and Protein Expression Pattern of the Parental Tumor
Evaluation of the patient tumor showed irregular glandular tubular cell nests in the fibro-fatty tissue. Several were papillary; the epithelial cells were crowded, the nuclei were large and vacuolated, and nuclear divisions were easily observed. Coagulative necrosis was found in certain areas (Figure 2). Evaluation of the PDX tumors showed a typical adenocarcinoma area under the microscope; the cancer tissue was solid and nested, and its morphology was similar to that of the original tumor (Figure 2). In summary, the established PDXs were consistent with the histological characteristics of the original tumors.

CK7 and TTF-1 are common markers of lung adenocarcinoma, and P40 is a common marker of lung squamous cell carcinoma. CK7, P40, and TTF-1 immunohistochemistry was performed on the patient samples and PDXs. The IHC results of the PDXs and the NSCLC patients were identical: CK7(+), P40(−) and TTF-1(+) (Figure 3). The negative P40 result ruled out the possibility of lung squamous cell carcinoma, and the positive CK7 and TTF-1 findings confirmed that the established PDXs and the original tumor tissues were lung adenocarcinoma. The established PDXs thus maintained the IHC characteristics of the patients with NSCLC. In summary, the established PDXs were consistent with the histological features and protein expression patterns of the original tumors.

STR Analysis Revealed PDX Tissue and Tumor Tissue of the Same Individual Origin
STRs, also known as microsatellite DNA, are mainly used in genetic linkage map analysis, family identification, identity authentication, and other fields. The DNA samples of patients and PDXs were tested for STR genotyping in accordance with the above steps. The STR data showed no cross-contamination by other human cells. The STR analysis showed that the two STR datasets conformed to the laws of inheritance and can be judged to be of the same individual origin (Figure 4).

Characteristics of Patients and MPEs Did Not Affect the Establishment of PDXs
The PDX tumor formation rate was compared across these characteristics to determine whether any of them influenced the successful establishment of PDXs (Table 1). The results based on age, gender, tumor metastasis, and the color, transparency, cell number, total protein content, lactate dehydrogenase (LDH) content, and carcinoembryonic antigen content of the MPEs showed no significant difference in the PDX tumor formation rate. Therefore, the age, sex, clinical biochemical indicators, characteristics of the MPEs, and other factors did not affect the establishment of the PDXs.
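As an illustration of the contingency-table testing described in the Statistical Analysis section, the sketch below runs Fisher's exact test on a hypothetical 2 × 2 table of engraftment versus a binary patient characteristic; the counts are invented, not study data.

    from scipy.stats import fisher_exact

    # hypothetical 2x2 table: rows = covariate present / absent,
    # columns = PDX established / not established
    table = [[7, 10],
             [4, 10]]
    odds_ratio, p_value = fisher_exact(table)
    print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.3f}")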
Correlation of Engraftability and Clinical Outcome
As a key point, we examined whether the survival of NSCLC patients with MPEs affected the establishment of PDXs. Except for one patient who could not be followed up because of loss of contact, we analyzed the survival of 10 (33.3%) patients in the tumor group and 20 (66.7%) patients in the nontumor group. The survival time of patients in the tumor group (median survival = 3 months, 95% confidence interval (CI): 1.5-4.5) was significantly shorter than that of patients in the nontumor group (median survival = 16 months, 95% CI: 7.0-25.0); in the log-rank test, P = 0.031 (P < 0.05; Figure 5, Table 2). Thus, the survival time of patients was evidently related to the establishment of PDXs.

Discussion
The establishment of an appropriate preclinical model is essential for translational cancer research. As a preclinical model, the PDX has shown advantages in drug screening, biomarker development, and co-clinical trials. 22-24 In recent years, the number of lung cancer PDXs has gradually increased. Previous studies used resected or biopsy tumor tissues to construct PDXs, but the present research used the MPEs of patients with lung cancer, because most patients with MPEs lose the opportunity for an operation, and surgical samples cannot be obtained at diagnosis. Moreover, the number of tumors available, including primary NSCLC, and the take rate of transplanted tumors are limited, and surgical material must be processed quickly, usually with a limited amount of raw tumor material. However, studies have shown that tumor cells from the pleural effusion of patients with NSCLC can be easily isolated, expanded, and cultured efficiently. Therefore, we used MPEs as the source of tumor cells to establish NSCLC PDXs and achieved good results.

Numerous studies have confirmed that PDXs preserve the morphological characteristics and protein expression patterns of tumors well, and our experimental results are consistent with their findings. 25-27 Therefore, in this study, we selected the tumor tissues of two patients and their corresponding PDXs for histological and IHC analysis. The H&E staining results showed that the PDXs preserved the histological appearance of the primary tumors. Different pathological types of cancer have various clinical indicators. The IHC markers selected in this study included CK7, P40, and TTF-1. CK7 is the most sensitive immunological marker for lung adenocarcinoma, with a positive rate of nearly 100%; therefore, CK7 is the first choice for the identification of primary lung adenocarcinoma. TTF-1 positivity also often indicates lung adenocarcinoma, whereas squamous cell carcinoma is generally negative, and P40 is a common marker for lung squamous cell carcinoma. The PDX tissue and the original tumor were P40 negative, excluding squamous cell carcinoma. Considering the immunohistochemistry results, we determined that the PDX tissue and the original tumor were typical adenocarcinoma, and both presented consistent IHC features.
The tumor tissue used for the H&E staining and IHC analysis in this study was derived from lymph node metastases of patients with lung cancer instead of from MPEs, to obtain accurate results. In addition, our models used free cancer cells extracted from MPEs, which had been in a liquid environment. However, after the isolated cancer cells formed a solid tumor in the mouse, typical adenocarcinoma areas were still observed.

The survival period of patients significantly affected the establishment of PDXs. The survival period itself was our key indicator before starting the study. We speculated that the survival period of patients comprehensively reflects the degree of tumor malignancy to a certain extent: the higher the degree of malignancy, the higher the chance of tumor formation. In addition, the statistics showed that whether a PDX was established is a risk factor for survival. This finding inversely proves that the more malignant the tumor cell samples are, the shorter the survival time of the patient. Thus, the survival time of patients whose PDXs were successfully established was significantly shorter than that of patients whose PDXs failed. The color and transparency of the MPEs were also of concern to us. We speculated that bloody pleural fluid may be caused by the malignant degree of the tumor itself, but the experimental results did not support this assumption. In addition, the cell count of the extracted MPEs did not affect the establishment of PDXs. We had speculated that the cell count of the MPE samples could reflect tumor malignancy to a certain extent; therefore, the cell count of the MPEs was determined, but no statistical difference was found. We speculate that the small sample size is one of the reasons. In addition, MPEs contain neutrophils, lymphocytes, mesothelial cells, and so on; because the ratio among these cell types differs between patients, the cell count of an MPE does not fully reflect the number of cancer cells, which may also explain the lack of a statistical difference. This speculation can be explored further in follow-up research.

In this experiment, we used a lymphocyte separation solution to separate the mononuclear cells, that is, the interphase, from the MPEs. These mononuclear cells comprise not only cancer cells but also epithelial cells, inflammatory cells, and so on. We did not further isolate the cancer cells; as a result, a mixture of cells was inoculated into the immunodeficient mice. The advantages of this process are simple operation and low cost. However, whether the admixed nontumor cells affect the tumorigenesis rate of the PDXs is unclear. In subsequent experiments, magnetic bead purification or flow sorting could be used to isolate the cancer cells before constructing PDXs, which would allow the influence of nontumor cells in the mononuclear fraction on the tumor formation rate to be explored.

DNA was extracted from the established P4 tissues, amplified by PCR, and analyzed by electrophoresis (Supplementary Table S2). The samples tested contained genes from both mice and humans (Supplementary Figure S3). Thus, the xenotransplantation models we established were derived from human tissues. To further verify the genetic relationship between the PDX samples and the patient tissue, we selected the tissue of patient No. 11 and the corresponding PDX model for STR testing. The STR results showed that the STR data of the PDX and the primary tumor conformed to genetic rules, and they can be judged to be from the same individual source.
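The log-rank comparison reported above can be reproduced in code; the sketch below uses the third-party lifelines package (an assumption — SPSS was used in the study) on invented survival times shaped like the reported medians.

    import numpy as np
    from lifelines.statistics import logrank_test

    # hypothetical survival times (months) and event indicators (1 = death observed)
    t_engrafted = np.array([2, 3, 3, 4, 5, 1, 3, 6, 2, 4])
    t_failed = np.array([10, 16, 18, 12, 20, 24, 9, 15, 16, 30,
                         14, 22, 8, 17, 19, 25, 11, 13, 21, 16])
    e_engrafted = np.ones_like(t_engrafted)
    e_failed = np.ones_like(t_failed)

    result = logrank_test(t_engrafted, t_failed,
                          event_observed_A=e_engrafted, event_observed_B=e_failed)
    print(f"log-rank P = {result.p_value:.4f}")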
Conclusion
H&E staining, IHC, PCR, and STR analysis of the patient samples and the tumor-bearing mouse samples revealed similar morphological and molecular characteristics, and the samples can be judged to be of the same individual origin. In addition, patient survival was negatively correlated with the establishment of PDXs, whereas the characteristics of the patients and MPEs did not affect PDX establishment. Our research confirms that MPEs are a good source of tumor cells and provides a reference for the establishment of NSCLC PDXs for studying the pathogenesis and treatment of the disease.
Cost of illness of patients with small fiber neuropathy in the Netherlands

Small fiber neuropathy societal costs were estimated at €148 million using daily-practice small fiber neuropathy patient data. Health-related quality of life was the key factor associated with health care and societal costs.

Introduction
Small fiber neuropathy (SFN), with a prevalence rate of 53 per 100,000 inhabitants, 25 is a disorder of the thinly myelinated Aδ fibers and unmyelinated C fibers, clinically dominated by neuropathic pain and autonomic complaints. 12,29 The diagnosis of SFN is based on at least 2 small nerve fiber-related clinical signs in the patient, normal nerve conduction studies (NCS), and an abnormal intraepidermal nerve fiber density (IENFD) in skin biopsy and/or abnormal quantitative sensory testing (QST). 11,17 Diabetes mellitus, autoimmune diseases, and sodium channel gene mutations are the most common conditions observed in patients with SFN, but in 53% of cases, no underlying condition is found. 8 In addition to the initial treatment of the underlying condition, neuropathic pain treatment is needed, 15 but generally, these treatments yield disappointing results. 15,29 Severe SFN leads to a reduced quality of life (QoL), 2 with commonly associated anxiety and depression greatly interfering with patients' ability to function. 18 Higher age and a higher number of comorbidities are prognostic factors for higher health care and productivity costs. 9,32

The US annual total health care and patient and family costs of idiopathic painful neuropathy with SFN involvement were estimated in 2012 at $8055 (€7403) per patient. 27 Total costs of work productivity loss were estimated to be $13,733 (€12,621) per patient, due to a poorer health status, worse sleep outcomes, and loss of productivity. 27 A significant association was found between health care and patient and family costs and pain severity, but no statistically significant associations were found for productivity costs. 27 However, because of the differences between the United States and the Netherlands in how painful idiopathic SFN is diagnosed, the fact that fewer patients in the United States were in paid employment, and the differences between the healthcare systems of the United States and the Netherlands, the study populations and the costs do not lend themselves to straightforward comparison. A cost of illness (COI) study of confirmed SFN has not yet been conducted. Therefore, the costs of SFN and the factors that influence healthcare consumption and productivity costs remain largely unknown. This COI study aims to examine the healthcare, patient and family, and productivity costs of patients with confirmed (skin biopsy- or QST-proven) SFN in the Netherlands, to estimate the annual SFN costs from a healthcare and a societal perspective. In addition, the associations of age, pain impact on daily life, anxiety, depression, and health-related QoL with these costs were investigated.
Study design and patients
This COI study was conducted at the diagnostic SFN service of the SFN Center of the Maastricht University Medical Center+ (Maastricht UMC+) in Maastricht, the Netherlands. The SFN Center is a tertiary referral center for patients with suspected SFN, evaluating approximately 500 patients yearly. The diagnostic SFN service is based on a 1-day stay at the neurological day care unit, with time slots reserved for interviewing, examining, diagnostic tests, and analyzing and discussing the findings among a multidisciplinary team. Diagnostic tests include a skin biopsy for identifying abnormalities in IENFD, NCS, and QST.

Patient population and selection
Using the waiting list registration, all patients with suspected SFN, ≥18 years of age, and referred to the SFN Center between April 2017 and February 2020 were invited by e-mail to participate in this study. They were given access to an Internet-based electronic environment to complete the online questionnaires. For those patients not able to complete the online questionnaire, a paper version was provided. All questionnaires were completed before the patients' visit to the SFN Center, that is, while on the waiting list. Exclusion criteria were declining participation and a significant language barrier. Only data of patients with confirmed SFN were analyzed. Between April 2017 and February 2020, 258 patients participated in the study. The flowchart of the waiting list, up to and including the visit to the diagnostic SFN service, is shown in Figure 1; 67 patients had not yet visited the SFN Center. In 81.7% of the 191 patients, the diagnosis of SFN was confirmed (n = 156); in 15.7%, the diagnosis could not be confirmed, and these patients were omitted from further analysis in this study.

Standard protocol approvals, registrations, and patient consents
The study was approved by the Medical Ethics Committee of Maastricht UMC+ (15-4-004). Informed consent of all patients was obtained before participation in the study, according to the principles of the Declaration of Helsinki. 35

Data collection
Sociodemographic data (eg, age, sex, and education) and data on clinical characteristics (duration of SFN complaints) and patient-reported outcome measures (PROMs; eg, pain impact on daily life, anxiety, depression, and health-related QoL) were obtained from the online survey and the electronic patient file.

Pain impact on daily life was measured using the 11-point Pain Impact Numerical Rating Scale (Pain Impact NRS, with 0 meaning having no impact and 10 meaning having the worst imaginable impact). 14 Anxiety and depression were assessed using the Hospital Anxiety and Depression Scale (HADS) questionnaire, disaggregated for anxiety (HADS-A) and depression (HADS-D). Each subscale consists of 7 questions with answers recorded on a 4-point Likert scale. Scores can range from 0 to 21; higher scores indicate more symptoms of anxiety and depression. 22

The 5-level EuroQol 5D (EQ-5D-5L) was used to measure generic health-related QoL. 20 The EQ-5D-5L consists of a Visual Analog Scale (VAS), which ranges from worst (0) to best imaginable health (100), and 5 additional questions, each representing a health-related QoL dimension. These 5 questions cover mobility, self-care, usual activity, pain/discomfort, and anxiety/depression. 20 Each question has 5 response levels, classifying the severity of complaints for that specific dimension and allowing 3125 possible state-of-health combinations.
These were converted into EQ-5D utility scores according to the Dutch tariff. 33 Possible EQ-5D utility scores range from −0.446 to 1.00, 33 with −0.446 being the worst imaginable state of health and 1.00 the best. Healthcare and patient and family costs related to SFN were measured with the iMTA Medical Consumption Questionnaire. Participants were asked to report only resource use and costs related to the neuropathic and autonomic complaints of SFN, within a 3-month recall period. 4 The iMTA Medical Consumption Questionnaire includes questions on the utilization of general practitioner (GP) visits, medical specialist visits, other healthcare provider visits (eg, psychologist), emergency room (ER) visits, hospital outpatient visits, and hospitalizations, and on the use of paramedical care, prescription medications, outpatient tests, and procedures. In addition, questions on out-of-pocket costs for medical care and nonmedical resources (help with household or garden work, travel expenses, and help with daily activities, such as cooking) related to SFN are included.
The iMTA Productivity Cost Questionnaire (iPCQ) was used to measure productivity costs. 5 Participants were asked to report only data on paid employment, reduced work schedule, absenteeism, unemployment, and the costs of productivity loss for unpaid employment activities as a result of the neuropathic and autonomic complaints of SFN, using a 3-month recall period, in accordance with the Dutch guideline for cost research. 19
Figure 1. Study flowchart at the time of data analysis on August 21, 2020. SFN, small fiber neuropathy.
Subjects were asked to score the pain impact on their working productivity in the last week on the Pain Impact NRS. 14
Statistical analysis
Descriptive statistics were used to present sociodemographic and clinical characteristics and PROMs data. The Pain Impact NRS scores on daily life were used to categorize subjects into 1 of 3 pain impact groups based on established cut-off points for neuropathic pain (0-3, mild; 4-6, moderate; and 7-10, severe). 36 Differences in demographics, pain impact on daily life, anxiety, depression, and health-related QoL among these groups were tested with an analysis of variance (Kruskal-Wallis test) where appropriate for the continuous variables, whereas a Chi-square test was used for the categorical variables. The continuous variables were tested for normality using a Kolmogorov-Smirnov test.
Healthcare and patient and family costs were calculated using per-unit costs obtained from the Dutch guideline for cost research. 30 Unit costs were converted to the reference year 2020 by means of index numbers. 6 To acquire annual average overall costs per patient, the measured 3-month costs were multiplied by 4. The productivity costs of paid employment were quantified using the friction cost approach, in which productivity loss is restricted to 85 calendar days (12 weeks). 19 The cost of an hour of productivity loss from paid employment was calculated using the Dutch guideline for cost research, based on the average hourly salary costs per paid worker. 19 Productivity costs from unpaid employment were valued on the basis of replacement costs for household care. This was equated to a standard hourly rate for cleaning work, as used by the Dutch Central Administration Office (CAK). 19
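As a compact restatement of the costing rules just described, the following sketch shows the annualization and friction-cost steps. It is our illustration, not the authors' code: the example hourly rate and workday length are hypothetical placeholders, not the Dutch-guideline values.

    # Illustrative costing rules: annualize a 3-month recall and value paid
    # productivity loss with the friction cost approach (85-calendar-day cap).
    FRICTION_CAP_DAYS = 85  # friction period from the Dutch guideline (12 weeks)

    def annualize(quarterly_cost: float) -> float:
        """Annual cost estimated from a 3-month recall period."""
        return 4.0 * quarterly_cost

    def paid_productivity_loss(absent_days: float, hours_per_day: float,
                               hourly_salary_cost: float) -> float:
        """Friction-cost valuation: losses beyond the friction period are ignored."""
        return min(absent_days, FRICTION_CAP_DAYS) * hours_per_day * hourly_salary_cost

    print(annualize(519.0))                         # -> 2076.0 per year
    print(paid_productivity_loss(120, 6.2, 38.0))   # capped at 85 days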
The total societal costs consist of the sum of healthcare, patient and family, and productivity costs. Average total societal costs per patient were multiplied by the prevalence of SFN in the general adult population to estimate the total COI of SFN for Dutch society. In 2020, the Dutch adult population aged ≥20 years totaled 15,592,909 residents (Central Bureau of Statistics 2020). Applying the prevalence rate of 53 cases per 100,000 inhabitants, approximately 8264 adults in the Netherlands have SFN.
Usually, cost data are not normally distributed. Therefore, a nonparametric bootstrap resampling procedure with 1000 simulations was performed in SPSS to determine the statistical uncertainty of the cost estimates per category. Differences in costs among the pain impact groups were established by calculating confidence intervals (CI) with the bootstrapping procedure.
Table 1. Baseline characteristics of the SFN study population.
Multivariate linear regression analyses were performed to estimate the association of age, pain impact on daily life, health-related QoL, and anxiety and depression with healthcare, patient and family, productivity, and societal costs. These variables were selected based on their relevance according to the literature. 2,18,32 A backward stepwise method was used to test for interaction between the independent variables on all outcomes, using an α of 0.05. For statistically significant interactions, results were presented with an interaction term whenever there was at least one continuous variable, or stratified per category whenever there was at least one categorical variable. A log transformation was performed on the dependent variables to approximate normality. Where possible, the observed β's were back-transformed into a relative difference (in %) using the formula (exp[β] − 1) × 100%. All analyses were performed using IBM SPSS Statistics version 25.0.
Demographic and clinical characteristics
Demographics, clinical characteristics, and PROMs of the 156 patients with confirmed SFN are shown in Table 1. The median age was 55.2 (IQR 47.2-61.6) years, and the majority (66.7%) were female.
Differences between the 3 pain impact groups
Between all 3 pain impact groups, statistically significant differences were seen in the Pain Impact NRS on daily life and health-related QoL utility scores (P < 0.001). Furthermore, statistically significantly higher depression scores were observed in the severe pain impact group compared with the mild pain impact group (P < 0.001). No statistically significant differences were seen between the 3 pain impact groups regarding age, sex, level of education, duration of SFN complaints, anxiety, and diagnostic tests.
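The cost computations described in the statistical analysis section above can be sketched as follows. This is our illustration with synthetic data, not the SPSS implementation; the lognormal cost vector is made up to mimic right-skewed cost data.

    # Sketch of: (1) bootstrap CIs, (2) prevalence extrapolation,
    # (3) back-transformation of log-model coefficients.
    import numpy as np

    rng = np.random.default_rng(0)
    costs = rng.lognormal(mean=8.0, sigma=1.2, size=156)  # synthetic, right-skewed

    # (1) Nonparametric bootstrap (1000 resamples) for the mean cost per patient.
    boot_means = [rng.choice(costs, size=costs.size, replace=True).mean()
                  for _ in range(1000)]
    ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])

    # (2) Extrapolation to the Dutch adult population via the prevalence rate.
    n_patients = 15_592_909 * 53 / 100_000  # -> ~8264 adults with SFN
    total_coi = n_patients * costs.mean()

    # (3) Back-transform a log-cost coefficient: (exp(beta) - 1) * 100%.
    # E.g., beta = -0.142 per 0.1-point EQ-5D increase gives about -13.2%.
    beta = -0.142
    rel_diff_pct = (np.exp(beta) - 1) * 100

    print(round(n_patients), (ci_low, ci_high), round(rel_diff_pct, 1))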
Patient and family costs
The total annual average SFN patient and family costs were €2076 (95% CI €1032-€3759) per patient (Table 3). Personal care was received only by patients in the severe pain impact group, with an annual average of €246 (95% CI €0.0-€705) per patient. Domestic help and private paid domestic help occurred only in the moderate and severe pain impact groups, with annual average costs of €532 (95% CI €118-€1219) and €1178 (95% CI €56-€184) per patient, respectively. Half of the patients used informal care, which accounted for >80% of the total patient and family costs (annual average costs per patient €1739, 95% CI €1181-€2386). The highest travel expenses were found in the severe pain impact group, with an annual average of €253 (95% CI €75-€612) per patient. Over-the-counter medication was used by 44% of the patients, and more than a quarter of the patients bought medical devices, on which an annual average of €550 (95% CI €358-€786) per patient was spent.
Costs of productivity loss
Less than half of the patients were in part-time paid employment (25%) or full-time paid employment (22%), and 21% were disabled. The average SFN productivity costs are presented in Table 4. Among patients in paid employment, the average weekly contract hours were 30.8 hours (95% CI 27.7-32.4), with an average monthly net income of €1387 (95% CI €1087-€1681) based on patients' reported net incomes. Absenteeism in the last quarter occurred in 56% of the patients in paid employment, with an average of 22.9 days (95% CI 16.7-29.1). The cost of productivity loss due to absenteeism per patient in paid employment was €3540 (95% CI €2486-€4676) per quarter. Of all patients, 72% reported a reduction in performing daily household activities due to SFN, with an average quarterly reduction of 517.4 hours (95% CI 388.9-679.4) per patient. The average cost of productivity loss because of limitations in performing daily household tasks due to SFN was €8045 (95% CI €5978-€10,255) per patient. The total average quarterly costs of productivity loss across all patients were €12,167 (95% CI €13,351-€21,926) per patient.
Societal costs
The COI of patients with SFN (€, 2020) in the Netherlands is presented in Figure 2 and discussed hereafter. The total average SFN productivity costs accounted for 68% of the total societal costs at the patient level. The total healthcare costs of the adult population with SFN were estimated at €29.8 million (95% CI €26.5 million-€33.7 million). Total societal costs of the adult general population with SFN in the Netherlands were estimated at €147.7 million (95% CI €120.5 million-€176.3 million).
Statistically significant associations with costs
No statistically significant associations were found between age, pain impact on daily life, health-related QoL, anxiety and depression, and SFN healthcare costs (Table 5). For the SFN patient and family costs, a statistically significant interaction was found between health-related QoL and anxiety (P interaction ≤ 0.001). Therefore, results for patients with mild/moderate anxiety (symptoms ≤ 10) and severe anxiety (symptoms ≥ 11) are presented separately in Table 5.
Health-related QoL was statistically significantly inversely associated with patient and family costs in patients with mild or moderate anxiety symptoms (P < 0.01). An increase of 0.1 point on the EQ-5D utility score was associated with a 13.2% decrease (95% CI −20.8 to −4.9) in patient and family costs. No significant association with health-related QoL was observed in patients with severe anxiety symptoms (4.2% increase, 95% CI −10.0 to 20.6, P-value 0.571). For the costs of productivity loss and the societal costs, a significant interaction was observed between pain (pain impact NRS ≤ 6 vs pain impact NRS ≥ 7) and health-related QoL (per 0.1-point increase, continuous) (P < 0.05); therefore, these results are presented separately by pain group in Table 5. Health-related QoL was statistically significantly inversely associated with productivity costs in the pain impact group NRS ≤ 6 (P ≤ 0.001). An increase of 0.1 point on the EQ-5D utility score was associated with a 41.4% decrease (95% CI −57.5 to −19.3) in productivity costs. No statistically significant association with health-related QoL was observed in patients with a pain impact NRS ≥ 7 (3.9% decrease, 95% CI −17.7% to 12.3%, P-value 0.612). Regarding the societal costs, health-related QoL was statistically significantly inversely associated in both pain impact groups, NRS ≤ 6 and NRS ≥ 7 (P < 0.01 and P < 0.05, respectively). A 0.1-point increase on the EQ-5D utility score was associated with a decrease of 14.6% (−23.7% to −4.5%) in societal costs in the pain impact group NRS ≤ 6 and a decrease of 5.8% (−10.8% to −0.5%) in the pain impact group NRS ≥ 7.
Discussion
To the best of our knowledge, this is the first study examining the healthcare and societal costs of clinically referred patients with confirmed SFN in the Netherlands. The total healthcare costs to Dutch society for the SFN adult population are estimated at almost €30 million annually, which is approximately 0.03% of the total healthcare expenditure in the Netherlands in 2020 (€106 billion; Central Bureau of Statistics 2020). Overall, health-related QoL was statistically significantly associated with SFN patient and family, productivity, and societal costs. A COI study of confirmed SFN had not previously been performed. A cost study of idiopathic painful neuropathy with SFN involvement has been conducted, however, 27 which allows us to compare our results with previous research. Demographic and clinical characteristics of the 2 study populations were similar, but the study population in the previous study 27 was insufficiently defined owing to an inadequate diagnosis of SFN. Therefore, its results may not be representative of the SFN population. In addition, data in the previous study were collected over a period of 6 months, which is inconsistent with data collection guidelines for cost studies and may have led to an underestimation or overestimation of costs. 28 Total healthcare and patient and family costs (direct costs) of SFN in our study were lower (€5690), and severe pain was associated with statistically significantly higher costs. In the previous study, 27 only the direct costs of the mild and moderate pain severity groups were statistically significantly higher. The main contributors to the healthcare costs of SFN in our study were medical specialist care and active medical treatment, which differ from the main contributors identified in the previous study (prescription drugs and out-of-pocket medical costs). 27
In our SFN study, severe pain was associated with statistically significantly higher costs of medical specialist care and active medical treatment, whereas in the previous study, no association was found. 27 The total productivity costs (indirect costs) of the 2 study populations were similar (approximately €12,000), 27 and in our study the costs of the severe pain group were significantly higher compared with the mild pain group. In the previous study, the indirect costs of the pain severity groups were not statistically significantly higher. 27 The proportion of patients with SFN in paid employment (47%), retired (15%), or disabled (21%) in our study differed from the previous study (16%, 49%, and 23%, respectively). 27 The main contributor to the high costs of productivity loss of SFN in our study was the cost of lost hours due to being limited in performing daily household tasks, which was significantly higher in the severe pain group compared with the mild pain group. In the cost study of idiopathic painful neuropathy with SFN involvement, the main contributor was the cost of disability, with no association found. 27 Our average SFN healthcare costs are comparable with the UK healthcare costs of painful diabetic peripheral neuropathy 7 and the Dutch healthcare costs of fibromyalgia. 34 Furthermore, the use of SFN pain treatments makes up 28% of the SFN healthcare costs, which is comparable with chronic neuropathic pain treatments in other academic pain centers. 31 Neuropathic pain is associated with lower health utility scores, and the EQ-5D utility score of our SFN population (0.59) is comparable with diabetic neuropathy (0.61) 13 and fibromyalgia (0.54). 16 Comorbidities such as anxiety and depression have a negative effect on QoL in patients with chronic peripheral neuropathic pain, 26 and our results showed that severe pain was associated with higher depression scores. Health-related QoL is highly correlated with morbidity, mortality, and healthcare and societal costs, 13 and in our study, we used the EQ-5D as a generic instrument to measure health-related QoL. The derived utility scores can also be used in a planned cost-effectiveness study. 10 Regarding the SFN patient and family costs, over-the-counter medication (eg, nutraceuticals) was often used in addition to prescribed medications. Nutraceuticals such as N-palmitoylethanolamide (PEA) 24 and vitamin D 23 are increasingly used 1 and may play a role in neuropathic pain treatment, but more scientific evidence on their effectiveness is needed.
The leading factor in SFN productivity costs was the limitation in performing daily household tasks due to painful SFN. Important for estimating the COI of SFN is including replacement costs for the daily household activities patients can no longer perform, that is, valuing lost productivity hours from unpaid employment activities, which should not be limited to activities actually taken over by informal caregivers. 19 Productivity costs in patients with an NRS ≤ 6 were associated with higher pain impact on daily life and lower health-related QoL; we observed a 41% reduction in productivity costs per 0.1-point increase on the EQ-5D utility score. This study's contribution to the literature is the detailed insight it provides into the societal COI of patients with confirmed SFN in the Netherlands. The study is based on daily practice data of patients with confirmed SFN, and we were able to investigate a number of associations between costs and patient characteristics.
A limitation of our study is that it included tertiary care patients, who may experience more severe SFN symptoms than patients seeking help in primary or secondary care. However, our sample is representative of most of the SFN population for 3 reasons: our study population is comparable with (1) the mean age, male-female ratio, duration of SFN complaints, and mean average pain of a Swiss study population with SFN 3 and (2) the healthcare cost study from the US, 27 and (3) it includes a considerable number of patients with mild and moderate complaints (n = 69). Furthermore, because only older prevalence rates were available while rates are likely increasing due to increased global
Figure 2. Cost of illness overview of SFN in the Netherlands (2020).
Table 2. Annual SFN-related healthcare costs.
Table 3. Annual SFN-related patient and family costs.
Table 4. Annual SFN-related productivity loss.
2023-08-11T06:17:18.304Z
2023-08-04T00:00:00.000
{ "year": 2023, "sha1": "c215d0a229a91021a387bb5d9cc060446fee836a", "oa_license": "CCBY", "oa_url": "https://journals.lww.com/pain/fulltext/9900/cost_of_illness_of_patients_with_small_fiber.373.aspx", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "5cc8b1114947fc6449ddc624badd1c979141706f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
220336905
pes2o/s2orc
v3-fos-license
Social Capital and Mental Health Among Black and Minority Ethnic Groups in the UK
Black and minority ethnic communities are at higher risk of mental health problems. We explore differences in mental health and the influence of social capital among ethnic minority groups in Great Britain, using cross-sectional linear and logistic regression analysis of data from Wave 6 (2014-2016) of the Understanding Society database. In unadjusted models testing the likelihood of reporting psychological distress, (i) compared against a white (British) reference population, the Indian, Pakistani, Bangladeshi and mixed ethnic minority groups recorded excess levels of distress; and (ii) increasing levels of social capital showed a strong protective effect (OR = 0.94: 95% CI 0.935, 0.946). In a subsequent series of gender-specific incremental logistic models, after adjustment for sociodemographic and socioeconomic factors, Pakistani men and women and Indian women recorded higher likelihoods of psychological distress, and the further inclusion of social capital in these models did not materially alter these results. More research is needed on the definition, measurement and distribution of social capital as it applies to ethnic minority groups in Great Britain, and on how it influences mental wellbeing.
Background
Black and minority ethnic (BME) communities appear to be at a greater risk of psychosis compared to the white UK-born population [1-3], and rates of depressive symptoms are higher among BME groups in Europe [4]. In the UK, Pakistani men are twice as likely to report a Common Mental Disorder (CMD) when compared against white males [5,6]. Rates of mental illness differ among BME groups and are not reflective of rates in their country of birth [7]. Explanations of raised vulnerability for mental disorders among BME populations include issues with migration and settlement, experience of racism and discrimination, poverty, and adverse environmental conditions [7-9].
Theoretical Framework
Social capital refers to the potentially positive aspects of social life and is constructed through shared networks, norms, and trust. It enables a more effective pursuit of shared objectives [10] and is commonly described as having two components: cognitive social capital, the subjective factors acting to keep networks together (measured by indicators such as trust, social support and neighbourhood satisfaction); and structural social capital, attachment to organisations such as churches (measured by attendance and strength of commitment) [11]. Unlike structural social capital, cognitive social capital has been indicated as an important predictor of mental wellbeing [12]. High levels of social capital may enhance a sense of belonging and thus increase collective wellbeing [13]. Conversely, where social capital is low, individuals may feel insecure and alienated. While there is no real consensus on the relationship between social capital and mental wellbeing [14], some evidence suggests that smaller social networks, fewer close relationships, and lower perceived adequacy of social support are associated with depressive symptoms [15,16]. BME populations experience such issues in the United Kingdom (UK) [17,18]. While racism may have a detrimental effect on BME social capital and wellbeing [19,20], there has been scant research that considers how social capital affects wellbeing among BME groups in the UK [1].
Participants
We used a cross-sectional analysis of data drawn from Wave 6 (2014-2016) of the Understanding Society database, which contains representative samples of BME and white populations in the UK [21]. Understanding Society is a longitudinal survey of households in the UK [22].
Data Collection
A detailed description of Understanding Society, its sample design and the ethnic minority and migrant population sample structure has been published previously [23,24]. Comprehensive descriptions of the techniques and methodology used are published elsewhere [25], as are the sampling methodologies [26]. Data collection was conducted face-to-face via computer-aided personal interviews, with additional self-completion instruments such as the General Health Questionnaire-12 (GHQ12) administered separately. We extracted data from Wave 6 only. The final dataset used in the analysis comprised 25,921 observations, arrived at as follows: a boosted sample (n = 4656) of ethnic minority participants in Wave 6 was excluded because they were not asked some detailed questions we relied on in this analysis; the natural attrition from Wave 1 to Wave 6 had been 35.4%, reducing the initial sample from 40,634 observations; and a small number of observations containing either missing values for the variables used in the analysis or information gathered via proxies (less than 1%) were also excluded. Because of the relatively small amount of missing data and the large sample size, it was thought unnecessary to impute this information.
BME Groups
Ten ethnicity groups were identified: white (British); white (Irish); white (other); mixed ethnicity; Indian; Pakistani; Bangladeshi; Caribbean; African; and a residual other category comprising minorities deemed too small to justify separate categories for analysis. The mixed group represents a growing group of UK citizens whose parents are each from different ethnic groups, primarily partnerships between white British people and people from ethnic minority groups [27]. The white (other) group comprises those minorities who identify as white but not British or Irish.
Mental Health
The GHQ12 is a self-administered screening test used among respondents in community and non-psychiatric clinical settings to assess psychological distress. It has reliability coefficients ranging from 0.78 to 0.95 and good sensitivity and specificity among BME groups [28-31]. From the GHQ12, caseness (psychological distress = yes) was derived as a binary field, with a cut-off point of three or more (from a range of 0-12) signalling distress [29,32].
Social Capital
The individual components of social capital, each with responses ranging from one (strong disagreement) to five (strong agreement), have been found to be valid elsewhere [33-35]. We summed these to give an overall score: participants were asked about their neighbourhood and how strongly they felt about the following: the close-knit nature of their neighbourhood; the willingness of people to help neighbours; whether people in their neighbourhood can be trusted; whether people in the neighbourhood get along with each other; whether individuals belong to the neighbourhood; whether they can borrow things from neighbours; and, finally, whether they feel similar to others in their neighbourhood. Allowing for reverse-coding, the summary scale ranged from eight to forty (with higher scores indicating greater social capital). This social capital score demonstrated high internal consistency (Cronbach's alpha = 0.84).
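To make the two derived measures concrete, here is a minimal sketch; it is our illustration, not the study's SPSS code. It assumes GHQ12 items already scored 0/1 per item (consistent with the stated 0-12 range), and the choice of which neighbourhood item is reverse-coded is hypothetical, since the source does not specify it; note also that the text lists seven items while the reported 8-40 range corresponds to eight 1-5 items, so the function is agnostic to the item count.

    # Illustrative derivation of GHQ12 caseness and the summed social capital score.
    REVERSE_CODED = {"borrow_things"}  # hypothetical reverse-coded item

    def ghq12_caseness(items: list[int]) -> bool:
        """Psychological distress = yes when the 0-12 GHQ12 total is 3 or more."""
        assert len(items) == 12 and all(v in (0, 1) for v in items)
        return sum(items) >= 3

    def social_capital_score(responses: dict[str, int]) -> int:
        """Sum of 1-5 Likert items, reverse-coding where applicable;
        higher scores indicate greater social capital."""
        total = 0
        for item, score in responses.items():
            assert 1 <= score <= 5
            total += (6 - score) if item in REVERSE_CODED else score
        return total

    print(ghq12_caseness([1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]))  # True (score = 3)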
Migrant and Acculturation Factors
From country of birth, we derived a born-in-UK indicator (yes/no). Acculturation and sense of assimilation were assessed via a continuous variable, British identity, measuring how individuals perceived the importance of being British, with responses on a scale from zero (not important at all) to ten (extremely important).
Sociodemographic and Socioeconomic Factors
These include: age (continuous); gender; marital status (grouped as single, married/cohabiting and, as a single group, those widowed, separated or divorced); family structure; and locale of residence, summarised as urban or rural (generated by the core Understanding Society data management team using information provided by the Office for National Statistics Rural and Urban Classification of Output Areas). Family structure comprised four categories: single (no children), in a couple (no children), in a couple (with a child), or single (with a child). Proxy indicators of socioeconomic circumstance included home ownership (yes, no), economic activity (employed, not employed, retired) and educational level. Education was classified as primary (no GCSEs), secondary (GCSEs, A-levels or equivalent non-vocational attainment) or tertiary (degree level).
Analysis
Analysis utilized SPSS Version 25 software (SPSS Inc., Chicago, IL, USA). Descriptive statistics for continuous variables included means, standard deviations and ranges, with percentages presented for categorical variables. All findings are presented for males and females separately. Independent-sample t-tests for continuous variables and Pearson's chi-square tests for categorical variables determined gender differences in the population. We calculated mean differences in social capital across ethnic groups with a one-way ANOVA, and used linear regression to explore the relationship between ethnicity and social capital. Binary logistic regression examined determinants of psychological distress for the total sample and for men and women separately. Fully adjusted odds ratios (ORs) and 95% confidence intervals were derived. For all analyses, p-values of less than 0.05 were considered significant.
Ethics
This study was completed in keeping with the relevant ethical and legal obligations of data usage from the UK Longitudinal Household Study: as this information is publicly available, ethical approval was not sought.
Results
More than 20% of the sample were psychologically distressed (Table 1). Eighty percent of the sample were white British, and the largest single ethnic minority group was Indian (3.1%). The mean age of the sample was 49.2 years (standard deviation (SD) 17.4) and 56% (14,432) were female. The predominant education status was high school level, 63% were employed and 74% reported owning their house. Over 65% were married or cohabiting, and 48% lived in households with more than one adult and no children. Over three quarters lived in urban areas, 88% were born in the UK and 54% professed a religious affiliation. The mean for social capital was 29.1 (SD 5.0) and for British identity, for which the median was the more appropriate measure, the median was 8 (range 0-10). Prevalence of psychological distress ranged from 21% in the white British population to 34% in the Pakistani population (Table 2). Gender-specific differences were evident across most factors, with the exceptions of locale of residence, nativity and Britishness.
Females were more likely than males to be younger, better educated, a single parent, to report a religious affiliation, to be born outside the UK and to report psychological distress; and less likely to be employed or to be home owners. Additionally, women reported higher social capital levels. Social capital varied significantly across ethnic groups (Table 3). Generally, males recorded stronger effect sizes than women over the range of minority groups. Compared to the white British group, the white (other), mixed, Caribbean, African and other ethnic groups reported lower social capital, while the white (Irish) and Pakistani groups reported higher social capital. In the stratified analyses, among males the white (other), mixed, Caribbean, African and other ethnicities were more likely to report lower social capital, whereas the white (Irish), Pakistani and Bangladeshi groups reported higher social capital levels than the white British group. Similarly, for women, low social capital was reported by the mixed, Caribbean, African and other ethnic groups (with no groups reporting higher social capital). Table 4 shows results from a series of unadjusted models examining the likelihood of recording psychological distress for each of the factors included in the analyses. Compared to the white British group, five ethnic minority groups (mixed, Indian, Pakistani, Bangladeshi, and other) recorded excess likelihoods, highest (OR = 1.98: 95% CI 1.67, 2.34) amongst Pakistanis. Females were more likely than males to report distress (OR = 1.48: 1.40, 1.58); as were those not employed compared to their employed peers (OR = 2.81: 2.59, 3.04); those living in urban areas compared to their rural peers (OR = 1.25: 1.17, 1.35); and those not born in the UK (OR = 1.13: 1.03, 1.23). Those with higher education levels were somewhat protected, as was being currently married (those never married and those no longer married recorded OR = 1.54: 1.43, 1.66 and OR = 1.44: 1.33, 1.57, respectively). Finally, the factors tested as continuous variables all showed protective effects in their respective models: psychological distress declined by 1% with each additional year of age (OR = 0.99: 0.989, 0.992), and by 6% (OR = 0.94: 0.935, 0.946) and 4% (OR = 0.96: 0.95, 0.97) for increasing levels of social capital and increasing strength of feelings of Britishness, respectively. Table 5 shows the likelihood of experiencing psychological distress by ethnic group (compared against white British) in a series of incrementally adjusted models, ending with full adjustment for all selected characteristics. Only the results for ethnic group are presented (the full model table is available on request). In the minimally adjusted model (M1), those from Indian, Pakistani and mixed ethnicities recorded higher likelihoods of distress; the further inclusion of social capital did not materially alter these results, suggesting that, in this study, social capital exerts a relatively weak independent effect in models which include sociodemographic and socioeconomic characteristics.
Discussion
To our knowledge, this is the first study to examine the influence of social capital on the mental health of a wide range of ethnic groups in the UK. Our findings suggest that, compared to their white British peers, psychological distress may be more prevalent in some (but not all) BME communities. This corroborates other studies [1,5,36]. In the British Psychiatric Morbidity Survey [37], common mental disorders were found in around one adult in six and were more prevalent in specific population groups.
These included Black women, adults under the age of sixty who lived alone, women resident in large households, unemployed adults, those in receipt of benefits and those who smoked cigarettes. In the EMPIRIC study [5], ethnic differences in CMD prevalence were modest. After adjusting for differences in socio-economic status, CMD risk was higher amongst Irish and Pakistani men aged 35-54 years, compared to white UK-born people. Higher rates of CMD were also observed among Indian and Pakistani women aged 55-74 years, compared to white women of similar age. Higher rates of psychological distress among Indian and Pakistani groups may be partly related to racism and/or disadvantage [38-41], but it is unclear why some BME communities should be more affected than others. In this study, while unemployment is a more specific determinant of psychological distress among males, for women (in addition to unemployment) more personal, culturally significant factors exert particular pressures: for example, low educational attainment, marriage (but without children), and being born outside the UK. Previous research suggests specific socio-cultural factors (influence of extended family, single women not chosen for marriage, infertility, gender of offspring and social isolation) that may be relevant to psychological distress in Pakistani women [43]. In this study, for men and women, home ownership, a greater sense of British identity and higher levels of social capital were protective for mental health, suggesting that economic security and settlement in the UK (both possibly indicated by home ownership) influence wellbeing within such communities. These findings underscore the different experiences and concerns of women and men in minority ethnic communities, with consequences that are differentially distributed across minority ethnic groups and which could be determined by the length of settlement and the resilience of their respective communities in coping with socioeconomic adversity [44,45]. Thus, for example, educational attainment may have wider implications for women than men in traditional communities: while women may be more restricted in relation to their wider social and educational access, this may also signify higher levels of integration and increased social inclusion within the group [46,47]. Similarly, while infertility is a source of distress for many women, it may carry greater resonance in more traditional communities [48]. In this study, for men and women separately and for ethnic groups, levels of social capital appear significantly associated with mental ill-health, corroborating current evidence of associations between social capital and CMDs [49]. The protective effect of social capital on mental wellbeing is in agreement with other evidence [50-53]. However, in the fully adjusted models (which included sociodemographic and socioeconomic characteristics), inclusion of the measure for social capital did not materially mitigate recorded levels of psychological distress. This underlines the importance of contextual social and political factors and how these may impact on the mental health of BME populations. This study indicates which ethnic minority groups in the UK experience a greater risk of psychological distress, indicating where investment of mental health resources is needed. Gender differences in distress among BME groups imply that appropriate interventions should be specific for men and women.
The influence of social capital on mental wellbeing warrants further study. Investment in civic-society-type organisations, in order to build up trust and cohesion, could possibly improve mental wellbeing.
Conclusions
This study suggests that in the UK certain BME groups, especially Indian, Pakistani and Bangladeshi groups, are at an increased risk of psychological distress. Levels of social capital are high in Pakistani and Bangladeshi men, but low for Caribbean and African women. While determinants of psychological distress may differ considerably between men and women, and social capital appears to be an important determinant of mental health for both men and women and for specific ethnic groups, its effect is diluted when examined against socioeconomic and sociodemographic considerations. These findings indicate a possible need for investment in community-specific public health interventions to improve the sense of security and belonging, particularly among minority ethnic groups.
Strengths and Limitations
Understanding Society contains a representative sample of minority ethnic populations in the UK. The inclusive and broad conceptualisation of ethnicity and self-reporting eliminates researcher bias. Using a large representative sample aids reliability. While the GHQ12 is not a diagnostic tool, it is well validated for use with ethnic minority groups. The analysis is cross-sectional, and as such no causality can be implied. Another limitation relates to the (unvalidated) measure of social capital used; however, the items used in its construction relate to a number of the constructs of social capital. Further validation is warranted.
Compliance with Ethical Standards
Conflict of interest: On behalf of all authors, the corresponding author states that there is no conflict of interest.
Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
2020-07-05T14:10:14.899Z
2020-07-04T00:00:00.000
{ "year": 2020, "sha1": "490c88b41652c19f555ab100d1e7563231046a89", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10903-020-01043-0.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "490c88b41652c19f555ab100d1e7563231046a89", "s2fieldsofstudy": [ "Sociology", "Psychology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
122539368
pes2o/s2orc
v3-fos-license
Photoluminescence from surface GaN/AlGaN quantum wells: Effect of the surface states
We report on photoluminescence (PL) measurements at 85 K for GaN/AlGaN surface quantum wells (SQW's) with widths in the range of 1.51-2.9 nm. The PL spectra show a redshift with decreasing SQW width, in contrast to the blueshift normally observed for conventional GaN QW's of the same width. The effect is attributed to a strong coupling of the SQW confined exciton states with surface acceptors. The PL hence originates from the recombination of surface-acceptor-bound excitons. Two types of acceptors were identified.
It is well known that a decrease in the width of semiconductor quantum wells (QW's) formed in epitaxial stacks leads to an increase in the energy of quantum states, which appears as a blueshift of the corresponding photoluminescence (PL) band. 1 In such quantum systems, the medium confining the well creates barriers, which prevent the wave functions of electronic states from extending beyond the QW. If a QW is located at the surface of the epitaxial stack [a surface QW (SQW)], the vacuum level at the surface of the QW material is assumed to play the role of such a barrier. However, modeling the surface as an abrupt termination of the structure with a quasi-infinite potential barrier has unambiguously been shown to be in disagreement with experimental observations. 2 The study of ultra-high-vacuum PL from GaAs/Al0.3Ga0.7As QW's revealed a strong quantum coupling of the QW confined states to the surface states. 2 As a result, the PL from GaAs QW's, confined by a thick Al0.3Ga0.7As barrier on one side and capped by an Al0.3Ga0.7As barrier with thickness varied from 0 to 100 nm on the other side, showed a redshift and a decrease in intensity as the Al0.3Ga0.7As cap layer width decreased, i.e., as the surface states approached the QW states. The effect of relaxed surfaces exposed to air on the SQW states is expected to be more significant because of surface donor-like and acceptor-like centers associated with impurities and structural defects. 3 However, the general tendency is expected to be similar to that for the ultra-high-vacuum PL measurements, since the difference is only in the nature of the surface states. 6
In the current paper, we report on PL from GaN/Al0.2Ga0.8N SQW's excited with a photon energy of 3.86 eV, which is just above the Al0.2Ga0.8N barrier band gap (3.8 eV). The PL from the SQW's has been compared to that from a conventional Al0.2Ga0.8N/GaN/Al0.2Ga0.8N QW measured under the same experimental conditions. With decreasing SQW width in the range of 2.9-1.51 nm, we observed a redshift of the PL bands of the order of 10 meV. Since in conventional QW's a decrease in the QW width of the same order leads to a blueshift, 4-6 the observed PL redshift points to a coupling of the SQW confined exciton state to surface acceptor states. The PL hence originates from the recombination of surface-acceptor-bound excitons, denoted (A0sX) below. Our findings differ from those obtained with more energetic excitations 7,8 (photon energy higher than 4 eV), where the PL spectra are likely to reflect, in addition, the nonequilibrium carrier and phonon dynamics.
The GaN SQW's of different thicknesses (1.51, 1.6, 1.65, 1.7, 2.15, and 2.9 nm) were grown on a 100 mm Si(111) wafer in a metalorganic chemical vapor deposition (MOCVD) reactor at nominally 1000 °C. 7
The epitaxial stack consisted of an (Al,Ga)N-based transition layer, followed by ~800 nm of unintentionally doped (UID) GaN. The device layer consisted of ~31.5 nm of UID Al0.2Ga0.8N, which was capped with a thin UID GaN layer of one of the thicknesses mentioned above. The device layer for the conventional QW consisted of ~8 nm of UID Al0.2Ga0.8N and 4 nm of UID GaN, which was capped with 10 nm of UID Al0.2Ga0.8N. All PL measurements were carried out in a vacuum temperature-controlled cryostat at 85 K. An optical parametric amplifier, pumped by a 1 kHz regenerative amplifier seeded by an 80 MHz Ti:sapphire oscillator operating at 790 nm (170 fs pulses), in combination with a Topas light-conversion system emitting 321 nm (3.86 eV) light at an average power of 0.3-0.8 mW, was used as the source for PL excitation. PL spectra measured with unfocused light from a continuous-wave (CW) He-Cd laser (325 nm, 3.81 eV) at an average power of 2 mW were also used for comparison. The PL response was monitored either by a charge-coupled device camera or by a streak camera through fiber optics and monochromators. The streak camera temporal resolution edge was 30 ps.
PL spectra for GaN SQW's of different widths and the PL spectrum for the conventional GaN QW measured with pulsed excitation are shown in Figs. 1 and 2, together with spectra for GaN SQW's measured with CW laser excitation. The spectrum for the conventional QW consists of two bands peaked at around 3.449 and 3.478 eV, which are assigned to the QW and bulk GaN (GaN buffer layer) excitonic emissions, respectively. 4-6 In contrast, the PL from the GaN SQW's exhibits features that cannot be explained by a combination of the quantum confinement and quantum-confined Stark effects. The dominant PL band is peaked at an energy (3.472 eV for the 2.9 nm SQW) that is below the bulk GaN excitonic emission (the latter is expected to be weak for these samples), and it progressively redshifts (~10 meV) as the SQW width decreases (Fig. 3). The longitudinal optical (LO)-phonon sideband of the main peak and a less intense shoulder peak can be seen at lower and higher energies, respectively, when CW laser excitation is applied (Figs. 1-3). The shoulder peak, which is separated from the dominant PL peak by ~16 meV, shows a similar redshift (Fig. 3). 4-6 We attribute this difference to an additional effect of the surface states on the SQW confined exciton states. Here we suppose that the built-in electric field has a significant effect on the GaN SQW's. Because the piezoelectric field is known to mainly affect the AlGaN barriers and not the GaN layer, 5 its effect on SQW's is weak. However, this is not the case for the spontaneous polarization field, which is expected to give the dominant contribution to the quantum-confined Stark effect in SQW's. 6 Nevertheless, we show below that a redshift of the PL band due to the quantum-confined Stark effect cannot explain the PL redshift observed. 4-6 The space-charge field originating from the Fermi-level pinning at the surface can be disregarded, since it changes on a much larger length scale of ~100 nm. Thus we can safely neglect all field effects with the exception of the spontaneous polarization field, and argue that the GaN SQW confined exciton state should be coupled to surface states, in a similar way as in GaAs SQW's. 2
Specifically, the dominant PL band and the shoulder peak originate from the recombination of surface-acceptor-bound excitons, bound to two types of surface acceptors: (A0s1X) and (A0s2X), respectively. At the temperature used (85 K), the effect of surface neutral donors is negligible due to the much smaller binding energy of excitons bound to donors. 3 Here we stress that the main distinctive feature of GaN SQW's, as compared to conventional QW's, is the high density of surface acceptors (~10^12 cm^-2). 9
One can estimate the sheet exciton density in SQW's by taking into account the Beer's law distribution of carriers photoexcited in the z direction (perpendicular to the SQW plane) at the maximal laser intensity applied (9.4×10^10 W/cm^2) and the GaN material parameters. The power density absorbed in the medium is −div I = I0(1−R)α exp(−αz), where I0 is the initial laser intensity at the sample surface, R = 0.2 (for a photon energy of 3.86 eV) is the normal-incidence reflectivity, and α = 1.0×10^5 cm^-1 is the absorption coefficient at a photon energy of 3.86 eV. Hence, the power density absorbed within the absorption length is 2.76×10^13 W cm^-3. The resulting density of excitons photoexcited in the medium within the absorption length is 7.6×10^18 cm^-3, which corresponds to a sheet exciton density of 8.3×10^12 cm^-2 photoexcited in the SQW. The latter value is comparable to the surface acceptor density, and so the surface-acceptor-bound excitons are expected to dominate the light emission process. This is the reason why the PL from (A0sX) excitons dominates the emission spectrum at 85 K. We do not discuss here the identity of these acceptors, noting only that the energy difference between the (A0s1X) and (A0s2X) peaks matches well the range of differences between typical binding energies of excitons bound to deep acceptors in GaN. 3 As this takes place, the PL from other sources is expected to be much weaker. For instance, the contribution to the PL from the 2D electron gas, which can possibly be formed at the GaN buffer layer/Al0.2Ga0.8N barrier interface, is expected to be relatively small because of the large spatial separation of the 2D electrons and the holes localized at bulk acceptors. The assignment of the observed PL to (A0sX) excitons in SQW's is also supported by time-resolved measurements (Figs. 4 and 5). In order to control the streak camera temporal resolution edge (25 ps), we also measured PL spectra at higher laser power, which revealed a stimulated emission feature appearing in the lower energy range in addition to the PL bands mentioned above. 11 As follows from Figs. 4 and 5, the PL decay for the conventional QW and the SQW's is close to the streak camera temporal resolution edge. As an example, we note that the decay time measured for the 2.9 nm SQW is 34 ps, while for thinner SQW's this time becomes even shorter and so follows the temporal resolution edge. This is consistent with the time-resolved measurements of GaN multiple-quantum-well structures. 11
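As a cross-check of the exciton-density estimate made earlier in this section, one can convert the quoted absorbed power density into a volume exciton density using the pulse duration and photon energy. The conversion route below is our assumption, since the paper does not spell it out, but it reproduces the quoted ~7.6×10^18 cm^-3:

    # Cross-check (ours, not from the paper) of the photoexcited exciton density:
    # absorbed power density x pulse duration / photon energy.
    E_PHOTON_J = 3.86 * 1.602e-19    # 3.86 eV photon energy in joules
    P_ABS = 2.76e13                  # absorbed power density, W cm^-3 (quoted above)
    TAU = 170e-15                    # pulse duration, s (170 fs)

    n_3d = P_ABS * TAU / E_PHOTON_J  # excitons per cm^3 per pulse
    print(f"{n_3d:.2e} cm^-3")       # -> ~7.59e18, matching the quoted 7.6e18 cm^-3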
In contrast, the PL from the bulk GaN (buffer layer) in the conventional QW sample shows a much longer decay of 147 ps. The difference is attributed to the higher rate of nonradiative recombination in QW's and SQW's as compared to bulk GaN, which we associate with impurities and structural defects at the interfaces. We stress here that the time-resolved measurements allow us to distinguish between the buffer-layer PL from the conventional QW sample and the SQW (A0sX) exciton PL, despite the small difference in their peak positions (Figs. 1, 2, 4, and 5).
Thus the observed redshift of the (A0sX) exciton PL is caused by the quantum coupling between a neutral acceptor at the surface and the exciton state in the SQW. The actual shift of the PL peak is hence determined by a competition of several confinement-induced effects. The first effect is the upward shift of electronic levels with decreasing GaN SQW width L_W due to the quantum confinement effect, which scales as 1/L_W^2. 4-6,12 On the other hand, the quantum-confined Stark effect, which is caused by the strong built-in electric field originating from the spontaneous polarization in SQW's, lowers the PL energy by pF, where p = e(⟨z_hh⟩ − ⟨z_e⟩) is the permanent dipole moment of an exciton, F is the spontaneous polarization field, ⟨z_e⟩ and ⟨z_hh⟩ are the extents of the electron and heavy-hole ground-state wave functions in the direction perpendicular to the SQW plane (the indexes refer to the electron (e) and the heavy hole (hh), respectively), and e is the electron charge. In other words, the different confining potentials for electrons and holes lead to their spatial separation, creating a permanent dipole moment which interacts with the field. This effect gives a redshift of the excitonic PL band, which is linear in F and so scales as 1/L_W. Also, the built-in electric field pulls the electron and hole wave functions apart, inducing a corresponding blueshift, which scales as 1/L_W^2. Finally, one should take into account the decrease of the exciton Bohr radius (a_B = 2.7-2.8 nm 12,14) with decreasing L_W in the range of 2.9-1.51 nm, which leads to an increase in the exciton binding energy and causes a redshift of the excitonic PL that scales as 1/L_W. As this takes place, the effect of the screening of the polarization field by free carriers on the exciton binding energy is assumed to be weak due to the small density of free carriers photoexcited under CW excitation in the SQW's. For free excitons, the blueshift is known to dominate over the redshift, resulting in an overall blueshift of the excitonic PL band. 12 The reason is that the Stark, Coulomb, and dipole energies scale only as 1/L_W, while the quantum confinement and polarization effects scale as 1/L_W^2. However, the situation is even more complicated in QW's heavily doped with donor or acceptor impurities, where the PL is dominated by bound exciton complexes. Systematic studies of neutral-donor-bound exciton PL revealed a non-monotonic dependence of the PL peak position on the QW width. 15 With a reduction of the QW width, the binding energy of the exciton bound to a neutral donor exhibits a maximum at L ~ a_B and then decreases. 15
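For reference, the competing well-width scalings invoked above can be collected in one place. This restates the text's claims; the exponents for the confinement/polarization terms and for the acceptor-bound binding energy are reconstructed from the dipole-dipole form assumed in the fit, so treat them as our reading rather than a definitive derivation:

    \begin{align*}
    E_{\mathrm{conf}} &\propto L_W^{-2} &&\text{quantum confinement and polarization (blueshift)}\\
    \Delta E_{\mathrm{Stark}},\ E_{\mathrm{Coulomb}} &\propto L_W^{-1} &&\text{linear Stark and exciton binding (redshifts)}\\
    E_b(A^0_s X) &\propto L_W^{-3} &&\text{dipole-dipole binding to surface acceptors (redshift)}
    \end{align*}

On this reading, the $L_W^{-3}$ term grows fastest as the well narrows, consistent with the redshift dominating in the SQW data.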
For narrow QW's, this effect prevails over the confinement-induced upward shift of the electron levels and results in an overall redshift of the PL band. We observe a similar behavior in SQW's as well, due to the high concentration of surface acceptors. In this case, the excitons are bound to a sheet of acceptors located in close proximity to the GaN SQW. Since the interaction between a free exciton and a neutral acceptor is short-ranged, the binding energy of the (A0sX) exciton is very sensitive to the ratio between the effective Bohr radius of a neutral acceptor and the SQW width. The effective Bohr radius of an acceptor, a_A = ħ/(2 m_h E_A)^(1/2), where E_A is the acceptor ionization energy, 16 can be estimated for the most common neutral acceptors in UID GaN [Ref. 3] as 0.40 nm (C), 0.34 nm (Si), 0.31 nm (Mg), and 0.24 nm (Zn). Because the built-in electric field aligns the excitons along the z direction, the shortening of the interaction scale between excitons and surface neutral acceptors cannot be remedied by a reorientation of the exciton dipole moment with respect to the QW plane. This explains the sharp width dependence of the PL redshift and intensity for narrow SQW's. The PL intensity drop with decreasing SQW width is caused by an increase in the nonradiative surface recombination. 17 The dependences of the PL shift shown in Fig. 3 can be well fitted by adding to the aforementioned effects the binding energy effect, assuming that the binding energy of the (A0sX) exciton has a dipole-dipole nature and hence scales as 1/L_W^3. Note that the dependences of the PL peak position and PL intensity on the GaN SQW width (Fig. 3) are similar enough to those observed for GaAs QW's when the Al0.3Ga0.7As cap barrier thickness was varied from 100 to 0 nm. 2 This is consistent with the model discussed in the current paper.
The PL spectra measured with pulsed excitation reveal a very large broadening (Figs. 1 and 2), which increases with laser power in the range of 0.3-0.8 mW (photoexcited carrier densities of 2.8×10^18-7.6×10^18 cm^-3). The peak position remains almost unchanged for the different powers applied, but it redshifts with decreasing SQW width, similarly to the PL spectra measured with CW laser excitation. The higher-energy wing of the broadening shows an exponential behavior due to the hot exciton effect. 1,18,19 Assuming a PL spectral shape of the form 18 I(ħω) ∝ exp[−(ħω − E_x)/(k_B T_x)], where ħω is the emitted photon energy, E_x and T_x are the exciton energy and temperature, and k_B is the Boltzmann constant, the surface-acceptor-bound exciton temperature can be estimated by fitting the measured PL spectra (Figs. 1 and 2). The maximal temperature obtained is 151 K (~13 meV), i.e., less than the binding energy of excitons bound to the neutral acceptors and the exciton binding energy (26.3 meV). 3 The lower-energy wing of the broadening results from the band-gap renormalization by the hot plasma, which progresses with the power density of excitation. 20
(Figure axes: Photon Energy (eV). V0 denotes the difference of potentials at the surface and at the GaN/Al0.2Ga0.8N interface.) This is consistent with the aforementioned shortening of the PL decay time with decreasing SQW width.
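A minimal sketch (our illustration, not the authors' fitting code) of the temperature extraction described above: on the exponential high-energy wing, ln I is linear in ħω with slope −1/(k_B T_x).

    # Extract the hot-exciton temperature from the high-energy PL tail,
    # I(E) ~ exp[-(E - Ex)/(kB*Tx)]: Tx follows from the slope of ln I vs E.
    import numpy as np

    KB_EV = 8.617e-5  # Boltzmann constant in eV/K

    def exciton_temperature(energies_ev, intensities):
        """Slope of ln(I) vs photon energy on the wing gives -1/(kB*Tx)."""
        slope, _intercept = np.polyfit(energies_ev, np.log(intensities), 1)
        return -1.0 / (KB_EV * slope)

    # Consistency check of the quoted figures: kB x 151 K is about 13 meV.
    print(f"{KB_EV * 151 * 1e3:.1f} meV")  # -> 13.0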
2019-04-12T19:16:34.494Z
2009-05-12T00:00:00.000
{ "year": 2009, "sha1": "7d43aa2a186eed92e9d01d74bdd176ab5327b19b", "oa_license": null, "oa_url": null, "oa_status": "CLOSED", "pdf_src": "Arxiv", "pdf_hash": "7d43aa2a186eed92e9d01d74bdd176ab5327b19b", "s2fieldsofstudy": [ "Physics", "Materials Science" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
145847423
pes2o/s2orc
v3-fos-license
Laser systems for time-resolved experiments at the Pohang Accelerator Laboratory X-ray Free-Electron Laser beamlines
The experimental lasers at the PAL-XFEL beamlines, from the source to the sample position, are described.
Introduction
The intense, ultrashort and highly coherent pulses from X-ray free-electron lasers (XFELs) have opened new fields of ultrafine and ultrafast X-ray science in physics, chemistry and biology (Bergmann et al., 2017). In particular, direct observation of atomic-scale changes such as the formation/dissociation of chemical bonds (Kim et al., 2015; Suga et al., 2017), collective atomic motions in solids (Fritz et al., 2007; Gerber et al., 2017) and phase transitions (Gaudin et al., 2012) has become possible with femtosecond-scale time resolution. Time-resolved X-ray experiments are typically performed with the pump-probe technique (Minitti et al., 2015). Any method of initiating dynamic processes in matter may act as a pump (Jo et al., 2011; Wang et al., 2014); among these, femtosecond laser pulses are the most accessible tool for making instantaneous changes in materials. In this case, the XFEL probes the photo-induced change caused by the pump pulse after some time delay, t. The laser system and its beam-delivery paths should be located and maintained in a controlled environment for reliable provision of optical pulses in the experiments. In addition, the optical laser and the XFEL must be precisely synchronized so as not to deteriorate the experimental time resolution. Finally, optical laser pulses tuned to the absorption wavelength of the sample have to be delivered to certain interaction points along the XFEL beamline. In this paper, we report on the optical laser systems for the time-resolved XFEL experiments currently proceeding at the Pohang Accelerator Laboratory (PAL) beamlines. We describe the paths from the laser output to each sample position and the available laser parameters, as well as the synchronization scheme and the experimental conditions in terms of the optical laser.
Laser facilities at PAL-XFEL beamlines
Each beamline currently operating at the PAL-XFEL has a dedicated optical laser system for time-resolved XFEL experiments. The laser systems in the beamlines have similar configurations, consisting of a Ti:sapphire oscillator (Vitara-T, Coherent, Inc.) and a regenerative amplifier followed by a single-pass amplifier (Legend Elite DUO HE, Coherent, Inc.). To minimize instabilities of the system, all components except the pulse compressor are positioned in a cleanroom environment whose temperature, relative humidity at the optical table, and particle density are maintained at 24°C ± 0.5°C, 45% ± 5% and Class 10 000 (ISO 7), respectively. Amplified, uncompressed laser pulses are delivered from the laser room to the beamline through an evacuated tube. Therefore, the beam path is safely isolated from the outside, and nonlinear effects as well as spectral and temporal distortions caused by high peak power can be suppressed. The intrinsic beam divergence and position instability of the Ti:sapphire laser could lead to poor beam quality and fluctuation at a distant point. Therefore, we employ telecentric relay imaging, using a concave-convex doublet (each formed with a concave and a convex lens) at each end of the evacuated beam tube, and active beam-pointing stabilization during beam transportation; a sketch of the relay-imaging relation is given below.
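As a reminder of why relay imaging preserves the beam at the far end of the tube, the textbook plane-to-plane ray-transfer matrix of a telecentric (4f) relay, built from lenses of focal lengths $f_1$ and $f_2$ separated by $f_1 + f_2$, is (this is the generic relation, not the beamline's specific optical prescription):

    M = \begin{pmatrix} -f_2/f_1 & 0 \\ 0 & -f_1/f_2 \end{pmatrix},

so a ray of height $x$ and angle $\theta$ at the input plane arrives with height $-(f_2/f_1)\,x$ and angle $-(f_1/f_2)\,\theta$: position and angle do not mix, and the input plane is re-imaged with magnification $f_2/f_1$, independent of the beam's intrinsic divergence.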
A pulse compressor, an optical delay line and frequency-conversion units such as a harmonic generator and an optical parametric amplifier (OPA) are located in a light-tight enclosure at the beamline. Finally, femtosecond laser pulses with appropriate wavelength and intensity excite the sample at the interaction point. Specifications of the laser systems currently operated and the laser parameters at the sample positions are provided in Table 1.

Table 1. Specifications of the Ti:sapphire laser systems at the beamlines and their available laser parameters.

Optical lasers for hard X-ray beamlines

The PAL-XFEL operates two hard X-ray (HX) beamlines for X-ray scattering and spectroscopy (XSS) and nano-crystallography and coherent imaging (NCI) research (Ko et al., 2017; Park et al., 2016; Kim et al., 2018). Two identical Ti:sapphire laser systems are installed in the HX laser room located above the NCI hutch structure. This allows the optical laser to go directly down to every laser booth in the hutches, where pulse compression, delay control and frequency conversion take place. In principle, one laser system is assigned per beamline and each laser system can be used as a backup for the other if necessary. The repetition rate of the amplified output can be selected among integer divisions of 120 Hz (Table 1). During 2018, the PAL-XFEL was operated at a repetition rate of up to 30 Hz, so the most frequently used rates of the optical laser were 15 and 30 Hz for typical optical-pump XFEL-probe experiments.

In the case of the HX optical laser systems, the transform-limited pulse duration is optimized at 100 fs (full width at half-maximum; FWHM). As shown in Fig. 1, the laser pulses, providing maximum pulse energies of 10 mJ after compression, are bifurcated after the delay stage (IMS600LM, Newport). The first optical path is for the 800 nm fundamental and its harmonics (HGS-T, Coherent, Inc.), with pulse energies up to 1 mJ and 0.7 mJ for second harmonic generation (SHG, 400 nm) and third harmonic generation (THG, 266 nm), respectively. The output pulse energy can be controlled through a motorized attenuator on the 800 nm beam path, consisting of an achromatic λ/2 plate and two reflective thin-film polarizers. The second path is for the OPA system (TOPAS Prime, Light Conversion Ltd) pumped by 3.5 mJ pulse⁻¹ centered at 800 nm. The OPA system is capable of providing tunable femtosecond pulses ranging from the ultraviolet (>240 nm) to the far-infrared (<20 µm). Currently, we provide OPA output up to 2600 nm (only for the XSS beamline) by using a frequency mixer (NirUVis, Light Conversion Ltd) in combination with the OPA system. A difference-frequency generation unit (NDFG, Light Conversion Ltd) capable of producing wavelengths up to 20 µm will be added later. Finally, before focusing to the sample position, the beam height is adjusted by a periscope to be equal to that of the XFEL.

Figure 1. Optical laser layout in the hard X-ray beamlines of the PAL-XFEL.

For time-resolved serial femtosecond crystallography (SFX) experiments we also provide a nanosecond Q-switched laser (Minilite II, Continuum) at the NCI beamline (not shown in Table 1). The available wavelengths are 1064 nm (<50 mJ pulse⁻¹) from the Nd:YAG laser, its harmonics (532 nm and 266 nm) and 355 nm, with repetition rates of up to 15 Hz. The energy fluctuation and typical pulse duration of the nanosecond laser are 2.6% (r.m.s.) and 5 ns ± 2 ns, respectively.
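The motorized attenuator described above (an achromatic λ/2 plate followed by thin-film polarizers) follows Malus's law: rotating the waveplate by θ rotates the polarization by 2θ, and the polarizer then passes cos²(2θ) of the power. A minimal sketch, assuming ideal lossless optics:

import numpy as np

def attenuator_transmission(theta_deg):
    # a half-wave plate at angle theta rotates linear polarization by 2*theta;
    # an ideal polarizer then transmits cos^2(2*theta) of the power
    theta = np.deg2rad(theta_deg)
    return np.cos(2 * theta) ** 2

for theta in (0, 15, 30, 45):
    print(f"waveplate at {theta:2d} deg -> T = {attenuator_transmission(theta):.3f}")
# 0 deg -> 1.000 (full power), 45 deg -> 0.000 (full extinction)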
Optical lasers for a soft X-ray beamline

The pump-probe experiments at the soft X-ray (SX) scattering and spectroscopy (SSS) beamline, which includes the X-ray absorption spectroscopy/X-ray emission spectroscopy (XAS/XES) station (Park, Kim, Min et al., 2018) and the resonant SX scattering (RSXS) station, are supported by a 4 mJ Ti:sapphire laser system installed in the laser room located at the most downstream end of the SX experimental hall. After the external pulse compressor at the experimental hall, the optical laser provides a transform-limited pulse duration of 40 fs (FWHM) operated at up to 1.08 kHz. Specifically, the repetition rate of the optical laser can be selected among integer divisions of 1.08 kHz through control of the pulse slicer outside the regenerative-amplifier cavity.

Fig. 2 depicts the optical layout of the XAS/XES end-station in the SX beamline. After the external pulse compressor, two optical paths can be selected via removable mirrors. The first optical path is used for delivering typical pump sources (i.e. 800 nm, 400 nm and 266 nm) through a laser in-coupling chamber. The pump intensity is adjusted with a Watt Pilot motorized attenuator (Altechna). The pump delay line (IMS600LM, Newport) has a travel range of 600 mm, which corresponds to a time delay of 4 ns. A second path (under development) is planned for a vacuum ultraviolet (VUV) source in the range 20-100 eV based on the high harmonic generation (HHG) process (Park, Kim, Min et al., 2018). A beam splitter after the HHG delay line (ILS300LM, Newport) equally bifurcates the 800 nm pulses so that one arm can share the path for the pump beam, including the pump delay line. Finally, at the XAS/XES end-station, various combinations of two-color time-resolved experiments will be possible by utilizing SX FEL, optical laser and VUV pulses. Regardless of the optical path, before the laser in-coupling chamber (or the end-stations), the beam height is adjusted by a periscope to be equal to that of the XFEL. Meanwhile, the RSXS end-station is in preparation at the SSS beamline and the optical laser setup will be integrated accordingly.

Figure 2. Optical laser layout in the soft X-ray XAS/XES end-station of the PAL-XFEL.

Synchronization with XFEL

To achieve the highest experimental time resolution, the relative phase between the XFEL and the optical laser should be fixed as precisely as possible. The operation scheme is shown in Fig. 3. All laser systems in the PAL-XFEL beamlines are synchronized with an S-band radiofrequency (RF) clock of 2856 MHz and the event-timing system phase-locked to the master clock of 476 MHz with a line frequency of 60 Hz. Because the event timing is related to the XFEL beam rate, one of the frequencies from the event receiver acts as an external trigger for the laser amplifier. The HX laser systems use 120 Hz, whereas the SX laser system uses 1.08 kHz, i.e. triple the 360 Hz fundamental event clock. The S-band RF is used as a reference for cavity-length feedback, allowing the oscillator to operate at 79.33 MHz, i.e. 1/36 of the RF (Min et al., 2016). For the HX beamlines, the RF (2856 MHz) is directly distributed through a commercial reference-clock transfer system (Libera Sync 3, Instrumentation Technologies) with two single-mode fibers, which compensates for phase drift caused by environmental changes (Zorzut et al., 2015); the bare fiber link otherwise exhibits a temperature-dependent phase drift of 130 fs m⁻¹ K⁻¹.
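The quoted frequency relationships can be verified with a few lines of arithmetic; the divisor set used for the HX repetition rates below is illustrative:

s_band = 2856e6            # S-band RF reference, Hz
master = 476e6             # master clock, Hz

print(s_band / 36 / 1e6)   # 79.333... MHz oscillator repetition rate
print(s_band / master)     # 6.0: the S-band RF is the 6th harmonic of the master clock

event = 360.0              # fundamental event clock, Hz (6 x 60 Hz line frequency)
print(3 * event)           # 1080.0 Hz = 1.08 kHz, the SX amplifier rate

# HX amplifier rates: integer divisions of 120 Hz
print([120 // n for n in (1, 2, 4, 8)])   # [120, 60, 30, 15] Hz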
Additionally, we measured the daily temperature variation of the RF link to be 0.08°C (peak-to-peak) in the 100 m-long SX undulator section, whereas it was measured to be 0.01°C (peak-to-peak) in the 90 m-long SX experimental hall. Overall, ~1 ps of phase drift has been observed over 10 h during experiments (data not shown). For the SX beamline, we plan to implement the same synchronization scheme as used in the HX beamlines. A balanced optical and microwave phase detector (BOM-PD) built in-house based on a Sagnac interferometer is used for the cavity-length feedback of the Ti:sapphire oscillators (Kim et al., 2006). It compares the phase error between the RF reference and the oscillator output and controls the cavity length to minimize the error signal. The residual phase jitter from 1 Hz to 100 kHz was 14 fs as measured by the out-of-loop method (Min et al., 2016).

Figure 3. Synchronization map between the experimental laser systems and the PAL-XFEL timing system. DRO: dielectric resonator oscillator; BOM-PD: balanced optical and microwave phase detector.

Stability of the optical laser

Although the optical laser and the XFEL are synchronized with femtosecond-level precision, temporal jitter always exists between them at the sample position in the beamline because the optical paths and noise sources differ. The arrival time of the XFEL relative to the optical laser was measured at the XSS beamline using the spectral encoding method (Harmand et al., 2013; Bionta et al., 2014). Briefly, a 2 µm Si3N4 membrane was pumped by the XFEL pulse with a flux density of 370 mJ cm⁻² at 7.0 keV and probed with a white-light continuum generated by 800 nm, 100 fs optical pulses. The induced transmission change was recorded by a spectroscopic charge-coupled device (iVAC 324 FI, Andor Technology) after a spectrograph (SP-2300i, Princeton Instruments) with a 500 nm blazed grating at 300 lines mm⁻¹. Fig. 4(a) presents the timing-jitter statistics for 6000 XFEL shots, for which the measured width of the jitter was 42 fs (FWHM). The instrument response function extracted from the time-resolved diffraction measurement for thin-film Bi(111) was 137 fs (FWHM) without jitter correction, which corresponds well with the value estimated from the pulse durations, the arrival-time jitter and the geometrical factor.

The position stability of the optical laser is another key factor in pump-probe experiments. The beam position at the sample is easily affected by environmental changes, as the optical laser in the beamline passes through numerous optics along an optical path of at least 15 m at this facility. Moreover, a slightly misaligned beam path or wobble in the mechanical delay stage may lead to considerable drift of the beam position. We further minimized the drift by adding a beam-stabilization loop (Compact, MRC Systems GmbH) after the delay line. The position stability was measured for 1 h at the sample position of the XSS beamline. The beam focused by a 1.5 m lens was imaged with a long-distance microscope and a camera. The imaging system was calibrated using a 300 µm pinhole. No notable variation of the spatial beam profile was observed during the measurement. Finally, the beam position was extracted from the intensity-based center of mass of the profile. The angular stability [Fig. 4(b)] at the sample position was 4.7 µrad (H) and 3.2 µrad (V) r.m.s., which corresponds to less than 3% of the 250 µm (1/e² width) spot size.
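The intensity-based centre-of-mass extraction used in the position-stability analysis is straightforward to sketch; the pixel size and the synthetic Gaussian spot below are assumptions for illustration, not the actual camera calibration:

import numpy as np

def beam_centroid(image, pixel_size_um):
    # intensity-weighted centre of mass of a background-subtracted profile
    image = np.clip(image, 0, None)        # clip negative noise pixels
    total = image.sum()
    ys, xs = np.indices(image.shape)
    return ((xs * image).sum() / total * pixel_size_um,
            (ys * image).sum() / total * pixel_size_um)

# synthetic spot with a 250 um 1/e^2 width (w = 125 um), as quoted in the text
pix = 5.0                                  # assumed um per pixel
ys, xs = np.indices((200, 200))
w = 125.0 / pix
spot = np.exp(-2 * ((xs - 90) ** 2 + (ys - 110) ** 2) / w ** 2)
print(beam_centroid(spot, pix))            # ~ (450.0, 550.0) um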
Overlap with XFEL

Prior to the pump-probe experiments, the optical laser and the XFEL must be brought into both temporal and spatial overlap at a given point on the sample. The pump and probe pulses are simultaneously detected by a fast GaAs photodiode (G4176, Hamamatsu) with a rise time of 30 ps. While monitoring with a 12.5 GHz oscilloscope (DPO71254C, Tektronix), the coarse temporal overlap can be set through a time-delay stage after phase-locking of the BOM-PD. To reduce ambiguity, this process is repeated whenever the wavelength is changed. Once the spatial overlap is set at the interaction point, the time-zero position can be found by scanning transient responses from reference materials, e.g. Bi(111), YAG or Si3N4. In the case of liquid-phase samples, a change of the wide-angle X-ray scattering signal from the solvent induced by the optical pump is often useful. Finally, the spatial overlap can be optimized further by monitoring the transient-signal amplitude in real time or by comparing damaged spots on a wafer at the sample plane.

Conclusion

During the second half of 2018, the optical pump laser became accessible for all time-resolved experiments at the PAL-XFEL beamlines. In addition to the laser wavelengths currently provided, we aim to extend the available wavelengths to the far-infrared and further to terahertz radiation at the XSS beamline, and UV to near-infrared OPAs will also be added to the NCI and SSS beamlines. Thanks to highly accurate synchronization and low timing jitter between the XFEL and the optical laser, experimental time resolutions of less than 150 fs have been achieved without jitter correction. Additional beam-position feedback before the sample made it possible to maintain spatial overlap without considerable position drift over the user beam time. We plan to apply feedback devices to all beamlines of the PAL-XFEL. Finally, we are going to introduce a high-resolution RF phase shifter for the laser oscillator, with which we expect to be able to correct for the slow time drift that can occur during an experiment through an additional delay control in combination with arrival-timing diagnostics.
Encapsulation of Chemotherapeutic Drug Melphalan in Cucurbit[7]uril: Effects on Its Alkylating Activity, Hydrolysis, and Cytotoxicity

The formation of inclusion complexes between drugs and macrocycles has proven to be an effective strategy to increase solubilization and stabilization of the drug, while in several cases improving its biological activity. In this context, we explored the formation of an inclusion complex between the chemotherapeutic drug Melphalan (Mel) and cucurbit[7]uril (CB[7]), and studied its effect on Mel alkylating activity, hydrolysis, and cytotoxicity. The formation of the inclusion complex (Mel@CB[7]) was proven by absorption and fluorescence spectroscopy, NMR, docking studies, and molecular dynamics simulations. The binding constant for Mel and CB[7] was fairly high at pH 1 ((1.7 ± 0.7) × 10⁶ M⁻¹), whereas no binding was observed at neutral pH. The Mel@CB[7] complex showed a slightly decreased alkylating activity, whereas the cytotoxicity on the HL-60 cell line was maintained. The formation of the complex did not protect Mel from hydrolysis, and this result is discussed based on the simulated structure of the complex.

INTRODUCTION

Melphalan (Mel, Scheme 1) is an antineoplastic drug indicated for the treatment of multiple myeloma and other types of cancer. 1,2 Being a drug of the family of nitrogen mustards, its antitumoral effect is related to the alkylation of DNA. 3−5 Mel is practically insoluble in water at neutral pH, and it rapidly hydrolyzes in biological media, factors that have an impact on its usability. In this context, a supramolecular approach to improving the overall drug performance, such as the use of cyclodextrins, 6,7 is interesting to explore due to its demonstrated success in pharmaceutical formulations. For example, Evomela is an injectable formulation of Mel that uses a modified β-cyclodextrin (Captisol) to improve its solubility and stability by the formation of an inclusion complex. 8 Cyclodextrins show in general low binding affinities, which is their main downside as supramolecular solubilizing agents. 6

More recently, the family of cucurbit[n]uril macrocycles (CB[n]s, Scheme 1) has emerged as promising candidates for drug-delivery applications. 9−11 Some characteristics that make CB[n]s notable are their low toxicities, solubilizing properties, high thermal stability, high binding affinities, and good solubility in biological fluids. 6,9,10,12−16 Previously, a report by Isaacs and collaborators showed that Mel and other alkylating agents can be efficiently solubilized by acyclic CB[n]s, 17 which are very versatile in the binding of several molecules of biomedical relevance. Nevertheless, there is no information regarding how complexation within these macrocycles could affect their stability, alkylating activity, and cytotoxicity. Complexation of drugs by CB[n]s has stimulated much interest over the past decade, 9,10,12,18,19 and there are several interesting reports of how complexation affects bioactivity and/or biodistribution. 11,14−16,20 Therefore, we were interested in investigating the formation of a supramolecular complex between Mel and CB[n]s and whether this process would stabilize the drug, as has been shown for several other drugs, 21 while maintaining its alkylating activity and cytotoxic effects.
For this study, we chose cucurbit[7]uril (CB[7]), which has a cavity size comparable to that of β-cyclodextrin and has been shown to encapsulate phenylalanine, 22,23 which is structurally related to Mel.

RESULTS AND DISCUSSION

Absorption spectra of Mel in the absence and presence of increasing concentrations of CB[7] show a marked decrease in the absorption band at 260 nm (Figure 1), which is consistent with the encapsulation of the drug. It must be noted that these changes were observed at pH 1 (0.1 M HCl), and no such changes were observed at pH 7 (see Figure S1 in the Supporting Information). These results indicate that the protonation state of Mel is essential for binding. Mel possesses three pKa values, for the 2-chloroethylamino, α-carboxylic, and α-amino groups, of 1.42, 2.75, and 9.17, respectively. 24 Because binding was observed only at pH 1, the protonation of the 2-chloroethylamino and α-amino groups seems to be essential for a strong binding to the macrocycle. This observation is consistent with previous reports about the cation−dipole interactions between the guest and the CB[n] portals, which are lined with carbonyl groups. 21,23,25 It is important to note that this interaction with the portals could lead to pKa shifts when the drug is encapsulated inside CB[n]s, 26−28 which was not evaluated in this work. The fact that there is no appreciable binding at pH 7 (zwitterionic species) could be related to a destabilization of the complex because of repulsive interactions with the negative charge density of the carbonyl groups at the portals of CB[7].

Fluorescence emission spectra also showed a noticeable decrease in intensity as the concentration of CB[7] in the sample increased (Figure 1, inset), which further supports that the formation of an inclusion complex with CB[7] is taking place. The binding constant for the Mel@CB[7] complex obtained from the fluorescence titrations was (1.7 ± 0.7) × 10⁶ M⁻¹ (Figure 2). The value of the binding constant with CB[7] is fairly high and falls within the range reported for several benzimidazole-derived drugs. 12 This binding constant (K11) can be related to the solubilizing capacity of the macrocycle by a phase-solubility diagram ([drug] vs [macrocycle]) assuming a 1:1 binding, as depicted by eq 1:

K11 = slope/[S0(1 − slope)] (1)

where S0 refers to the intrinsic solubility of the drug and the slope is obtained from the linear fit of the phase-solubility data. 29 The solubility of Mel hydrochloride is reported to be 3.11 mg mL⁻¹; 30 thus, considering the K11 obtained for CB[7] in this work, the simulated slope would be unity. This means that CB[7] is a very good solubilizing agent for Mel, close to the slopes reported for acyclic CB[n]s (0.81−1.2). 17 In comparison, (SBE)7m-β-CD (Captisol), which possesses a binding constant of 142.7 M⁻¹ with Mel (from a phase-solubility diagram), has a simulated slope of 0.6. 30

The inclusion of Mel inside the cavity of CB[7] is further supported by the ¹H NMR spectra (Figure 3), which show strong downfield shifts for the Mel aromatic hydrogens, whereas the signals for the α-carbon hydrogen and the 2-chloroethyl protons are unchanged (see Figures S2 and S3 in the Supporting Information for the assignment). Residual peaks from the solvent at around 3.2 ppm prevent the observation of the hydrogens of the methylene group; however, it is clear from the spectra that the aromatic ring is placed inside the cavity, whereas the rest of the molecule sits outside of the macrocycle.
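A minimal numerical sketch of the 1:1 binding model underlying these titrations, and of the phase-solubility slope of eq 1, is given below. Because K is large compared with the micromolar concentrations used, the exact quadratic solution for the complex concentration is needed rather than the dilute approximation; the molar mass used for S0 is an assumed value for Mel hydrochloride:

import numpy as np

def complex_conc(h0, g0, K):
    # exact 1:1 host-guest complex concentration (all quantities in M)
    b = h0 + g0 + 1.0 / K
    return 0.5 * (b - np.sqrt(b * b - 4.0 * h0 * g0))

K = 1.7e6                        # M^-1, value determined in this work
mel0 = 16e-6                     # M, fixed Mel concentration in the titration
cb7 = np.linspace(1e-9, 50e-6, 6)
print(complex_conc(cb7, mel0, K) / mel0)   # bound fraction rises nearly stoichiometrically

# phase-solubility slope for 1:1 binding: slope = K*S0 / (1 + K*S0)
S0 = 3.11 / 341.7                # 3.11 mg/mL; ~341.7 g/mol assumed for Mel-HCl
print(K * S0 / (1 + K * S0))         # ~1.0 for CB[7], as stated in the text
print(142.7 * S0 / (1 + 142.7 * S0)) # ~0.56 for Captisol, close to the reported 0.6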
This inclusion mode is consistent with molecular docking studies, which show preferential inclusion of the aromatic portion of the molecule inside CB[7], with the 2-chloroethylamino group and the α-carbon groups sticking out through the portals (Figure 4). The complex shows a favorable binding energy (−5.64 kcal mol⁻¹), which is in line with the high binding constant determined experimentally. It must be emphasized that the value of the binding energy is relative and cannot be correlated directly with the value of the binding constant; however, it is a good indication that the complex is fairly stable. The simulations show that the complex is stabilized by three hydrogen bonds with the carbonyl groups (Figure 4), although hydrophobic interactions and cation−dipole interactions between the protonated amino groups and the portals certainly contribute to the binding. The docking studies show that the formation of the complex is less favorable at pH 7 than at pH 1 (see Figure S4 in the Supporting Information), but weak interactions in solution cannot be completely ruled out.

Because solvation can largely affect the formation of the complex and its conformation over time, molecular dynamics (MD) simulations were performed for 10 ns to assess the stability of the complex (Figure 5). The results show that Mel remains inside the cavity of CB[7] for the duration of the simulation 31,32 and that at least one hydrogen bond is retained throughout the entire time, with sporadic additional hydrogen bonds being formed. It is interesting to note that the conformation of the complex changes very little during the simulation and that the 2-chloroethylamino group is always positioned at the rim of CB[7]. This conformation would maintain the alkylating activity of Mel because this group is responsible for the alkylation of DNA bases. 3

Alkylating activity is essential for the chemotherapeutic effect of Mel. Therefore, we tested whether the complexation of Mel by CB[7] altered this property by following the generation of a colored product at 545 nm after reaction with 4-(4-nitrobenzyl)pyridine (NBP), which is based on the alkylation of the pyridine moiety of the reagent giving a chromophoric product at basic pH. 33 The results in Figure 6 show that there is a slight decrease in the relative alkylating activity of Mel when included inside the cavity of CB[7]; however, this effect is minor. These results agree with the binding mode discussed above from NMR, docking studies, and MD simulations, where the 2-chloroethylamino group is located on the outside of the macrocycle, protruding through one of the portals. Therefore, the alkylating activity is roughly maintained.

The main problem that Mel has as a drug is its instability in aqueous media due to rapid hydrolysis at neutral pH. 24,34,35 Evomela is reported to be stable for 1 h after reconstitution at room temperature. 8,36 To assess whether complexation within CB[7] protected the drug from hydrolysis, we performed a series of experiments where Mel was incubated at physiological temperature, and subsequently the hydrolysis products were quantified by high-performance liquid chromatography (HPLC) based on previous reports from the literature. 24,35,37 In the case of the CB[7] complex, before the analysis, Mel was released from CB[7] using adamantylamine (ADA) as a competitor due to its high binding constant (1.2 × 10¹⁰ M⁻¹). 38
Release from the macrocycle is necessary for quantification because the extinction coefficient of the complex is lower than that of free Mel, as shown in Figure 1. In these experiments, it is also important to consider that Mel will hydrolyze somewhat during sample preparation and during the HPLC run. Therefore, control experiments were performed for non-incubated samples and the small amounts of hydrolyzed products detected were subtracted from the incubated samples (see Figures S5 and S6 in the Supporting Information). Loss of the chlorine atoms leads to their replacement by hydroxyl groups. Therefore, there are two main hydrolysis products, the monohydroxy (MOH) and the dihydroxy (DOH) derivatives of Mel, 24,39 though other products have been identified by mass spectrometry (MS). 35

The chromatogram in Figure 7 corresponds to a representative experiment, which shows that Mel incubation produces a single hydrolysis product with a retention time of 3.6 min. This product is the same for Mel and the Mel@CB[7] complex and was attributed to the MOH derivative based on mass spectral analysis (see Figures S7 and S8 in the Supporting Information). Note that the DOH derivative can be detected by MS but at a relatively low abundance, indicating that it is a minor product. Comparison of the integrated areas of the chromatogram peaks for Mel and MOH yielded a hydrolysis ratio of 15.7 ± 2.5% for Mel and 11.8 ± 2.7% for Mel@CB[7]. These two values are the same within error, indicating that CB[7] complexation does not protect Mel from hydrolysis. It is noteworthy that Mel hydrolysis is strongly pH-dependent and higher rates of hydrolysis are observed at neutral or basic pH. 24,39 This behavior is consistent with the proposed mechanism of hydrolysis, involving a nucleophilic attack of the unprotonated amino group toward one of the chlorine-bearing carbon atoms. 24,39 The results obtained herein are in line with the previous discussion about the binding mode within CB[7]; the exposure of the 2-chloroethylamino group to the solvent does not change its reactivity toward hydrolysis. This differs from the previously reported slowing of the rate of hydrolysis by Captisol, because in that case the 2-chloroethylamino group is embedded within the hydrophobic cavity of the macrocycle, 30 and this is clearly a limitation of the CB[7] complex. Although hydrolysis is not prevented, the alkylating activity was almost unmodified, which is still a good antecedent for its therapeutic action.

Finally, the cytotoxicity of Mel and the Mel@CB[7] complex was assessed in the human leukemia cell line HL-60 as a model for its therapeutic action. The cytotoxicity assays shown in Figure 8 revealed that there is no significant difference between the efficacies of Mel and its CB[7] complex in inducing cancer cell death. Samples in the presence of only CB[7] showed no cytotoxicity, as reported for several cell lines. 13,18 It is important to emphasize that even though the alkylating activity was slightly decreased and hydrolysis was not prevented, the Mel@CB[7] complex performs as well as the drug by itself, while CB[7] encapsulation offers enhanced solubility. One can speculate that, because the binding of Mel to CB[7] was observed only at acidic pH and not at pH 7, encapsulation could help improve drug delivery for an oral formulation of Mel, as the drug would be released after passing through the stomach.
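The hydrolysis quantification reduces to peak-area arithmetic with control subtraction; a sketch with invented areas chosen to reproduce the reported ~15.7%, assuming comparable extinction coefficients for Mel and MOH at the detection wavelength:

def hydrolysis_percent(area_moh, area_mel, area_moh_control=0.0):
    # hydrolysed fraction from HPLC peak areas at 260 nm, after subtracting
    # the MOH formed during sample handling and the HPLC run itself
    corrected = area_moh - area_moh_control
    return 100.0 * corrected / (corrected + area_mel)

print(hydrolysis_percent(area_moh=190.0, area_mel=1000.0, area_moh_control=3.7))
# -> 15.7 (illustrative numbers only)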
CONCLUSIONS

Mel was effectively encapsulated inside CB[7], which was demonstrated by changes in the absorption and fluorescence spectra, NMR, docking studies, and MD simulations. The binding mode corresponded to the inclusion of the aromatic ring inside the cavity, whereas the α-amino, α-carboxylic, and 2-chloroethylamino groups protruded through the portals. Stabilization of the complex was due to a combination of hydrogen bonding, hydrophobic interactions, and cation−dipole interactions. The protonation state of Mel was fundamental for the binding, which was observed experimentally only for the fully protonated form at pH 1. It must be emphasized that Mel hydrochloride is viable for an injectable formulation (Alkeran). Encapsulation of Mel inside CB[7] could hold promise for oral intake, where the complex might be stabilized. The formation of the Mel@CB[7] inclusion complex showed a slight decrease of the alkylating activity, but the cytotoxicity was not affected, as shown for the HL-60 cell line. On the other hand, hydrolysis was not prevented, unlike for the encapsulation of Mel in the β-cyclodextrin derivative (Evomela), and this is proposed to be due to the binding mode within the macrocycle. In CB[7], the aromatic ring is inside the cavity of the macrocycle with the 2-chloroethylamino group placed outside of the cavity, whereas for the β-cyclodextrin derivative this group remains inside the cavity, slowing down hydrolysis.

4.2. Sample Preparation. Stock solutions of Mel (1 mg mL⁻¹) were prepared by dissolving the drug in an ethanol/HCl solution (99:1). Diluted samples were prepared in 0.1 M HCl (pH 1) or 10 mM phosphate buffer, pH 7 (pH meter Hanna HI2221). Final concentrations were determined from their UV−vis absorption spectra using a molar extinction coefficient of (4.9 ± 0.2) × 10³ M⁻¹ cm⁻¹ at 260 nm in 0.1 M HCl, which was determined in this work. Stock solutions of CB[7] were prepared in water (≈1 mM) and titrated against a known concentration of Cob⁺ by UV−vis spectroscopy according to the method reported in the literature. 40 ADA stock solutions (10 mM) were prepared in water.

4.3. Absorption and Fluorescence Measurements. The association of Mel (16 μM) with CB[7] (0−50 μM) was measured by absorption and fluorescence spectroscopy. Absorption was measured on an HP8453 spectrophotometer using 1 cm pathlength cuvettes. Fluorescence emission spectra were obtained by exciting the samples at 260 nm (5 nm bandwidth) using an LS55 PerkinElmer fluorimeter. The temperature was kept at 25°C using a water bath. Binding isotherms built from the fluorescence data were adjusted using numerical analysis as reported previously. 31,32

4.4. NMR Measurements. Mel (2.5 mg) was dissolved in 500 μL of DCl/D2O (1:20) with the aid of sonication, in the absence or presence of 1 equiv of CB[7]. The NMR spectra were obtained using a Bruker Avance III HD instrument working at 400 MHz.

4.5. Structure Optimization and Molecular Docking. Mel in different protonation states and CB[7] were constructed using Gaussian 03 41 and optimized using the B3LYP method and the 6-31G** basis set. The partial charges of the compounds were corrected using the ESP methodology. Topology and parameters for all structures were obtained using the SwissParam server. 42 Molecular dockings of Mel inside CB[7] were carried out using the AutoDock 4.0 suite. 43 The grid maps were calculated using the autogrid4 subprogram and were located in the center of CB[7].
The volumes of the grid maps were 70 × 70 × 70 points with a grid-point spacing of 0.375 Å. The autotors option was used to define the rotatable bonds in the ligand. The following parameters were employed in the Lamarckian genetic algorithm dockings: an initial population of 1500 random individuals with a population size of 150 individuals, 2.5 × 10⁶ energy evaluations, a maximum number of 27 000 generations, a mutation rate of 0.02, and a cross-over rate of 0.80. The docked complexes were built by picking the lowest docked-energy binding positions with a relatively high number of conformations.

4.6. Molecular Dynamics Simulations. Mel@CB[7] complexes in different protonation states were solvated with the TIP3P water model and submitted to 10 ns MD simulations using an NPT ensemble. The calculations were performed using the NAMD 2.6 software. 44 Periodic boundary conditions were applied to the systems in the three coordinate directions. A pressure of 1 atm and a temperature of 298 K were maintained throughout the simulations.

4.7. Alkylating Activity. The alkylation induced by Mel was measured according to a protocol reported in the literature. 33 Briefly, 5 mL of 200 μM Mel in 0.2 M acetate buffer, pH 5, was mixed with 1.5 mL of a 10% NBP solution in methanol, and the mixture was incubated at 100°C for 30 min. The same procedure was adopted for a sample containing Mel and 1.5 equiv of CB[7]. Control experiments at pH 1 yielded the same results. After cooling for 15 min, the product was extracted with 3 mL of chloroform, then 3 mL of 3 M NaOH was added and the sample was vortexed thoroughly. After centrifugation at 1500 rpm, the absorbance of the chloroform layer was measured at 545 nm. The experiments were performed in triplicate.

4.8. Hydrolysis. The measurement of the degree of hydrolysis of Mel was adapted from previously reported methods. 24,35,37 Mel (100 μM) in 0.1 M HCl was incubated at 37°C for 3 h in the absence or presence of 1 equiv of CB[7]. After incubation, ADA (200 μM) was added to the samples containing CB[7] to release Mel from the macrocycle. Hydrolysis products were measured on a Hitachi Elite LaChrom HPLC system using an isocratic mobile phase of acetonitrile and 0.1% formic acid in water (32:68), an RP-18 endcapped column (5 μm, 250 × 4 mm, Merck), a 1 mL min⁻¹ flow, and a detection wavelength of 260 nm (L-2455 diode array detector). Control experiments with non-incubated samples were performed to take into account the hydrolysis of Mel during the analysis (preparation and HPLC column run), and the small amounts of hydrolyzed products detected were subtracted from the incubated samples.

4.9. Cytotoxicity Assay. HL-60 cells were grown in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum and 1% antibiotic/antimycotic in a humidified atmosphere of 5% CO2 at 37°C. Cells were seeded in 96-well plates at a density of 3 × 10⁵ cells/well. Mel or Mel@CB[7] was added at a final concentration of 200 μM and incubated for 24 h. After the treatment, cell viability was determined by the MTT assay (10% v/v of a 5 mg mL⁻¹ MTT solution was added to each well and incubated for 2 h). The formazan crystals formed by the reaction between metabolically active cells and MTT were then dissolved by adding a solution of 10% sodium dodecyl sulfate in 0.01 M HCl to each well. The plate was left overnight in an incubator before reading its absorbance at 570 nm using a Biotek Synergy HT microplate reader.
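The MTT readout is typically normalized against untreated controls after blank subtraction; this is a generic sketch with invented absorbances, not the authors' exact data pipeline:

import numpy as np

def viability_percent(a_treated, a_control, a_blank):
    # standard MTT normalisation of 570 nm absorbances
    return 100.0 * (np.asarray(a_treated) - a_blank) / (a_control - a_blank)

treated = [0.42, 0.45, 0.40]   # hypothetical triplicate wells
print(viability_percent(treated, a_control=1.10, a_blank=0.08))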
Author Contributions

The manuscript was written through contributions of all authors. All authors have given approval to the final version of the manuscript.
Composition dependence of electronic defects in CuGaS2

CuGaS2 films grown by physical vapour deposition have been studied by photoluminescence (PL) spectroscopy, using excitation-intensity- and temperature-dependent analyses. We observe free and bound exciton recombinations, three donor-to-acceptor (DA) transitions, and deep-level transitions. The DA transitions at ~2.41 eV, 2.398 eV and ~2.29 eV are attributed to a common donor level at ~38 ± 5 meV and three shallow acceptors at ~76 meV, ~90 meV and 210 meV above the valence band. This electronic structure is very similar to that of other chalcopyrite materials. The donor-acceptor transitions are accompanied by phonon replicas. Cu-rich and near-stoichiometric material is dominated by the transitions due to the acceptor at 210 meV. All films show deep-level transitions at ~2.15 eV and 1.85 eV due to broad deep defect bands. Slightly Cu-deficient films are dominated by intense transitions at ~2.45 eV, attributed to excitonic transitions, and by the broad defect transition at 2.15 eV.

Introduction

Cu(In,Ga)S2 (CIGS) is not only a promising material for a single-junction solar cell, but also a strong candidate as a top cell in tandem applications to absorb the high-energy photons of the solar spectrum [1-3]. An efficiency of 16% has been reported by Barreau et al. [4], considerably lower than the efficiencies of the selenide chalcopyrites Cu(In,Ga)Se2, which have reached 23.6% (M. Edoff et al., in preparation) [5]. In particular, Cu(In,Ga)S2 suffers from a high deficit in VOC [6-10]. This deficit is partly due to interface recombination, which can be mitigated by the correct choice of buffer layer, but also to a large part due to non-radiative recombination in the absorber bulk [10]. It is therefore essential to study the electronic defect structure of this semiconductor.

The selenide chalcopyrite Cu(In,Ga)Se2, together with the ternaries CuInSe2 and CuGaSe2, has been intensely studied and an understanding of the electronic structure and of the impact of composition has been established [11-18]. Accordingly, shallow donor and acceptor levels, as well as deep defects, have been identified in both CuInSe2 and CuGaSe2. It is particularly interesting to compare the wide-gap compound CuGaSe2 with the low-gap material CuInSe2. For the wide-bandgap CuGaSe2, Spindler et al. have reported that defect levels shift toward mid-gap and that defects which were shallow in CuInSe2 become deeper in CuGaSe2 [14]. Thus, as Ga is substituted for In, shallow defects become deeper and hence form deep levels which serve as channels for unwanted non-radiative recombination in Cu(In,Ga)Se2 absorbers [11-14, 19, 20]. The electronic defects in Cu(In,Ga)S2 are much less studied than those of the selenide chalcopyrites [21-25].

Figure 1: Overview of transition energies of CuGaS2 identified in previous studies.

However, it has been shown that the shallow defects in ternary CuInS2 are similar to those in the selenide chalcopyrites, with three shallow acceptors and one shallow donor, plus two deep broad defect bands close to 0.8 eV and 1.1 eV [9,26]. For CuGaS2, a comprehensive electronic defect structure is still missing [27-34]. Since high-efficiency sulfide chalcopyrite solar cells require the inclusion of Ga [9], it is essential to study the defect structure of CuGaS2.
Previous reports on defects in CuGaS2 have identified one or two donor-acceptor (DA) transitions around 2.39-2.41 eV, and these transitions were attributed to a common shallow donor around 20-25 meV [30,35]. Earlier studies also reported a shallow donor energy level at 45-50 meV [27,28]. In addition, deep-level transitions are observed at 2.1 eV and around 1.7 eV to 1.8 eV [32,36]. An additional deep transition around 2.3 eV has been identified as either a DA transition or due to a deep defect. The photoluminescence transitions of CuGaS2 reported in the literature are summarized in Fig. 1.

In this work, photoluminescence spectroscopy has been performed on CuGaS2 films grown by physical vapour deposition to understand the electronic defect structure. This report will conclude by presenting a novel solar cell based on a Cu-rich CuGaS2 absorber. A deeper understanding of the electronic defect structure in CuGaS2 will also enhance the understanding of the role of Ga in Cu(In,Ga)S2 films and solar cells.

Deposition process for CuGaS2 films

The polycrystalline CuGaS2 films investigated in this work were deposited by one-stage co-evaporation of elemental copper and gallium, with source temperatures of ~1250°C and ~1150°C, respectively, under a sulfur pressure between 5.9 × 10⁻⁵ mbar and 8.5 × 10⁻⁵ mbar. The various compositions of the CuGaS2 films were obtained by changing the temperatures of the elemental sources and thus the fluxes of Ga and Cu. The deposition was on molybdenum-coated high-temperature glass, which has better heat resistance than soda-lime glass [37], at an actual substrate temperature of ~690°C. Such a high substrate temperature is necessary to obtain high-quality Ga-containing films, particularly for pure CuGaS2 [38-40]. This is partly due to: (i) the slow elemental migration and reaction of Ga relative to In, and the relatively high melting point of Ga-based samples compared to In-based samples [38,41,42], as seen in the Cu2S-In2S3 and Cu2S-Ga2S3 phase diagrams [38,42]; (ii) the higher melting point of sulfides compared to selenides due to the lower atomic weight of S compared to Se [43-45].

The crystallinity, phases and vibrational properties of the films were characterized by X-ray diffraction (XRD) using Cu Kα radiation, and by Raman spectroscopy with an excitation wavelength of 532 nm. The surface morphology and cross-section micrographs were obtained by a scanning electron microscope (SEM), and the chemical composition was determined by energy-dispersive X-ray spectroscopy (EDX) with a beam energy of 20 kV on as-grown films before etching. Therefore, the compositional ratio of Cu-rich films mentioned in this report is an integration over the ternary chalcopyrite phase and the secondary copper sulfide (CuxS) phase. As such, "Cu-poor" refers to material with a ratio [Cu]/[Ga] ≤ 1, while "Cu-rich" refers to [Cu]/[Ga] ≥ 1. The Cu-excess phases were removed by etching in an aqueous solution of 10% potassium cyanide (KCN) for 5 minutes [46] before the photoluminescence measurements.
Lower substrate temperatures during the deposition of the CuGaS2 films resulted in poor-quality films showing unidentifiable crystallographic phases among those close to CuGa3S5 and CuGa5S8 [47-49]. This shares a similarity with CuGaSe2 when deposited at low temperatures [50]. Conversely, at these high deposition temperatures, group VI elements such as sulfur and selenium have low sticking coefficients and are extremely volatile, which increases the possibility and the rate of sulfur loss and re-evaporation [34,51-53], necessitating a high pressure of sulfur during growth. The growth parameters of films grown at various sulfur vapour pressures and substrate temperatures, labeled G1-G4, are presented in Table 1; the X-ray diffractograms of the films are shown in Fig. 2.

Analysis of the chemical composition of film G1, deposited at an actual substrate temperature of 600°C and a chamber pressure of 5.9 × 10⁻⁵ mbar, showed that the [S]/([Cu]+[Ga]) ratio was 0.69. From the X-ray diffractogram in Fig. 2, the deficiency of sulfur in G1 promoted the growth of a γ-Cu9Ga4 phase. An increase of the growth temperature to 620°C in G2 minutely increased the S content to 0.71 and slightly decreased the γ-Cu9Ga4 phase. Ultimately, by simultaneously increasing both the deposition temperature and the chamber pressure to ~690°C and 5.9 × 10⁻⁵ mbar, respectively, the S content increased to 1.0 and the unwanted γ-Cu9Ga4 phase was suppressed. Hence, the deposition of CuGaS2 requires a larger S overpressure [27] than would be needed for pure CuInS2 or Cu(In,Ga)S2 [39,54], in order to mitigate sulfur loss. Consequently, during the CuGaS2 deposition process, the sulfur pressure in the chamber is maintained in the range of 5.9 × 10⁻⁵ mbar to 8.5 × 10⁻⁵ mbar. A deposition time of ~2 hours is used to achieve thicknesses of approximately 2 μm.

Effect of growth conditions on the structural properties of the films

Before presenting and discussing the optical characterization of the different spectral regions of the films, it is imperative to ascertain the quality of the films under investigation. Hence, in the following section, the material characterization in terms of composition analyses, preferential chalcopyrite orientation, crystallinity and microstructural properties obtained from SEM-EDX, XRD and Raman analyses will be examined.

The chemical composition of the different films studied, as analyzed by EDX, lies between [Cu]/[Ga] = 0.94 and 2.0. The micrograph of the Cu-poor film shows a rough granular surface with pyramidal grains which are compact and well-connected to the Mo back-contact. On the other hand, both micrographs in Fig. 3a and Fig. 3c show that the Cu-rich films featured smoother surfaces with larger and denser grains. This is in accordance with other chalcopyrite compounds, where it is well established that copper excess promotes the formation of large grain sizes and improves crystallinity [55-59].

Additionally, the high deposition temperature and pressure could have contributed to the quality of both Cu-rich and Cu-poor films, as these conditions foster effective nucleation and improve the quality of the grain growth [34,37]. The characteristic crystallographic orientation of the prevalent phase in the layers obtained from XRD is depicted in the diffractogram in Fig. 4a. The ICDD reference pattern (PDF 00-025-0279) is used to index the peaks. The most prominent peak is the (112) reflection of CuGaS2. The peak at 41° in Fig. 4a is due to the Mo back contact.
A measure of the crystal quality is manifested in the splitting of the (220) and (204) peaks, resulting from the tetragonal distortion occurring in the chalcopyrite unit cell. The crystal quality of the films investigated is also corroborated by the absence of secondary phases in Fig. 4a, besides the CuS that is expected in a Cu-rich film.

Fig. 4b shows the Raman spectrum of a Cu-rich CuGaS2 film. The dominant line at 310 cm⁻¹ is the A1 mode, which corresponds to the vibration of the sulfur (or group VI) atom [60,61]. This mode is also the dominant Raman mode in other chalcopyrite compounds such as CuInS2, CuInSe2 and CuGaSe2 [62,63]. The other less intense but notable Raman-active modes appearing at 276 cm⁻¹, 364 cm⁻¹ and 384 cm⁻¹ correspond to the highest longitudinal optical phonon modes [60], while the peak at 408 cm⁻¹ has been attributed to MoS2 [64]. Since Raman spectroscopy is surface-sensitive [65], the detection of the MoS2 peak could be due to holes in the film. The impact of these modes will be revisited in relation to the phonon replicas observed in the PL spectra discussed in the subsequent sections. The absence of any characteristic secondary-phase mode is a further indication of the high quality of the CuGaS2 film. To summarize, the results from the SEM-EDX, XRD and Raman analyses of the CuGaS2 thin films investigated attest to the good quality of the films.

Photoluminescence features of CuGaS2 at low temperature

First, a summary of the PL spectra of CuGaS2 with varying compositional ratios is presented together with the attributions of the different observed transitions. Afterwards, the methods used in analyzing and assigning the different peaks to specific transitions are discussed. Fig. 5 shows an overview of different CuGaS2 PL spectra by composition at 10 K. The spectra feature (i) near-band-edge emissions with sharp, intense excitonic (EXC) peaks around 2.48 eV, 2.49 eV and 2.502 eV; (ii) shallow defect-related emissions between 2.25 eV and 2.45 eV: several free-to-bound (FB) and donor-acceptor (DA) transitions with their phonon replicas; and (iii) a broad deep-defect peak at ~2.15 eV. The influence of the [Cu]:[Ga] composition on some peaks can be clearly observed in the 2.3 eV transition (DA3), where the intensity of the peak increases with increasing Cu content, even dominating and screening the other peaks in the spectrum for the film with [Cu]/[Ga] ratio = 2. The attribution of the peaks to the transitions in the figure will be derived in the following sections.

For slightly Cu-rich films, with a [Cu]/[Ga] ratio of 1.3 for example, the relative intensity of the 2.3 eV transition with respect to the other peaks is reduced, and it is noticeable that the 2.3 eV transition overlaps with the broad peak around 2.15 eV. In contrast, the intensity of the broad peak at ~2.15 eV and of another one at 1.85 eV (see Fig. 17) increases with lower Cu content, and these dominate the Cu-poor material alongside the excitonic transition at 2.48 eV and the transitions around 2.40 eV.
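As an aside before turning to the individual transitions, the Raman modes quoted above can be converted to energies (1 meV = 8.06554 cm⁻¹) to anticipate the phonon-replica spacings discussed below; a short sketch:

CM_PER_MEV = 8.06554  # cm^-1 per meV

for mode in (276, 310, 364, 384, 408):
    print(f"{mode} cm^-1 = {mode / CM_PER_MEV:5.1f} meV")
# 364 cm^-1 -> 45.1 meV, matching the ~45 +/- 1 meV replica spacing found for DA3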
To investigate the different spectral regions, the relative intensity of the transitions described above is considered; as such, the near-band-edge emissions and shallow defects are investigated using the near-stoichiometric and Cu-rich films, while the deep defects are studied with the Cu-poor and near-stoichiometric films. The assignment of a peak to a specific transition follows the evaluation of the PL flux and of the energy position in dependence of the excitation intensity, on a double-logarithmic and a semi-logarithmic scale, respectively [66]. The high luminescence of some samples allowed for a wide range of excitation intensity over many orders of magnitude, and the double-logarithmic plot of the excitation-dependent integrated PL flux of such samples shows a curvature which cannot be described by a single power law.

The curvature is inherent and occurs when multiple defect levels participate in the recombination process [67]. Using rate equations and charge balance, exhaustive conditions, beyond just the simple case where a single power law can describe the dependence of the PL flux on the excitation intensity, have been reported in Refs. [67-69]. A more comprehensive double power law that better describes the curved shape is

Φ(I) = Φ0 (I/I0)^k1 / [1 + (I/I0)]^(k1−k2)    (1)

where the exponents ki (i = 1, 2) take on multiples of 1/2 and I0 is a turning point or crossover excitation at which a state interacting with the recombination process becomes saturated [67]. Essentially, for a curved double-log plot, the k-values for exciton-related transitions lie between 1 and 2, whereas for defect-related transitions k ≤ 1; more complex cases can be found in Refs. [67,68].

Table 2: Summary of the behaviour of the power-law exponent (k) in dependence of excitation intensity. The values of k take on multiples of 1/2.

To distinguish between DA and FB transitions we use the characteristic blue-shift of the emission energy of a DA transition with increasing excitation intensity [66,70]. This energy position is expressed by

E_DA = E_g − E_D − E_A + q²/(4π ε0 ε_r r_DA)    (2)

where E_DA is the DA peak energy position, E_g is the bandgap, E_D is the donor defect energy relative to the conduction band and E_A is the acceptor defect energy relative to the valence band. The last term is the Coulomb energy, with q being the elementary charge, ε0 the vacuum permittivity, ε_r the relative permittivity and r_DA the spatial distance between the donor and the acceptor [66,71].

As the excitation intensity increases, the density of neutralized donors and acceptors increases and the spatial distance between the donor and acceptor atoms decreases, thereby increasing the influence of the Coulomb interaction. The relationship between the transition energy position and the excitation intensity I is empirically described by

E_DA(I) = E_DA(I_ref) + β log10(I/I_ref)    (3)

where β typically takes values between 1-5 meV per decade of excitation intensity [72].

Near band-edge luminescence of CuGaS2 (2.46-2.53 eV)

The band-edge emissions are assessed using the film with the highest Cu content, [Cu]/[Ga] ratio = 2.0, due to its high luminescence flux and well-resolved peaks, although this is not obvious in Fig. 5 because of the high luminescence of the DA3 transition. The luminescence strength of this film also reflects the enhanced crystallinity obtained when the material is processed under high Cu excess.

A plot of the PL spectra in the near-band-edge region between 2.46 eV and 2.53 eV at different excitation intensities is illustrated in Fig. 6. We will argue in the following that the emission line B is the ground state of the free exciton, while A is the first excited state. The lines C-F will be identified as bound excitons.
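Before turning to the individual lines, a numerical sketch of the two excitation-intensity analyses described above is given. The exact functional form of the double power law, equation (1), could not be recovered from the extraction, so a generic interpolating form with the stated limiting exponents k1 (low intensity) and k2 (high intensity) is assumed here:

import numpy as np
from scipy.optimize import curve_fit

def double_power_law(I, phi0, I0, k1, k2):
    # tends to I**k1 for I << I0 and to I**k2 for I >> I0 (assumed form)
    return phi0 * (I / I0) ** k1 / (1.0 + I / I0) ** (k1 - k2)

def da_peak_energy(I, E0, beta):
    # equation (3): DA peak blue-shifts by beta (meV) per decade of excitation
    return E0 + beta * np.log10(I)

# synthetic DA-like flux data with a k = 1 -> 1/2 crossover at I0 = 4 mW/cm^2
I = np.logspace(-1, 3, 40)                       # mW/cm^2
flux = double_power_law(I, 1.0, 4.0, 1.0, 0.5)
flux *= np.random.default_rng(0).normal(1.0, 0.03, I.size)
popt, _ = curve_fit(double_power_law, I, flux, p0=(1.0, 1.0, 1.0, 0.5))
print(popt)   # recovers (phi0, I0, k1, k2) within the noise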
Of the six peaks delineated, the most intense peak is at ~2.481 eV (D), with transitions at 2.488 eV (C) and 2.496 eV (B) at lower intensities, but visible in all spectra. On the high-energy end, the weak line at ~2.518 eV (A) is visible only at high excitation; on the low-energy end, the intensity of a transition at ~2.474 eV (E) decreases while the 2.468 eV (F) peak becomes better resolvable at higher excitation intensities. In Fig. 6, the lines do not show a shift of energy position with increasing excitation intensity, which preliminarily leaves them as either excitonic or free-to-bound transitions. To discriminate between the two possibilities, the PL flux in dependence of the excitation intensity for the different peaks is evaluated in the double-log plot shown in Fig. 7a. The multiple power law in equation (1) is used to fit the curves, and the fit of the emission line at 2.496 eV (B) is presented in Fig. 7b as an example.

The k-exponent results in k ≈ 3/2 at high excitation intensity. As mentioned in the introduction to this section, the exponents take on multiples of 1/2, and the change of exponent occurs when competing transitions or a defect involved in the transition saturates. The line B transition at 2.496 eV is attributed to the free exciton transition, since it occupies the highest energy position (apart from line A, which is only detected at higher excitation intensity and will be discussed later). Bound excitons appear at lower energies due to the larger binding energies of excitons bound to defects [28,66,73]. Attributing line B to the free exciton is further substantiated by its subsequent use in deducing the free exciton binding energy (E_b) from the first excited state, as shown next. In previous reports, and in agreement with this report, the free exciton has been observed between 2.489 eV and 2.504 eV from photoreflectance spectroscopy and PL analyses [28,31,74]. In the different reports, the disparity of the free exciton energy positions was linked to lattice strain and the different analytical techniques [52].

The emission line A at 2.518 eV in Fig. 6 matches the first excited state (n = 2) of the free exciton, and the free exciton binding energy can be determined from the energy difference E(n = 2) − E(n = 1) between the first excited state and the ground state. The determined free exciton binding energy of 29 meV is in the range of reported free exciton binding energies for CuGaS2 of 28-32 meV [28,36,75], which further justifies the designation of line A as the first excited state of the free exciton. The knowledge of E_b makes it possible to deduce the bandgap value at 10 K, which will be important in the determination of the defect-level energies. Therefore, in this study, we report the corresponding bandgap, E_g = E(n = 1) + E_b, of CuGaS2 as 2.525 eV at 10 K.

For CuGaS2, the hole effective mass (m_h) deduced from Hall-effect analysis and by calculation is m_h = 0.69 m_0 [76,77], where m_0 is the free electron mass, and the dielectric constant obtained from optical-absorption analysis is ε = 8.5 [78]. Different values between 0.12-0.19 m_0 have been reported for the reduced exciton mass of CuGaS2 by different groups; consequently, the electron effective mass (m_e) deduced from the reduced mass is between 0.13-0.26 m_0 [28,45,76,77,79]. Therefore, the mass ratio (m_e/m_h) for CuGaS2 is between 0.19-0.38.
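For reference, the hydrogen-like series arithmetic behind the binding energy and bandgap quoted above (a standard Wannier-exciton assumption, not spelled out in the text) is:

E(n) = E_g − E_b/n²  ⇒  E(n = 2) − E(n = 1) = (3/4) E_b

E_b = (4/3) × (2.518 eV − 2.496 eV) ≈ 29 meV

E_g = E(n = 1) + E_b = 2.496 eV + 0.029 eV = 2.525 eV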
Sharma et al. found that the limiting mass ratio for a stable exciton bound to a charged donor or a charged acceptor is 0.20 and 0.29, respectively [80]; as such, the mass ratio for CuGaS2 suggests that the binding of excitons to ionized donors or acceptors would result in unstable ionized complexes [80]. However, the binding energies of the neutral donor complex (D0,X) and of the neutral acceptor complex (A0,X) can be estimated from Haynes-type linear relations of the form E_b(D0,X) = a + b·E_D and E_b(A0,X) = a' + b'·E_A, given as equations (4) and (5), where E_D and E_A are the donor and acceptor ionization energies, respectively [28,80,81]. Similar to the deduction of the binding energy of the free exciton, the difference between a bound-exciton line and the bandgap corresponds to the binding energy of the bound exciton [82,83].

From the knowledge of the bandgap and the exciton binding energy, the probable ionization energies of the donors or acceptors corresponding to an emission line can be calculated from equations (4) and (5). The values are summarized in Table 3 for emission lines C to F. Previous reports have associated a transition similar to line C at 2.488 eV with a bound exciton recombination [30,56], while some other reports have attributed an emission comparable to the 2.481 eV line (D) to a FB recombination involving a transition between a neutral donor and the valence band edge [56,84]. According to the estimation presented in Table 3, it seems that the 2.488 eV exciton (C) is bound to a neutral acceptor at 67 meV or a neutral donor at 114 meV, while the 2.481 eV emission (D) is an exciton bound to a neutral donor at 125 meV or a neutral acceptor at 214 meV. The existence of these levels and the applicable assignments are examined in the succeeding sections.

As we will show in the following, the only shallow donor we find has a binding energy of 35 meV, which makes it unlikely that any of these excitons are bound to a donor. On the other hand, we find shallow acceptor states at energies near 100 meV and 200 meV, to which the bound excitons C and D would correspond. Additionally, we find several deep defects, about which we can only speculate at the moment; they might be the defects to which the excitonic lines E and F are bound. Lastly, in previous reports, transitions identical to line E have been assigned to a FB transition involving a shallow level [27,28]. However, the excitation-dependent analyses of line E show exciton-related behavior, and the consideration of the transitions at 2.474 eV (line E) and 2.468 eV (line F) as exciton-related transitions would require that the excitons be bound to deep defect levels, as inferred from Table 3.

Shallow defects, donor-to-acceptor pair transitions and phonon coupling

Several sharp peaks dominate the typical PL spectrum of Cu-rich CuGaS2 at 10 K in the range between 2.45 eV and 2.10 eV, as seen in Fig. 5 and Fig. 8.
Some of the peaks appear in groups at regular energy intervals, and as will be shown in the following, these are phonon replicas associated with shallow donor-to-acceptor (DA) transitions. Each series of sharp peaks starts with an intense line, known as the zero-phonon line (ZPL), which is followed on its low-energy end by several successive peaks of weakening intensity. These peaks are separated by the energy of the coupling LO phonon. The excitation- and temperature-dependent behavior of the phonon replicas is identical to that of the emission at the ZPL. As we show below, the spectral intensity distribution of such phonon-assisted transitions is well described by the Poisson distribution

I_n = I_0 e^(−S) S^n / n!    (6)

where n is the number of phonons involved in the interaction, I_n is the intensity of the nth phonon replica and S, known as the Huang-Rhys factor, is the coupling strength of the electron-phonon interaction of the corresponding defect [85]. For shallow (weakly localized) defects, the electron-phonon coupling is weak and S < 1; thus, the ZPL is the most intense peak and does not shift in peak energy. If S = 1, there is a change in the maximum intensity, as the first phonon replica becomes of the same intensity as the ZPL. Lastly, when S > 1 there is a strong electron-phonon coupling of localized defects, leading to a shift of the maximum intensity away from the ZPL to lower energy, since the phonon replicas have higher intensities than the ZPL. It is worth mentioning that for broadened emission bands, phonon replicas do not manifest as sharp peaks, but rather as a broad asymmetric distribution [86-88].

In the next subsections, each of the donor-to-acceptor pair (DA) transitions shown in Fig. 8, that is, DA1, DA2 and DA3, along with their accompanying phonon replicas, will be discussed. Interestingly, the corresponding free-to-bound transitions between the conduction band and the acceptor levels are already observed at low temperature.

DA3 transition at ~2.29 eV

The low-temperature (10 K) PL spectrum showing the transition related to 2.29 eV, measured at a low excitation intensity where the peaks are well resolved and without the strong influence of other defect peaks, is presented in Fig. 9. It is worth noting that the sample in Fig. 9 is the same as the sample in Fig. 5 with [Cu]/[Ga] = 1.8; however, while the spectrum presented in Fig. 9 is measured at 10 µW, the spectrum presented in Fig. 5 is measured at 100 µW. The spectrum (Fig. 9) features a series of peaks with the most intense line at ~2.29 eV, followed by several successive lines of weakening intensity on the lower-energy end. These weakening lines are spaced by ~45 ± 1 meV, corresponding to the lowest of the three highest-energy optical phonon modes of 45.2 meV, 47.6 meV and 49 meV [60,89], which are equivalent to the Raman modes observed at frequencies of 364 cm⁻¹, 384 cm⁻¹ and 408 cm⁻¹ in the Raman spectrum of CuGaS2 presented in Fig. 4b. A fit of the spectral intensity pattern by the Poisson distribution in equation (6), while also considering a background of emission from deep defects, yielded a Huang-Rhys factor S ≈ 0.80 ± 0.05 and a ZPL at ~2.285 eV. This value of S and the energetic distance between the ZPL and the bandgap are in agreement: for a defect transition more than 200 meV away from the bandgap, a rather high Huang-Rhys factor is expected [66,87].
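Fitting the Poisson progression of equation (6) to the replica intensities can be sketched as follows; the intensities below are synthetic values generated with S = 0.8 to mimic the reported fit, and the deep-defect background is omitted for brevity:

import numpy as np
from scipy.optimize import curve_fit
from scipy.special import factorial

def poisson_progression(n, I0, S):
    # equation (6): I_n = I0 * exp(-S) * S**n / n!
    return I0 * np.exp(-S) * S ** n / factorial(n)

n = np.arange(5.0)
I_obs = poisson_progression(n, 100.0, 0.80) * np.array([1.02, 0.97, 1.05, 0.96, 1.0])
popt, _ = curve_fit(poisson_progression, n, I_obs, p0=(80.0, 0.5))
print(popt)   # ~ (100, 0.80): ZPL amplitude and Huang-Rhys factor S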
To identify the exact nature of the transition, the PL spectra acquired at different excitation intensities in the energy range 2.10-2.35 eV are presented in Fig. 10a. It is visible that the peak positions of the ZPL and the phonon replicas blue-shift in parallel as the excitation intensity is increased. Such a shift in energy position is due to the influence of the Coulomb interaction, is indicative of a DA transition, and is expressed by equation (3). The actual shift of the energy position can be extracted from a plot of the energy positions against the excitation intensity. As shown in Fig. 10b, for the transition at ~ 2.29 eV, the plot of energy position against excitation intensity shows a curvature. This is due to the fact that, over a sufficiently wide range of excitation intensities, the energy positions of DA transitions follow an S-shaped behaviour [67,90]. The peak position approaches the energy for infinite donor-acceptor pair separation at the lowest excitation, while at the highest excitation the peak position approaches the sum of the energy for infinite donor-acceptor pair separation and the Coulomb energy for minimum donor-acceptor pair separation [90]. The excitation dependence of the integrated PL flux for the DA3 transition is reported in the double-logarithmic plot shown in Fig. 11. It can be seen that the plot in Fig. 11 shows a curvature, which is adequately described by equation (1). The fit with two power-law exponents results in k = 2 at low excitation intensity and k = 1/2 at higher excitation intensity. The change of exponents, referred to as a crossover, occurs at ~ 3-6 mW/cm² of excitation intensity. This crossover indicates that a defect level or a deeper mid-gap level interacting with the recombination process of the DA3 transition saturates at this intensity [67]. On the high-energy end of DA3 is a low-intensity peak at ~ 2.32 eV, as seen in Fig. 9 and Fig. 10a. The peak becomes more intense with increasing excitation intensity, as shown in Fig. 10a, until it is eventually obscured by the broadening DA3 transition. Nevertheless, it is still noticeable in Fig. 10a that its energy position barely changes with increasing excitation intensity. Given that the energy position of a FB transition does not shift with excitation intensity, and owing to its proximity to the DA3 transition, the weak peak at ~ 2.32 eV is assigned as FB3. It is noteworthy that the FB3 transition might account for the curvature of the excitation dependence of the PL flux of DA3 illustrated in Fig. 11, since a shallow defect participating in the DA3 transition could saturate [67]. This is supported by the crossover excitation at ~ 3-6 mW/cm² in Fig. 11 being close to the intensity at which FB3 is screened in Fig. 10a, as seen in the PL spectrum at 5.76 mW/cm² in Fig. 10a. Temperature-dependent analyses of the PL spectra, shown in Fig. S2 of the Supplementary information, give further support to this attribution of DA3 and FB3. As the temperature increases, the intensity of DA3 decreases whereas the relative intensity of FB3 increases before the thermal quenching of the transition. For the transitions at ~ 2.43 eV and 2.45 eV, indicated as FB2/BX and FB1 respectively in Fig. 13a, the integrated PL flux of both peaks with respect to excitation intensity is shown on a double-logarithmic scale in Fig. 14a.
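The two-regime power-law behavior described above (equation (1) in the original, not reproduced in this excerpt) can be fitted, for instance, with a smooth broken power law; the model form and the synthetic data in this sketch are illustrative assumptions, not the authors' exact procedure.

```python
# Sketch: two-regime power-law fit of integrated PL flux vs. excitation,
# I_PL ~ phi^k with k changing at a crossover flux phi0 (cf. the DA3
# analysis, k = 2 -> 1/2). Fitting is done in log-log space for stability.
import numpy as np
from scipy.optimize import curve_fit

def log_broken_power_law(log_phi, log_a, k_lo, k_hi, log_phi0):
    """log10 of a * phi**k_lo / (1 + (phi/phi0)**(k_lo - k_hi))."""
    ratio = 10.0 ** (log_phi - log_phi0)
    return log_a + k_lo * log_phi - np.log10(1.0 + ratio ** (k_lo - k_hi))

phi = np.logspace(-1, 2, 30)                       # excitation (mW/cm^2)
true = 10 ** log_broken_power_law(np.log10(phi), 0.0, 2.0, 0.5, np.log10(4.0))
flux = true * np.random.default_rng(0).lognormal(0.0, 0.05, phi.size)

popt, _ = curve_fit(log_broken_power_law, np.log10(phi), np.log10(flux),
                    p0=[0.0, 2.0, 0.5, 0.5])
print("k_low = %.2f, k_high = %.2f, crossover ~ %.1f mW/cm^2"
      % (popt[1], popt[2], 10 ** popt[3]))
```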
The single power-law fit of both transitions gives a power-law exponent of approximately one for each. This linear dependence of the PL flux on excitation intensity can be interpreted as originating from DA transitions at low excitation, from FB transitions, or from a BX transition [66,67], although both transitions at ~ 2.43 eV and 2.45 eV have been tentatively reported as FB transitions [30,31]. The energy positions as a function of excitation intensity, presented in Fig. 14b, show no significant shift over three orders of magnitude in excitation intensity for the 2.45 eV peak, making its consideration as an FB transition compelling. The 2.43 eV emission, in contrast, comprises two overlapping transitions at 2.428 eV and 2.436 eV, which are better resolved by the temperature-dependent analysis. Therefore, we conclude that below an excitation intensity of 10 mW/cm² in Fig. 14b the 2.428 eV transition dominates; however, as the excitation intensity increases beyond 10 mW/cm², the intensity of the 2.436 eV transition increases and dominates, hence the shift of energy position observed in Fig. 14b.

Temperature-dependent measurements were performed to understand the behaviour of the DA1 and DA2 transitions, and to determine the influence of temperature on the FB or excitonic transitions associated with DA1 and DA2. This is because shallow defects are thermally emptied with increasing temperature and then contribute to FB transitions [66]. The temperature-dependent spectra presented in Fig. 15 show that as the temperature increases, the intensities of the DA1 and DA2 peaks decrease, since a shallow defect level involved in the transitions is thermally emptied. It becomes obvious that the 2.43 eV line (labelled FB2/BX in Fig. 13a) consists of two peaks, one at 2.428 eV and another at 2.436 eV. The relative intensity of the 2.428 eV line (BX) rapidly decreases and is quenched at approximately 50 K, as is typical for a bound exciton. Its energy position suggests that it is bound to a much deeper defect than the excitons discussed above in section 4. The relative intensity (compared to the DA transitions) of the 2.436 eV (FB2) and 2.448 eV (FB1) lines increases as the temperature increases up to 70 K before decreasing and quenching at 120 K, supporting their attribution as FB transitions. Given the proximity to the energy positions of DA1 (2.410 eV) and DA2 (2.398 eV), the transitions at 2.449 eV and 2.436 eV can be associated with the DA1 and DA2 transitions as the related FB transitions FB1 (2.449 eV) and FB2 (2.436 eV), respectively. From the energy difference between the DAs and FBs, FB1 and FB2 appear to involve a common shallow donor at 35 meV. In accordance with the attribution of FB1 and FB2, and the estimated CuGaS2 bandgap of 2.525 eV at 10 K, the 2.449 eV (FB1) and 2.436 eV (FB2) transitions are estimated to involve acceptor levels at ~ 75 meV and ~ 90 meV, respectively.

Summary and tentative shallow defect levels in CuGaS2

Fig. 18 summarizes the photoluminescence spectrum of a slightly Cu-rich CuGaS2 with all the peaks identified. In the course of investigating the CuGaS2 semiconductors in the present work, several well-resolved exciton-related transitions were detected. The bandgap at 10 K is determined as 2.525 eV from the free exciton and its first excited state at 2.496 eV and 2.518 eV, respectively. In this report, several sub-band-edge transitions were identified as DA transitions interacting with a common shallow donor level at 38 ± 2 meV and shallow acceptors at 75 ± 5 meV (A1), 90 ± 3 meV (A2) and 210 ± 5 meV (A3). Metzner et al. have also reported shallow transitions and defect levels: a shallow donor at 25 meV and two shallow acceptors at 89 meV and 109 meV [30]. In addition, we observe the deeper acceptor A3; the related DA3 transition becomes more intense with higher Cu content. Botha et al. also reported such a defect for slightly Cu-rich CuGaS2: an acceptor 210 meV above the valence band with a donor likely at ~ 53 meV [84]. We have shown that DA1, DA2 and DA3 interact with a common shallow donor at ~ 38 ± 3 meV. Similar defect levels have been reported for CuGaSe2 [11][12][13][14][19] and CuInS2 [26,58]; for a detailed comparison see Reference [9]. … An overview of the transition energies identified from the literature and those identified in this work is presented in Fig. 19. It can be observed that different transitions were identified independently by different groups. In this work, within the range of energies investigated, all the different transition energies separately reported were identified and their composition dependence clarified.
Novel solar cell on CuGaS2

The room-temperature bandgap of CuGaS2 is around 2.45 eV. This wide bandgap makes it unattractive for use as a single-junction solar cell. Nevertheless, it is also important to understand how the defects of CuGaS2 might influence the electrical properties of a single-junction solar cell. The absorber used is a Cu-rich film with a [Cu]/[Ga] ratio of ~1.3. The room-temperature PL spectrum of the absorber, shown in Fig. 21, is dominated by a broad transition centered around 1.5 eV. The device possesses a quasi-Fermi level splitting (QFLS) of 1.68 eV, and consequently a rather large deficit of 0.42 eV compared to the Shockley-Queisser open-circuit voltage (VOC) [91], owing to the defects in the material. The QFLS was determined by evaluating the PL quantum efficiency of the absorber to determine the non-radiative loss, i.e., the reduction from the ideal [92]. Fig. 22 shows the current density-voltage characteristics of the CuGaS2 device prepared with a (Zn,Mg)O buffer layer, a sputtered Al:ZnMgO i-layer and an Al:ZnO window layer. The device demonstrated a VOC of 821 mV, corresponding to a very high interface VOC deficit [93] when compared to the quasi-Fermi level splitting, and thus a power conversion efficiency of a mere 1.8%. We speculate that the high interface VOC deficit originates from two factors: (i) near-interface defects [93], as the device was prepared using the Cu-rich CuGaS2 absorber, and (ii) a negative conduction-band offset at the CuGaS2/(Zn,Mg)O interface, due to the high conduction-band minimum of CuGaS2 and the relatively low conduction-band minimum of (Zn,Mg)O. While the former limits the VOC by reducing the QFLS near the interface and can be mitigated by a chalcogen treatment [94], the latter limits the VOC by reducing the QFLS and requires a buffer that is better matched to the conduction-band minimum of CuGaS2. Nonetheless, this work demonstrates that it is possible to make working solar cells with CuGaS2, though significant efforts are required to achieve useful VOC and power conversion efficiencies.

Figure and table captions

Figure 2: X-ray diffractograms of various films showing the effect of increasing growth temperatures and sulfur pressures from G1-G4. The ICDD PDF 00-025-0279 and 00-037-1492 databases have been used to reference the peaks. The details are presented in Table 1.

Figure 3: SEM micrographs depicting the typical surface morphology and cross-sectional images of Cu-rich ([Cu]/[Ga] = 1.3) and Cu-poor ([Cu]/[Ga] = 0.94) films, obtained after the CuxS secondary phase was etched by a 10% KCN solution.

Figure 4: (a) X-ray diffractogram of the Cu-rich as-grown CuGaS2 film in Fig. 3 showing the reflection planes. (b) Raman spectrum of a Cu-rich CuGaS2 film with the Raman-active modes.

Figure 6: Near band-edge spectra of Cu-rich CuGaS2 measured at 10 K at several excitation intensities over three orders of magnitude. The transition peaks are at 2.518 eV (A), 2.496 eV (B), 2.488 eV (C), 2.481 eV (D), ~2.474 eV (E) and 2.468 eV (F). The dashed lines highlight the constant energy positions with increasing excitation intensity.

Figure 7: (a) Excitation intensity dependence of the integrated PL flux for the transition lines 2.474 eV (E), 2.481 eV (D), 2.488 eV (C) and 2.496 eV (B) fitted with the double power law. (b) A fit of the emission line 2.496 eV (B) with two power-law exponents, k = 3/2 and k = 3/2, at high and low excitation, respectively. Φ0 denotes the turning point between the two excitation regimes, i.e., the flux where one of the defects is saturated.

Figure 9: Low-temperature (10 K) PL spectrum of a CuGaS2 film of [Cu]/[Ga] = 1.8 between 2.0-2.4 eV. The figure shows a fit of the phonon-assisted transition at ~2.29 eV (DA3) by a Poisson function with consideration of deep defects. The low-intensity peak at ~2.32 eV is an associated FB transition related to DA3, discussed in the text.

Figure 10: (a) Low-temperature (10 K) PL spectra of Cu-rich CuGaS2 at different excitation intensities, demonstrating the parallel shift of the energy positions of DA3 and its phonon replicas with increasing excitation intensity. The dotted arrows guide the eye for the shift in energy position. (b) Excitation intensity dependence of the energy position of the DA3 transition in a semi-logarithmic plot at 10 K.

Figure 11: Double-logarithmic plot of the integrated PL flux of the DA3 transition as a function of excitation intensity. The values are extracted from the integrated PL flux of the Cu-rich CuGaS2 spectra in Fig. 10a.

Figure 12: Arrhenius plot of the integrated PL flux with respect to temperature for the thermal quenching of the DA3 transition in a Cu-rich CuGaS2 film.

Figure 13: (a) Low-temperature (10 K) PL spectrum of Cu-rich CuGaS2 measured at 0.9 mW/cm², showing phonon replicas accompanying the DA transitions at 2.410 eV (DA1) and 2.398 eV (DA2). The sample used to analyze the transitions is the CuGaS2 film with [Cu]/[Ga] = 1.3 in Fig. 5 and Fig. 8. The inset shows the full PL spectrum of the film from Fig. 8; the dashed line centered at approximately 2.15 eV describes a broad transition related to a deep defect, and the region in focus is indicated by the red box. (b) Integrated PL flux as a function of excitation intensity of the 2.398 eV (DA2) and 2.410 eV (DA1) transitions at 300 K. (c) Excitation intensity dependence of the energy positions of the DA1 and DA2 transitions in a semi-logarithmic plot at 10 K.

Figure 15: Temperature-dependent PL spectra of a Cu-rich film with a [Cu]/[Ga] ratio of 1.3. The temperature-dependent measurement resolves the bound-exciton transition at 2.428 eV.

Figure 17: (a) Region of broadband deep defects featuring transitions centered at approximately 1.85 eV and 2.15 eV. (b) Energy position as a function of excitation intensity for the deep defect at ~1.85 eV.

Figure 18: Summary of identified transitions for Cu-rich CuGaS2 at 10 K.

Figure 19: Overview of transition energies of CuGaS2 from the literature together with the transitions identified in this work.

Figure 20: Tentative defect model for CuGaS2 as reported in this work. A shallow donor level (D1) and three shallow acceptor levels (A1, A2 and A3) were identified. Two broad defect levels are also assumed to be involved in transitions in CuGaS2.

Figure 21: Room-temperature photoluminescence spectrum of the CuGaS2 absorber completed into a solar cell. The inset shows a magnification of the band-to-band transition.

Figure 22: Current density-voltage curve of the CuGaS2 device prepared with a (Zn,Mg)O buffer layer.

Figure S1: Near band-edge spectra of Cu-rich CuGaS2 measured at 10 K at several excitation intensities over three orders of magnitude. The transition peaks are at 2.468 eV (F), ~2.474 eV (E), 2.481 eV (D), 2.488 eV (C), 2.496 eV (B) and 2.518 eV (A). The dashed lines highlight the constant energy positions with increasing excitation intensity.

Figure S4: PL spectra of a moderately Cu-rich film with a [Cu]/[Ga] ratio of 1.3, showing the …

Figure S5: Fitting of DA1 and DA2 and their phonon replicas by a Poisson distribution taking into account deep defects and the DA3 and FB3 transitions.

Table 1: Influence of deposition properties on the sulfur content in CuGaS2 films.

Table 2: Summary of the k-values for the different transitions investigated in this work [67].

Table 3: Estimated values of exciton binding energies and neutral donor and acceptor energy levels calculated using equation (4) and equation (5) for the emission lines C-F.
Digital Citizenship and Life Satisfaction in South Korean Adolescents: The Moderated Mediation Effect of Poverty

This study examined the moderated mediation effect of poverty on the paths between enactive mastery experience in digital life and life satisfaction mediated by digital citizenship and digital life among Korean adolescents using structural equation modelling. This cross-sectional study involved a secondary data analysis of 2020 national data in The Report on the Digital Divide provided by the National Information Society Agency (NIA) of Korea. Data from 1084 Korean adolescents were analyzed using IBM SPSS Statistics for Windows, version 26.0, and the SPSS PROCESS macro. The results demonstrated a significant moderated mediation effect of poverty. Enactive mastery experience, which encompasses the self-knowledge, perceived task difficulty, and contextual factors of adolescents living in poverty, was associated with digital life and life satisfaction through the mediation of digital citizenship. For adolescents living in poverty, in contrast to their non-poor counterparts, enactive mastery experience in digital life and digital citizenship are two critical factors in life satisfaction. Therefore, institutional support enabling adolescents and their communities to forge partnerships is necessary to foster these two factors.

Introduction
In this digital era, the use of digital products and services is associated with people's life satisfaction [1]. Adolescents in modern society are digital natives whose digital life has a substantial impact on most aspects of their lives, including education, peer relationships, and hobbies [2]. The life satisfaction of adolescents is particularly important as adolescence is a time of life that influences the overall health and development of an individual [3]. However, poverty is closely related to digital exclusion [4], and adolescents in impoverished families, living in an environment with more disadvantages than their non-poor counterparts, show low life satisfaction [3]. Abundant previous studies have argued that even in a disadvantaged environment, life satisfaction can be altered by an individual's overall cognitive appraisal of life, depending not only on one's environmental characteristics but also on personal intrinsic and extrinsic resources [5,6]. Self-efficacy is a key factor that can enhance the quality of life of adolescents living in poverty [3]. Bandura (1997) [7] defined self-efficacy as "beliefs in one's capabilities to organize and execute the courses of action required to produce given attainments". Bandura further noted that enactive mastery experience is the most powerful source of influence in the formation of self-efficacy. In particular, enactive mastery experience depends on task difficulty, self-knowledge, and contextual factors (including suitability of support from others and adequacy of available resources), which constitute the root of any change in self-efficacy. Enactive mastery experience shaped by task difficulty, self-knowledge, and contextual factors in digital life may increase the life satisfaction of adolescents. Digital competence has been noted as an important survival technique for people living in the digital era as well as an essential component of education and learning [8,9]. A low level of belief in digital competence leads to low academic achievement and work performance [10]. Moreover, perceived ease of use regarding digital technology has influenced the adaptation to online education.
For example, a decrease in task difficulty can increase the attitude toward the use of an online test platform and the intention to use an online test [11,12]. In addition, many extrinsic resources are required to make digital life smooth. To illustrate, for effective digital learning, preparation by teachers, school and home environments, and accessibility for students should all be appropriate [13]. Therefore, in order to devise strategies to enhance the quality of life of adolescents in poverty, it is necessary to understand how perceptions of one's digital competence, the level of awareness of the difficulty of a digital task, and the presence or absence of support from others or available resources bear on the digital life of adolescents. To address psychobehavioral problems, including smartphone and game addiction and cyberbullying, both educational institutions and research institutions have emphasized the technological drawbacks and misuse of technology by users [14]. Nonetheless, Walters et al. (2019) [15] highlighted the importance of digital citizenship. Digital citizenship is defined as the online code of conduct for the safe, ethical, and responsible use of technology, with the capacity to develop the necessary technology and perspectives toward a digital lifestyle [16,17]. In a study on cyberbullying, Zhong et al. [18] noted that among the factors of digital citizenship, internet etiquette and understanding of and compliance with cyber laws and regulations are negatively correlated with cyberbullying. Recent research has likewise emphasized the importance of digital citizenship as a factor to prevent the side effects of digital use [19]. The critical role of digital citizenship is likely to apply consistently to adolescents in poverty. The increased life satisfaction of adolescents is associated with positive adaptation in adulthood [20]. If adolescents living in this digital era can acquire confidence in digital use and own their digital citizenship to adapt positively to digital life with healthy psychosocial functions, then their life satisfaction will improve, including that of adolescents living in poverty. Thus, elucidating the life satisfaction of adolescents living in poverty is especially crucial. Therefore, this study aimed to identify the moderated mediation effect of poverty on the paths between enactive mastery experience in digital life and life satisfaction mediated by digital citizenship and digital life among Korean adolescents, based on the theory of self-efficacy.

Study Design
Our study used secondary data from the Report on the Digital Divide [21] collected by the National Information Society Agency (NIA) of Korea to determine the moderated mediation effect of poverty on the paths between enactive mastery experience in digital life and life satisfaction mediated by digital citizenship and digital life among Korean adolescents. For this, we devised and tested a hypothetical model (Figure 1).

Data Collection
The Report on the Digital Divide [21] includes the population representing the non-poor class, in addition to the population representing a special group of older adults, the poor class, persons with disabilities, and North Korean refugees.
The participants in the Report on the Digital Divide [21] were 7000 persons in the non-poor class, 2200 persons in the poor class, 2200 persons with disabilities, 2200 persons working as agricultural or fishery farmers, 700 North Korean refugees, and 700 immigrants married to Koreans; based on the data of a total of 15,000 people, the level and reality of the digital information gap of each group were investigated. For the survey, the proportional quota sampling method was applied to the sample by gender, age, and regional local government, and face-to-face interviews were conducted, during which the participants completed a structured questionnaire. The NIA's study period was from September to December 2020. In the present study, we excluded data with missing values on the items of the main variables. Thus, from the data of 1085 middle school to high school adolescents in the non-poor and poor classes, who comprised our target sample, our analysis used the data of 1084 adolescents, including 648 and 436 middle to high school students in the non-poor and poor classes, respectively.
The non-poor class refers to household members living in households across the nation. The poor class indicates the registered beneficiaries of National Basic Livelihood based on the National Basic Living Security Act of Korea.

Enactive Mastery Experience
Perceived Task Difficulty
We measured perceived task difficulty based on self-assessment by adolescents regarding their ability to perform activities using mobile devices (e.g., smartphones, tablets). The assessment included seven questions as follows: "I can change device settings, such as display, sound, security, alarm, input, etc.", "I can set up a wireless network (Wi-Fi)", "I can move a file from a mobile device to a computer", "I can send files and photos on my mobile device to another person's device", "I can install/remove/update an app on mobile devices", "I can test/treat malware (e.g., virus, spyware) on mobile devices", and "I can write texts or create data (e.g., memos, documents) on mobile devices". The participants rated each question on a scale from 1 = Strongly disagree to 4 = Strongly agree, with higher scores indicating higher levels of perceived task difficulty. Cronbach's α was 0.871 in our study.

Self-Knowledge
We measured self-knowledge based on the self-assessment by adolescents regarding their confidence and attitude when encountering new technology (four questions) and personal efforts to acquire new technology (two questions). The questions on confidence and attitude were "I tend to adapt easily to new technology or products", "I am confident in learning how to use new technology or products on my own", "I tend to use new technology or products more efficiently than others", and "I think the ability to use digital devices is critical to continuous economic activities in the future". The questions on personal efforts were "I am motivated to actively acquire new technology" and "I consider myself a lifelong learner and enjoy taking necessary courses". The participants rated each of the six questions on a scale from 1 = Strongly disagree to 4 = Strongly agree, with higher scores indicating higher competence and efforts by the adolescents. Cronbach's α was 0.738 in our study.

Contextual Factors
We measured the contextual factors that underlie problem-solving in the use of digital devices based on the adolescents' self-assessment of how they try to solve problems related to mobile devices (e.g., smartphones, tablets). The assessment consisted of the following six questions: "I solve the problem by myself without help from others", "I get help from family (e.g., siblings, parents, nephews, and nieces)", "I get help from a friend", "I get help from a classmate or someone I know", "I search for information on the internet", and "I seek professional help at service centers, etc.". Each question was rated on a scale from 1 = Strongly disagree to 4 = Strongly agree, with higher scores indicating more effective contextual factors. Cronbach's α was 0.699 in our study.
Digital Citizenship
Regarding digital citizenship, our measurement used the self-assessment data of adolescents regarding their participation in activities and interactions in the digital world, information security, and respect for others using the following four questions: "I can connect and communicate with others on the internet and cooperate with others for problem-solving, completing tasks, and assignments", "I can use the internet to actively exchange opinions on political or social concerns and issues and participate in various activities, such as discussions, donations, and volunteer work to solve public problems", "I can protect myself and others from the various risks of internet use, such as disclosure of personal information or information of others", and "I can understand, acknowledge, and accept differences in opinions through a responsible use of the internet that does not involve illegal media or violate the rights of others". Each question was rated on a scale from 1 = Strongly disagree to 4 = Strongly agree, with higher scores indicating higher levels of digital citizenship in adolescents. Cronbach's α was 0.739 in our study.

Digital Life
As for digital life, we measured it based on the self-assessment of adolescents regarding their level of use of data services on mobile devices (e.g., smartphones, tablets) in the past year. The assessment included the following four items: "Search for information and news", "E-mail", "Media contents (e.g., movies, music, e-books)", and "Education contents (e.g., various online lectures and courses)". The participants rated each item on a scale from 1 = Never to 4 = Frequently, with higher scores indicating more frequent engagement in digital life. Cronbach's α was 0.815 in our study.

Life Satisfaction
Lastly, we measured life satisfaction based on the self-assessment of adolescents using the following five questions: "My life is close to my ideals in most cases", "My life is based on a set of excellent conditions", "I am satisfied with my life", "I have been able to acquire the important things I desire in my life", and "If I were to live another life, I would change almost nothing from the present life". They rated each question on a scale from 1 = Completely dissatisfied to 4 = Completely satisfied, with higher scores indicating higher life satisfaction. Cronbach's α was 0.859 in our study.

Demographic Characteristics
Our study considered the following demographic variables: population, gender, education, household type, income, and region. The subcategories were non-poor and poor classes for population; male and female for gender; middle and high school for education; detached house, apartment, and townhouse for household type; <1500 USD, ≥1500 to <2300 USD, ≥2300 to <3000 USD, and ≥3000 USD for income; and urban and rural for region.

Ethical Considerations
As this study used secondary data that did not contain personal identification data of the participants, it was exempted from review by the institutional review board at the university of the authors and subsequently approved (IRB No. MC23ZASI0011). All files and data were discarded at the completion of the study.

Data Analysis
We used IBM SPSS Statistics for Windows, version 26.0, for data analysis. To analyze the moderated mediation effect, we used the PROCESS macro (version 4.2) developed by Hayes [22]. For demographic characteristics, we conducted a frequency analysis to estimate frequency and percentage values. The reliability of each instrument was assessed using Cronbach's alpha.
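As a minimal illustration (not the authors' SPSS workflow), Cronbach's α for a multi-item scale can be computed directly from an item-score matrix; the small response matrix below is a made-up placeholder.

```python
# Sketch: Cronbach's alpha for a k-item scale.
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores (e.g., 1-4 Likert)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Placeholder data: 6 respondents answering a 4-item scale on 1-4.
scores = np.array([[3, 4, 3, 4],
                   [2, 2, 3, 2],
                   [4, 4, 4, 3],
                   [1, 2, 2, 1],
                   [3, 3, 4, 4],
                   [2, 3, 2, 2]])
print(f"alpha = {cronbach_alpha(scores):.3f}")
```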
Descriptive statistics with mean and standard deviation were obtained for each instrument. Skewness and kurtosis were used to test normality. To analyze the differences according to demographic characteristics, we conducted an independent t-test and one-way ANOVA with Scheffé's post hoc analysis. To identify the correlations across instruments, we used Pearson's correlation analysis. We used bootstrapping with 50,000 bootstrap samples to determine the moderating effect of poverty on the mediation effect of digital citizenship and digital life regarding the effects of perceived task difficulty, self-knowledge, and contextual factors on life satisfaction. Using Model 88 of the PROCESS macro, the moderated mediation effect was verified. The significance level was set at 0.05. Among the demographic characteristics, those with a significant effect on life satisfaction, i.e., population, income, and region, were set as control variables for the analyses.

Differences in Main Variables According to Demographic Characteristics
The participants' demographic characteristics and mean scores for life satisfaction by demographic are listed in Table 1.

Descriptive Statistics and Correlations across Variables
For the main variables, the absolute skewness and kurtosis were below 3 and 7, respectively, which satisfied the normality assumption (Table 2). Across the variables, we found a significant positive correlation between life satisfaction and self-knowledge (r = 0.205, p < 0.001), contextual factors (r = 0.090, p < 0.01), digital citizenship (r = 0.130, p < 0.001), and digital life (r = 0.082, p < 0.01). Table 3 presents the result of testing the moderated mediation effect of poverty after controlling for the variables of demographic characteristics with a significant effect on life satisfaction, using Model 88 of the PROCESS macro via bootstrapping. Figure 2 illustrates the path diagram of the variables. The dual mediation paths through digital citizenship and digital life, from perceived task difficulty (boot B = 0.025, 95% CI: 0.010~0.049), self-knowledge (boot B = 0.012, 95% CI: 0.004~0.023), and contextual factors (boot B = 0.010, 95% CI: 0.003~0.020) to life satisfaction, exhibited a significant moderated mediation effect of poverty. Each path of dual mediation was significant for the poor class and was not significant for the non-poor class.
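A minimal sketch of the bootstrap logic behind such an analysis is given below; it is a toy stand-in for PROCESS Model 88 (which additionally includes a second mediator and moderation terms), and the variable names and simple one-mediator model are illustrative assumptions.

```python
# Sketch: bootstrapping a simple indirect (mediation) effect a*b within a group.
# Toy stand-in for PROCESS Model 88; data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 400
x = rng.normal(size=n)                       # e.g., enactive mastery experience
m = 0.4 * x + rng.normal(size=n)             # e.g., digital citizenship
y = 0.3 * m + 0.1 * x + rng.normal(size=n)   # e.g., life satisfaction

def slope(pred, resp):
    """OLS slope of resp on pred (with intercept)."""
    A = np.column_stack([np.ones_like(pred), pred])
    return np.linalg.lstsq(A, resp, rcond=None)[0][1]

boot = []
for _ in range(5000):                        # the study used 50,000 resamples
    idx = rng.integers(0, n, n)
    a = slope(x[idx], m[idx])                # path X -> M
    # path M -> Y controlling for X (partial slope from 2-predictor OLS)
    A = np.column_stack([np.ones(n), x[idx], m[idx]])
    b = np.linalg.lstsq(A, y[idx], rcond=None)[0][2]
    boot.append(a * b)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b 95% CI: [{lo:.3f}, {hi:.3f}]")  # excludes 0 -> significant
```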
Discussion
We analyzed the 2020 data of the Report on the Digital Divide [21] collected by the NIA to verify the moderated mediation effect of poverty in the structural model of life satisfaction mediated by digital citizenship and digital life under the influence of perceived task difficulty, self-knowledge, and contextual factors. Our results demonstrated a significant moderated mediation effect of poverty, whereas the main path of mediation in the structural equation was significant solely for adolescents living in poverty. Hence, the enactive mastery experience of adolescents living in poverty, which includes perceived task difficulty, self-knowledge, and contextual factors, was shown to be associated with life satisfaction and cyber-wellness in digital life through the mediation of digital citizenship. Our finding on the role of enactive mastery experience in increasing life satisfaction and cyber-wellness in the digital life of adolescents living in poverty coincided with those of a previous study reporting that digital use is a crucial channel to increasing the life satisfaction of low-income individuals [1]. Choices in an individual's life are considerably limited by financial difficulties, and uncertainties over one's present and future prospects, as well as those of one's family, generate repetitive daily hardships. Holmes and Burgess (2022) [4] argued that the opportunities for online access using fast but costly data services are limited for persons with very low incomes. In the case of limited online access, daily life is affected as well, and adolescents may experience problems in online learning or peer relations on the internet. Marum et al. (2014) [5] showed that the sense of enactive mastery has a moderating effect on the relationship between financial difficulties and life satisfaction. The limitations engendered by poverty could reduce the feeling of enactive mastery, thereby suppressing overall life satisfaction. According to Bandura (1997) [7], enactive mastery experience needs to be increased in order for individuals to acquire self-efficacy. Our results highlight that positive belief in self-competence and perception of the difficulty of imminent tasks (as intrinsic resources), along with contextual factors (as extrinsic resources), could be critical factors in enhancing self-efficacy related to digital life and the quality of life of adolescents living in poverty. Similarly, previous studies have shown that self-efficacy and social support increase the life satisfaction of adolescents [3,6]. Thus, institutional support is needed to increase self-efficacy related to digital life for adolescents living in poverty. In addition, we confirmed the mediating role of digital citizenship in the structural model of life satisfaction of adolescents living in poverty. Digital citizenship is composed of digital access, trade, communication, literacy, etiquette, laws, rights and responsibilities, health and well-being, and security [23]. In a digital space, digital citizenship enables one to judge what is right and what is wrong. For adolescents living in poverty, the opportunities to acquire digital citizenship diminish as the experience of digital exclusion increases. Lim et al. (2016) [24] highlighted the need for partnerships between adolescents and their communities for cultivating digital citizenship. They claimed that important principles in the digital space are "Respect for Self and Others" and "Safe and Responsible Use", and for this, adolescents are required to foster the ability of self-control to behave in a responsible manner in the digital world, whereas communities, including parents, teachers, and local governments, should provide the education and perform the necessary monitoring. A previous study revealed that digital citizenship is an ability that can be internalized through training [16]. The digital citizenship of adolescents living in poverty can be cultivated through the collective efforts of the individual, family, educational institution, and government.
The limitations of this study were as follows. As a cross-sectional study, our work was limited in the interpretation of cause-and-effect relations across variables. In addition, only the variables found in the raw data were applied in constructing the model, which may limit the validity of the model. The 2020 data of the Report on the Digital Divide [21] did not focus on the digital life and life satisfaction of adolescents, which limits the in-depth interpretation of the results. Further studies should investigate a wider scope of resources that hold significance in the digital life of adolescents, in addition to the variables examined in our study. Despite these limitations, the present study is significant in analyzing a representative dataset to verify the critical effects of enactive mastery experience in digital life and digital citizenship on the life satisfaction of adolescents living in poverty as compared with their non-poor counterparts. An important direction for future research is to develop and evaluate interventions that can help young people from low-income families have a …
Characterization of Synaptically Connected Nuclei in a Potential Sensorimotor Feedback Pathway in the Zebra Finch Song System

Birdsong is a learned behavior that is controlled by a group of identified nuclei, known collectively as the song system. The cortical nucleus HVC (used as a proper name) is a focal point of many investigations as it is necessary for song production and song learning, and it receives selective auditory information. HVC receives input from several sources including the cortical area MMAN (medial magnocellular nucleus of the nidopallium). The MMAN to HVC connection is particularly interesting as it provides potential sensorimotor feedback to HVC. To begin to understand the role of this connection, we investigated the physiological relation between MMAN and HVC activity with simultaneous multiunit extracellular recordings from these two nuclei in urethane-anesthetized zebra finches. As previously reported, we found similar timing in spontaneous bursts of activity in MMAN and HVC. Like HVC, MMAN responds to auditory playback of the bird's own song (BOS), but it had little response to reversed BOS or conspecific song. Stimulation of MMAN resulted in evoked activity in HVC, indicating functional excitation from MMAN to HVC. However, inactivation of MMAN resulted in no consistent change in auditory responses in HVC. Taken together, these results indicate that MMAN provides functional excitatory input to HVC but does not provide significant auditory input to HVC in anesthetized animals. We hypothesize that MMAN may play a role in motor reinforcement or coordination, or may provide modulatory input to the song system about the internal state of the animal, as it receives input from the hypothalamus.

Introduction
Songbirds are used as a model system to understand the neural basis of learned motor behaviors, particularly vocalizations. Learned vocalizations require the integration of auditory signals with appropriate motor output to shape the target sound. In addition, maintenance of song requires feedback about the ongoing motor pattern and the ability to modulate the motor pattern. In zebra finches (Taeniopygia guttata), song is a male-specific behavior that is controlled by a set of identified nuclei, known collectively as the song system. The song system can be divided into two main pathways: the anterior forebrain pathway (AFP) and the vocal motor pathway. The AFP is part of a basal ganglia forebrain loop that is primarily involved in song learning and plasticity, while the vocal motor pathway is required for song production. This report begins to analyze the role of a thalamocortical pathway in the maintenance of this complex learned behavior by first characterizing the impact of this pathway on a key vocal motor nucleus, HVC. HVC is a cortical region critical for song learning and production that contains both pre-motor neurons and neurons that project to the AFP [1,2,3,4,5], and it is thought to contain the pattern-generating circuit for song [6,7,8,9]. HVC neurons also receive auditory input that is highly selective for the bird's own song (BOS) [1,10,11,12,13,14]. Therefore, HVC is a potential site of sensorimotor integration and is a focus of many studies aimed at understanding auditory-motor integration, particularly for learned behavior. The four known inputs to HVC are the caudal mesopallium (CM), the interfacial nucleus of the nidopallium (NIf), nucleus uvaeformis (Uva), and the medial magnocellular nucleus of the nidopallium (MMAN).
CM and NIf provide the major auditory input to HVC [15,16,17]. Uva projects directly to HVC as well as indirectly via NIf [18,19], appears to provide modulatory input to HVC [18], and is also important for interhemispheric coordination of HVC activity [20]. The role of MMAN's input to HVC is less clear. MMAN is particularly interesting as it not only projects to HVC but also forms a potential sensorimotor feedback loop via the robust nucleus of the arcopallium (RA) and the dorsomedial nucleus of the posterior thalamus (DMP) (Figure 1) [21,22]. This feedback loop is interesting for at least three reasons. First, in many experience-dependent pathways there is feedback to regions involved in motor production, the most studied of which are thalamo-cortical feedback loops in vertebrates [23,24]. The loop involving MMAN and HVC represents a thalamo-cortical feedback loop that may be important for song learning and production. Although not tested here directly, MMAN may provide motor feedback to the song system. Second, MMAN receives anatomical input from the hypothalamus, via DMP, presenting the possibility of input to the song system about the internal state of the animal [25]. This input may influence when a bird sings, which could be important for survival and reproductive success. Third, DMP projects bilaterally to both the ipsilateral and contralateral MMAN, making this loop one of only two known bilateral pathways in the song system. The only other known bilateral pathway is through Uva [19]. Because song production requires bilateral control of the vocal organ [26,27], interhemispheric coordination is paramount in the neural control of song. While the function of MMAN remains unclear, several lines of evidence suggest that it plays an important role in song learning and maintenance. First, directed singing increases expression of the immediate early gene early growth response-1 (egr-1, or ZENK, a zinc-finger-containing transcriptional regulator), indicating that MMAN is active during motor production of song [27]. Second, bilateral MMAN lesions in juveniles result in the development of highly abnormal, short, and relatively unstereotyped song, indicating that MMAN is necessary for normal song learning [17]. While bilateral MMAN lesions in adults with stereotyped song do not result in major song abnormalities as seen in juveniles, the lesions do cause a consistent increase in song variability, especially at the beginning of song production [21]. These effects of MMAN lesions on song learning and maintenance could be related to the disruption of a sensorimotor feedback loop involving MMAN and HVC. Third, MMAN also displays auditory activity and responds to auditory playback of the BOS [28]. Although almost all of the auditory activity in HVC originates from NIf and CM, there could be other auditory inputs to HVC, as preliminary data suggest that bilateral lesions of these two areas do not result in song degradation, as seen with deafening [28]. Taken together, these data suggest that MMAN is involved in sensorimotor feedback, that this feedback is necessary for song learning and maintenance, and that MMAN may provide sensorimotor feedback information to HVC to modulate ongoing motor output during song production. This idea is supported by a study showing that MMAN activity does not show a greater response to the first syllable of bird's own song playback (as is seen in HVC) and that song-evoked bursts of activity in MMAN can last more than 100 ms after the onset of song [28].
To further understand the role of MMAN's input to HVC, we characterized the auditory responses in both areas and confirm and extend previous reports that MMAN responds selectively to auditory information and functionally excites HVC. Because MMAN responds selectively to BOS, we also tested whether MMAN provides auditory input to HVC by measuring HVC's auditory response when MMAN was inactivated. We had two alternative hypotheses: (1) MMAN provides auditory input to HVC, and thereby represents a fourth auditory input to HVC that is required for song learning, or (2) MMAN does not provide auditory input to HVC and therefore may have another role, such as providing motor feedback or additional sensory information to HVC necessary for the modulation of song production. Some of these data have appeared in abstract form [29].

Subjects
A total of 36 adult (>90 days post-hatch) male zebra finches (Taeniopygia guttata) were used for this study. All procedures performed in this study were done in accordance with a protocol approved by the Institutional Animal Care and Use Committee at the W.M. Keck Science Department of Claremont McKenna College, Pitzer College and Scripps College. All efforts were made to minimize suffering. All birds used in this study were obtained from the colony in the W.M. Keck Science Department or from a local supplier. All birds were provided food and water ad libitum and were kept on a 14:10 hour day:night light cycle.

Stimuli
Before each experiment, the song of each male bird was recorded by placing the bird in a sound-attenuation chamber (Eckel Industries, Cambridge, MA) with a female bird. Songs were recorded using Sound Analysis Pro [30]. Songs were filtered (high-pass 300 Hz, low-pass 8000 Hz) and edited using Goldwave (Goldwave Inc., St. John's, Newfoundland, CAN). Edited songs included 2-3 motifs, the largest repeatable unit of a song, for the bird's own song (BOS), the BOS in reverse (REV), and conspecific (CON) song. All songs were presented at ~70 dB SPL, measured with a sound level meter (rms, A-weighted, RadioShack).

Surgery
Prior to each experiment, recorded birds were anesthetized with a total of 90-100 µL of 20% urethane, administered in 3 injections of 30-40 µL into the pectoral muscle over the course of 1 hour. Two hours after the last injection, lidocaine (2%, Hospira Inc., Lake Forest, IL) was injected under the scalp, and the scalp was dissected along the midline. The approximate x-y location of MMAN was marked on the surface of the skull, and a head post was mounted on the anterior part of the skull with dental cement (Coltene/Whaledent Inc., Cuyahoga Falls, OH) and cyanoacrylate (Krazy Glue™). Once the cement hardened, the bird was placed on a heating pad (FHC, Bowdoin, ME) on an air table (TMC, Peabody, MA) surrounded by sound foam attached to the interior wall of a Faraday cage. The mounted head post immobilized the bird's head, and the body temperature was held constant (37 °C). The head angle of the bird was set at 40° relative to horizontal. A speaker was placed approximately 35 cm in front of and facing the bird.

Electrophysiology and song presentations
Multiunit extracellular recordings were made with carbon fiber electrodes (Kation Scientific, Minneapolis, MN). Small craniotomies were made in the skull above the approximate locations of MMAN and HVC, and electrodes were lowered into the brain using micromanipulators (Siskiyou, Grants Pass, OR; Newport).
All recordings were amplified (A-M Systems, Sequim, WA), filtered (300 Hz high-pass, 5 kHz low-pass), digitized at 20 kHz (Micro1401, CED, Cambridge, England) and collected using Spike2 software (CED). For HVC, the final electrode position was approximately 2.4 mm lateral of the bifurcation of the midsagittal sinus and 200 to 500 µm ventral to the dorsal surface of the brain. For MMAN, the final electrode position was 5.2 mm anterior and 0.5 mm lateral of the bifurcation of the midsagittal sinus and 1.8 to 2.0 mm ventral to the dorsal surface of the brain. Both nuclei were identified by their individual characteristic firing patterns, correlated spontaneous activity [31], and auditory responses (see Results). All recordings were from the ipsilateral MMAN and HVC. Spontaneous and song-evoked activity was recorded in MMAN and HVC simultaneously. For each recording, 20 to 40 repetitions of each song type (BOS, REV and CON) were interleaved with a 7 ± 2 second inter-stimulus interval. After each recording session, electrolytic lesions (+10 µA for 5 seconds) were made at the MMAN recording site to enable histological confirmation of the recording location (see below). To characterize the synaptic latency between MMAN and HVC, MMAN was stimulated (A-M Systems Model 2100) while an extracellular recording was made in HVC. All stimuli were single pulses 0.3 ms in duration and 10-50 µA in amplitude. The threshold for eliciting a response in HVC was 10-20 µA. For the inactivation of MMAN, MMAN and HVC were first located using carbon fiber electrodes to record from MMAN and HVC; the carbon fiber electrode in MMAN was then replaced with a glass electrode filled with 250 mM GABA (Sigma-Aldrich) in 1 M NaCl. Song-evoked activity was then collected for 10 to 40 repetitions of each song before inactivation (pre), during inactivation (GABA), and 5-10 minutes after GABA application (post). GABA was puffed (30-50 ms at 16-20 psi) out of the recording pipette with a Picospritzer (Toohey Co., Fairfield, NJ). For some experiments, a small quantity of rhodamine dye (~0.5-1%) was mixed with the GABA and puffed into the brain to mark the location and spread of the inactivation. In experiments where dye was not used, the location of the injection site was marked by making an electrolytic lesion (+10 µA for 5 seconds). The inactivation site was later identified histologically.

Histology
After each experiment, the bird was euthanized with a lethal dose of Nembutal (0.05 cc, 50 mg/mL) and perfused transcardially with 0.9% saline followed by 4% paraformaldehyde (in 0.025 M NaPO4 buffer). The brain was then removed from the skull and stored in 4% paraformaldehyde until histological processing. Brains were cryoprotected in 30% sucrose in 4% paraformaldehyde overnight and then sectioned coronally on a freezing microtome (Microm) into 70 µm sections. Lesion sites were identified after the slices were stained with cresyl violet. Digital images of the rhodamine labeling were superimposed on images of the same section viewed under combined darkfield and fluorescent illumination (CorelDraw). MMAN is located medial to LMAN and between the mesopallial lamina (LaM) and lamina pallio-subpallialis (LPS) [25]. LMAN can be readily identified with darkfield illumination or from the cresyl violet staining, as the size of its cell bodies is much larger than that of those in the surrounding tissue (see Figure 2B).
Identification of MMAN in cresyl violet-stained sections is very difficult, so a recording was considered to be in MMAN if the lesion or fluorescent marker was located between the two laminae and medial to LMAN.

Data analysis
To quantify the auditory responses to BOS, REV, and CON in MMAN and HVC, the response strength and z-scores were calculated using a MATLAB script (written by E.S. Fortune). The response strength is calculated as the difference between the mean multiunit firing rate (spikes/second) during the song playback stimulus and the mean firing rate during a baseline pre-stimulus period (1.5-2.5 s) of the same duration. We use the term 'spikes' to refer to any event over a user-defined threshold. Because of the high degree of variability in response strength, response strengths were also normalized and expressed as z-scores. The z-score is calculated as the difference between the firing rate during the stimulus and the baseline firing rate divided by the standard deviation of the difference:

z = (S̄ − B̄) / sqrt(var(S̄) + var(B̄) − cov(S̄, B̄))

where S̄ is the mean firing rate during song playback, B̄ is the mean baseline firing rate, and the standard deviation is calculated by taking the square root of the variance of S̄ plus the variance of B̄ minus the covariance of S̄ and B̄ [17]. The selectivity of the response in HVC and MMAN to one stimulus compared to another was measured using the d′ metric. This metric provides a statistical measure of the discriminability between two stimuli [32]. The d′ value was calculated using a MATLAB script (written by E.S. Fortune and edited by J. McGrady Achiro) using the following equation:

d′ = 2(R̄_STIM1 − R̄_STIM2) / sqrt(σ²_STIM1 + σ²_STIM2)

where R̄_STIM is the mean response strength to the stimulus (STIM) and σ²_STIM is its variance. For our analyses, the selectivity for BOS (STIM1) was compared with REV and CON (STIM2). A d′ of 0.5 was used as the criterion for deeming a response selective [33].
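As a minimal sketch (not the authors' MATLAB script), these metrics can be computed from per-trial firing rates as follows; the trial arrays are made-up placeholders.

```python
# Sketch: response strength (RS), z-score, and d-prime from per-trial
# multiunit firing rates (spikes/s). Arrays are illustrative placeholders.
import numpy as np

def z_score(stim, base):
    """z = (mean(S) - mean(B)) / sqrt(var(S) + var(B) - cov(S, B))."""
    rs = stim.mean() - base.mean()
    sd = np.sqrt(stim.var(ddof=1) + base.var(ddof=1)
                 - np.cov(stim, base, ddof=1)[0, 1])
    return rs / sd

def d_prime(rs1, rs2):
    """d' = 2 (mean RS1 - mean RS2) / sqrt(var RS1 + var RS2)."""
    return 2 * (rs1.mean() - rs2.mean()) / np.sqrt(rs1.var(ddof=1) + rs2.var(ddof=1))

rng = np.random.default_rng(1)
base = rng.normal(14, 3, 20)          # baseline rate, 20 trials
bos  = rng.normal(26, 4, 20)          # rate during BOS playback
rev  = rng.normal(16, 4, 20)          # rate during reversed BOS

rs_bos, rs_rev = bos - base, rev - base   # per-trial response strengths
print(f"z(BOS) = {z_score(bos, base):.2f}")
print(f"d' (BOS vs REV) = {d_prime(rs_bos, rs_rev):.2f}")
```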
Spontaneous activity in MMAN and HVC

Like other canonical nuclei of the song control circuit, MMAN showed characteristic spontaneous bursts of activity that were correlated with spontaneous bursts in HVC [17] (Figure 2). Overall, MMAN displayed more background activity than HVC (HVC, 13.8 ± 2.0 spikes/s; MMAN, 20.0 ± 3.0 spikes/s; paired t-test, p < 0.01). To confirm our recording site, in most experiments we lesioned the recording site (Figure 2B). MMAN is not easily identifiable by Nissl stain, unlike most nuclei in the song system. It is located medial to the lateral nucleus of the anterior nidopallium (LMAN) and between two fiber tracts, the lamina mesopallialis (LaM) and lamina pallio-subpallialis (LPS) [21]. Although we could not absolutely confirm the location of the recording site in every experiment, we determined that we were recording from MMAN based on its electrophysiological properties, which included spontaneous bursts of activity correlated with those in the ipsilateral HVC and auditory selectivity for the BOS. Our electrode tracks were always just lateral to the midsagittal sinus and thus medial to LMAN [22] (see Figure 2).

Auditory response in MMAN and HVC

HVC responds selectively to playback of BOS over other stimuli, including REV and CON [1,10,11,12,13,34]. A previous report showed auditory responses in MMAN that were also selective for BOS over REV [31]. To further characterize the auditory responses in MMAN and HVC, we made simultaneous extracellular recordings from ipsilateral HVC and MMAN and presented auditory stimuli (Figure 3; n = 17 in 16 birds). Playback of BOS, REV, and CON elicited auditory responses in MMAN similar to those recorded simultaneously in HVC (Figure 3). MMAN showed a significant response over baseline to playback of BOS and CON, but not REV (one-tailed t-test; Table 1), whereas HVC showed a significant response to BOS, REV and CON (Table 1). Both MMAN and HVC had a significantly greater response to BOS than to REV or CON (z-scores used for calculation: one-way ANOVA, p < 0.05, F = 10.6 for HVC, F = 7.76 for MMAN; Tukey HSD, p < 0.05). MMAN responded significantly less to BOS than did HVC (z-score values, paired t-test, p < 0.01), but its responses to REV and CON did not differ from those of HVC (z-score values, paired t-test; REV, p = 0.15; CON, p = 0.20). A direct comparison of simultaneously recorded auditory responses in ipsilateral HVC and MMAN revealed that, within a recording, HVC responded more to BOS than MMAN (Figure 4C; points lie above the unity line; p < 0.01). In addition, there was little difference between the responses to REV and CON in simultaneous recordings from MMAN and HVC, as those points were clustered around the unity line (Figure 4C; REV, p = 0.23; CON, p = 0.12). For this analysis, significance was determined by resampling procedures in R to determine the likelihood that the observed number of points would lie above the unity line at random. Briefly, 10,000 resamples from the original data were carried out by randomizing the HVC values (with replacement), re-pairing them with the MMAN values, and re-calculating the percentage of points that lay above the line. The selectivity of HVC and MMAN for BOS over other auditory stimuli can also be measured using the d′ value, a statistical measure of discriminability between two stimuli [35]. A significant preference for one stimulus over another is defined as d′ > 0.5 or d′ < −0.5. Both HVC and MMAN showed a significant preference for BOS over REV and CON (Figure 4D; BOS v CON: HVC, 1.95 ± 0.5; MMAN, 1.06 ± 0.2; BOS v REV: HVC, 2.13 ± 0.5; MMAN, 1.16 ± 0.22). In addition, HVC had significantly greater d′ values for BOS versus CON and for BOS versus REV than did MMAN (paired two-tailed t-test, p < 0.05) (Figure 4D). Comparing d′ values from simultaneous recordings in MMAN and HVC (Figure 4E), we found that significantly more points lay above the unity line for BOS v CON (p < 0.05), supporting the idea that HVC was more selective than simultaneously recorded MMAN. However, for BOS over REV, the points were not significantly above the unity line, suggesting that HVC was not more selective for BOS over REV (p > 0.05; resampling analysis as for z-score values). If the non-significant responses in MMAN (points in the grey) were removed for both BOS v REV and BOS v CON, then the remaining values did lie significantly above the unity line (p < 0.05). Thus, if MMAN is selective for BOS over REV or CON, then the response in HVC is statistically greater. Thus, like HVC, MMAN neurons are more selective for BOS than for REV or CON.
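The resampling test described above was run in R; the following is a rough Python transliteration under the same logic (randomize the HVC values with replacement, re-pair them with the MMAN values, and recompute the fraction of points above the unity line). The function names and the seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def frac_above_unity(hvc, mman):
    """Fraction of paired points lying above the unity line (HVC > MMAN)."""
    return np.mean(np.asarray(hvc) > np.asarray(mman))

def resample_p(hvc, mman, n_resamples=10000):
    """One-sided resampling test: how often does a random re-pairing of HVC
    values (drawn with replacement) with the MMAN values produce at least as
    many points above the unity line as observed?"""
    hvc, mman = np.asarray(hvc, float), np.asarray(mman, float)
    observed = frac_above_unity(hvc, mman)
    null = np.array([
        frac_above_unity(rng.choice(hvc, size=hvc.size, replace=True), mman)
        for _ in range(n_resamples)
    ])
    return np.mean(null >= observed)
```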
Stimulation

To examine the functional synaptic input from MMAN to HVC, we stimulated MMAN while recording extracellularly in the ipsilateral HVC (Figure 5). Stimulation in MMAN resulted in a complex excitatory response in HVC (n = 3). In one case there was a clear and consistent excitatory response in HVC with a delay of 10-17 ms (Figure 5A). In two cases, the MMAN stimulation resulted in a very long-lasting response in HVC; the example shown had a response between 9 and 70 ms (Figure 5B). The initial response (first peak) appeared to be consistent with the timing of the response shown in Figure 5A. The second phase of the response could be due to recurrent activation of the feedback loop through HVC. Stimulation outside of MMAN did not result in a consistent-latency response in HVC (n = 3; data not shown). These data suggest that MMAN provides functional excitatory input to HVC, with a synaptic delay of 10-17 ms, consistent with a previous report [25].

Inactivation of MMAN

As MMAN responds to auditory stimulation and provides functional excitatory synaptic input to HVC, it is possible that MMAN also provides sensory feedback to HVC and contributes to the auditory response in HVC. GABA-A receptors have been localized in MMAN [36], so to test this idea we inactivated MMAN with GABA while recording auditory-evoked activity in HVC (n = 4 in 3 birds). The effect of GABA in MMAN on auditory responses in HVC was calculated using the activity (response) during auditory stimuli (Figure 6), as the response strength was more variable, presumably due to a large variability in baseline activity. GABA inactivation of MMAN had no significant effect on ipsilateral HVC auditory activity in 2 of 4 experiments (Figure 6; one-way ANOVA, p > 0.05). In one experiment there was a continual increase in the auditory response, even after GABA had washed out of MMAN (stars in Figure 6C,D). This response was unusual, and each condition (pre, GABA, post) was significantly different from the others (Tukey post-hoc, p < 0.05). One other experiment showed a significant decrease in response during GABA inactivation of MMAN compared to pre and post (open squares in Figure 6C,D; ANOVA, p < 0.05, Tukey post-hoc test). Pre and post were not different from each other (p > 0.05, Tukey post-hoc test). The site of GABA injection was histologically confirmed in all cases (see Figure 6B for an example). In summary, inactivation of MMAN with GABA had little reliable effect on the auditory response in HVC. These data suggest that MMAN is not a significant source of auditory input to HVC and that its auditory activity may be the result of input through the feedback pathway. In two experiments in which MMAN was missed, GABA application resulted in a dramatic decrease in auditory responses in HVC (Figure 7A; one-way ANOVA, p < 0.05, Tukey post-hoc). The HVC auditory response was significantly smaller during GABA application compared to pre- and post-application (Tukey post-hoc, p < 0.05). Histological analysis showed that, in these cases, GABA was injected ventral to MMAN (Figure 7). In both cases, there was little to no action potential activity in HVC during auditory playback when this area was inactivated. Furthermore, inactivation of this area not only decreased the auditory response in HVC but also greatly reduced all activity in HVC (data not shown). The identity of this area is not known, but it may be the medial part of Area X [37]. The rhodamine dextran that was co-applied with the GABA retrogradely labeled cells in Area X (Figure 7B), indicating that Area X neurons project to this area.
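A minimal sketch of the pre/GABA/post comparison described above (one-way ANOVA followed by a Tukey HSD post-hoc test); the per-trial response values here are hypothetical, not the study's data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-trial auditory responses (spikes/s) in HVC for one site.
pre  = np.array([24.1, 26.3, 22.8, 25.0, 23.7])
gaba = np.array([23.5, 25.1, 24.4, 22.9, 25.8])
post = np.array([24.9, 23.2, 26.0, 24.3, 25.5])

f_stat, p = f_oneway(pre, gaba, post)      # one-way ANOVA across conditions
print(f"ANOVA: F = {f_stat:.2f}, p = {p:.3f}")

if p < 0.05:                               # follow up only if ANOVA is significant
    values = np.concatenate([pre, gaba, post])
    groups = ["pre"] * len(pre) + ["GABA"] * len(gaba) + ["post"] * len(post)
    print(pairwise_tukeyhsd(values, groups))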
Discussion

To gain a better understanding of the influence of MMAN on HVC activity, we recorded simultaneously from these two areas. We found similar auditory responses in HVC and MMAN, although HVC was more selective for BOS over other songs than MMAN. This could be due to an increased signal-to-noise ratio as a consequence of lower spontaneous activity in HVC compared to MMAN. In addition, we found that stimulation of MMAN functionally excited HVC. We found that MMAN inactivation had little effect on auditory-evoked responses in HVC, indicating that MMAN does not provide significant auditory input to HVC in anesthetized birds. A direct comparison of the timing between ipsilateral MMAN and HVC auditory-evoked activity shows a complex interaction between MMAN and HVC activity, where sometimes HVC activity leads that in MMAN and sometimes MMAN activity leads that in HVC [31]. Using cross-correlation analysis of multiunit activity in MMAN and HVC, Seti and Okanoya (2008) found that sometimes HVC led MMAN activity by 2-25 ms and sometimes MMAN led HVC activity by 10-25 ms. The delays between MMAN and HVC auditory activity are consistent with the electrical stimulus-evoked activity in HVC, although we typically saw a longer delay in responses. A previous report showed that electrical stimulation of HVC resulted in a 4-20 ms delay in the neural response in MMAN [25]. The complex timing is most likely due to the feedback loop from the cortex through the thalamus back to the cortex. Single-unit recordings from MMAN and HVC may help resolve the more ambiguous timing between the two nuclei; however, this is unlikely to fully resolve the issue, as the same phenomenon has been shown in lateral MAN (LMAN) and HVC using intracellular recordings from both sites [38]. Another way to potentially resolve the ambiguous timing is to repeat the experiments presented here after lesioning RA to functionally remove the feedback. Although MMAN is selective for auditory playback of the BOS and stimulation of MMAN excites HVC, inactivation of MMAN has no consistent effect on auditory responses in HVC. This is in contrast to what is seen with inactivation of the two main known auditory inputs to HVC, NIf and CM, which results in a significant loss of auditory activity in HVC [15,17]. The inconsistent effect of MMAN inactivation on HVC activity could be due to several factors, including an inconsistent volume of injected GABA and GABA injections that extended beyond the bounds of MMAN. We found that GABA application slightly ventral to MMAN resulted in a profound loss of auditory activity in HVC. The proximity of this ventral site to MMAN makes precise inactivation of MMAN even more difficult and could account for the small decrease in HVC auditory activity in one of the inactivation experiments. Further experiments are needed to more fully characterize the identity and influence of this unknown area on auditory activity in HVC. The small and inconsistent influence of MMAN on HVC auditory activity suggests that MMAN plays another, perhaps modulatory, role in HVC activity, or could provide input about other sensory (e.g., proprioceptive) information to HVC, which could be important for the modulation of song. One intriguing possibility is that MMAN provides input to the song system about the internal state of the animal, via its indirect input from the lateral hypothalamus [25]. This may be critical for regulating song production, and possibly song frequency, by integrating information about sexual maturity, the time of day, or other information regarding the internal state of the animal. It is possible that the influence of MMAN on HVC auditory activity is dampened by the anesthesia, although the anesthetic used here usually enhances auditory responses in the canonical song system and auditory responses are greatly reduced in awake birds [39]. Future chronic recordings from awake, behaving finches will be needed to more fully determine the effects of MMAN on auditory and motor activity in HVC.
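For readers wishing to reproduce this kind of timing analysis, a minimal sketch of a multiunit cross-correlogram is given below; the binning, lag range, and function name are our own assumptions rather than the procedure of the cited study.

```python
import numpy as np

def lead_lag_ms(mman, hvc, bin_ms=1.0, max_lag_ms=50.0):
    """Peak of the normalized cross-correlogram between equal-length binned
    multiunit rate vectors; a positive lag means MMAN activity leads HVC."""
    mman = (mman - mman.mean()) / mman.std()
    hvc = (hvc - hvc.mean()) / hvc.std()
    max_lag = int(max_lag_ms / bin_ms)
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([np.mean(mman[max(0, -k):len(mman) - max(0, k)] *
                           hvc[max(0, k):len(hvc) - max(0, -k)]) for k in lags])
    return lags[np.argmax(cc)] * bin_ms, cc.max()
```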
Contributions of MMAN to bilateral coordination

It has been proposed that the feedback loop to HVC through MMAN may work to coordinate bilateral HVC activity because it allows motor information from one RA to reach the contralateral HVC via DMP and MMAN [20,22]. This loop through DMP is one of only two known feedback loops with bilateral projections in the song system [19]. The other pathway projects from RA to PAm (paraambigualis; see Figure 1) or to DM (dorsomedial nucleus of the intercollicular region; not shown), then from each of these to Uva (not shown), and back to HVC [20]. It has been shown that while bilateral lesions of Uva disrupt song production, unilateral lesions of Uva disrupt song only temporarily [20]. Interestingly, unilateral Uva lesions permanently disrupt song if MMAN is also impaired [40]. Bilateral lesions of MMAN in adult finches result in an increase in song variability at the beginning of a song, which is consistent with MMAN's role in coordinating activity between the two hemispheres. Taken together, these data suggest that MMAN is an important component of hemispheric coordination in the vocal motor pathway. Several reports suggest that the pattern-generating circuit for song production is located in HVC [5,7,8]. In addition, activity in the two hemispheres is highly coordinated [41]. Bilateral activation of both HVCs by MMAN may provide consistent feedback that can coordinate bilateral activity in HVC, similar to inputs that reset activity in pattern-generating circuits.

A novel auditory input to HVC?

Although inactivation of MMAN did not significantly alter auditory responses in HVC, we found, surprisingly, that inactivation of the area ventral to MMAN resulted in a dramatic reduction of spontaneous and auditory activity in HVC. The identity of this area is unknown, although it may be the medial part of Area X. Consistent with this result, Kubikova et al. (2007) showed that lesions in medial Area X resulted in lower ZENK expression in the ipsilateral HVC than in the contralateral HVC [37], while MMAN lesions did not result in differences in ZENK expression in the ipsilateral versus contralateral HVC. Future work will further characterize this area and its anatomic connectivity.
An Overview of the Stability and Fretting Corrosion of Microgrooved Necks in the Taper Junction of Hip Implants

Fretting corrosion at the head-neck interface of modular hip implants, scientifically termed trunnionosis/taperosis, may cause regional inflammation, metallosis, and adverse local tissue reactions. The severity of such a deleterious process depends on various design parameters. In this review, the influence of surface topography (in some cases called microgrooves/ridges) on the overall performance of microgrooved head-neck junctions is investigated. The methodologies, together with the assumptions and simplifications, as well as the findings from both experimental observations (retrieval and in vitro) and numerical approaches used in previous studies, are presented and discussed. The performance of microgrooved junctions is compared to that of junctions with a smooth surface finish in two main categories: stability and integrity; and wear, corrosion, and material loss. Existing contradictions and disagreements among the reported results are presented and discussed in order to give a comprehensive picture of microgrooved junctions. Current research needs and possible future research directions on microgrooved junctions are also identified and presented.

Introduction

Over the past four decades, modularity at the head-neck (trunnion) junction of hip implants has become popular, as it enables surgeons to address patient-specific anatomical and geometrical requirements in hip joint arthroplasty [1-3]. In addition, modularity enables surgeons to select different materials for the junction components (head and trunnion) [4-6] with lower risks/costs in replacement surgeries [7,8]. However, despite these advantages, modularity is reported to be a root cause of mechanically assisted crevice corrosion (MACC) [9-11]. The Morse taper interface of the head-trunnion junction, which is highly loaded by physical activities in the presence of a corrosive environment (body fluid), releases metallic ions/debris from the interface, which consequently causes regional inflammation, metallosis, and adverse local tissue reactions (ALTRs) [4,5,7]. The severity of this damage depends on various factors such as the taper angle mismatch [12-14], geometrical dimensions [15,16], surface topography [17,18], the type, direction, and magnitude of the applied loads [19,20], and the assembly force and/or procedure [21-23]. The current understanding recommends a well-engaged interlock as one possible solution for minimizing damage at the junction interface [4,13,14,24]. Surface finish/roughness is one of the key parameters affecting the interface engagement; it is of interest because manufacturers traditionally believe that it can improve junction integrity and durability [4]. There are two main classes of surface finish: (1) trunnions of the same roughness as their head counterparts, and (2) trunnions produced with purposely designed microgrooves (also called ridges/undulations) [25]. The periodic pattern of such microgrooves is sometimes classified with an amplitude and wavelength of more than 4 µm and 100 µm, respectively [25]. These microgrooves were originally created on the trunnion's surface to minimize the risk of the brittle fracture which may naturally occur in ceramic heads.
These microgrooves are also believed to reduce the stress field in the junction components and increase the engaging area through the plastic deformation occurring at the tips of the microgrooves, thereby creating possible localized cold welding [4]. This may subsequently reduce the probability of fluid ingress into the crevice-like gap of the junction due to toggling effects, thus reducing the susceptibility to corrosion. The presence of such microgrooves is also believed to partly compensate for the influence of the unavoidable taper angle mismatch, which results in a smaller contact area and a possible drop in integrity [26-28]. Although modern junctions are mostly designed and manufactured with microgrooves, research confirms that there is a limited understanding of how these microgrooves affect junction performance, as evident in the inconsistent and often contradictory results reported by researchers. Some studies find no significant role played by these microgrooves in the interface damage [29,30], but others report higher/lower interfacial damage for these junctions [26-28]. There is also disagreement on the influence of these microgrooves on the pull/twist-off strength [24,28]. In addition, variations in the design of microgrooves are very evident across junction batches, even those produced by the same manufacturer [26-28]. Taking into account the aforementioned points, the aim of this paper was to integrate the latest research findings in order to give an overview of the influence of microgrooves on the performance of head-neck junctions. Hence, the different approaches taken by various researchers, together with their specific limitations and simplifications, are presented and compared. The main results and potential reasons for contradictions are discussed in order to provide a more comprehensive picture of microgrooved junctions, as well as their performance and longevity in actuality. The performance of the junctions is evaluated according to two design metrics categorized into two main subsections: stability and integrity; and wear, corrosion, and material loss.

Taper Junctions of Hip Implants

The taper junction of modular hip implants consists of a femoral head assembled intraoperatively onto a male trunnion [5]. The two main components are usually manufactured from ceramic (alumina/zirconia), and Co28Cr6Mo, 316L stainless steel, and Ti6Al4V alloys [4,5]. There are various types of heads/trunnions on the market with different design characteristics (geometry, material, and surface finish) [25]. In terms of surface finish (the focus of this review), the engaged surfaces of the junction components can be machined smoothly or with microgrooves. Normally, the microgrooves are created on the trunnion surface, and the head taper is machined smoothly. Alternatively, the surfaces of both the head and the trunnion components are machined smoothly/with microgrooves. The design characteristics of these periodically created microgrooves (the amplitude and/or the wavelength), together with whether or not both components are machined smoothly/with microgrooves, can positively or negatively affect the lifespan of taper junctions. Such an influence is reviewed and discussed below.
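As a side note, the amplitude/wavelength criterion quoted above can be written as a one-line classification rule; the function name is illustrative.

```python
def is_microgrooved(amplitude_um: float, wavelength_um: float) -> bool:
    """Classify a trunnion surface as microgrooved using the thresholds
    quoted above (amplitude > 4 um and wavelength > 100 um) [25]."""
    return amplitude_um > 4.0 and wavelength_um > 100.0

# e.g., a profile with 11 um amplitude and 200 um spacing is microgrooved:
assert is_microgrooved(11.0, 200.0)
assert not is_microgrooved(1.0, 99.0)   # a "smooth" machined finish
```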
Stability and Integrity

The stability/integrity of a junction is usually used to foretell the possibility of postoperative issues in hip arthroplasty. The stability of taper junctions is compared using various metrics such as the contact situation (contact area and deformation), micromotion, pull-off forces, and twist-off moments [4,5]. There have been few studies with a key focus on investigating the influence of microgrooves on the stability of taper junctions. In some of these studies, this influence was combined with the influence of other variables such as the assembly force, trunnion length, and taper angle mismatch. In a recent study by Dransfield et al. [25], the contact situation was monitored by measuring the deformations of the microgrooves using a method called "roundness measurement". Impaction forces of 2, 4, and 8 kN were selected to assemble 27 mm CoCr heads onto Ti trunnions. The impaction forces were applied at three different angles: 10° anterior, 10° antero-proximal, and axial. The assembled junctions were then dismantled using an Instron testing machine with an axial tensile load. Of all the test batches, the junction assembled with an 8 kN impaction force at 10° antero-proximal had the maximal dismantling force. The authors of the study argued that monitoring the assembly force during the tests was quite challenging; therefore, the dismantling force did not fully represent the pull-off strength. Hence, they selected the ratio of the dismantling force to the assembly force as a metric for evaluating the junction integrity. Having determined this ratio, they observed that the axially assembled junctions were much stronger than those assembled off-axially. At the assembly force of 2 kN, the anterior junctions were stronger than the antero-proximal ones, while, at assembly forces of 4 and 8 kN, the latter were stronger than the former. Global compression of the microgroove amplitudes increased with an increase in the assembly force, specifically for the axially assembled junctions (Figure 1). This finding is consistent with the philosophy behind the design and creation of microgrooved trunnions.
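A trivial sketch of the integrity metric used in [25] (dismantling force divided by assembly force); the numerical values below are illustrative, not Dransfield et al.'s data.

```python
import numpy as np

# Hypothetical (assembly force [kN], dismantling force [kN]) pairs for one
# impaction angle.
assembly = np.array([2.0, 4.0, 8.0])
dismantling = np.array([1.1, 2.6, 5.9])

# Integrity metric of [25]: dismantling force / assembly force.
ratio = dismantling / assembly
for f, r in zip(assembly, ratio):
    print(f"assembly {f:.0f} kN -> dismantling/assembly = {r:.2f}")
```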
The coupling between assembly force and microgrooves was also reported by Matt et al. [24]. They evaluated the pull-off strength of a number of customized 12/14 Ti trunnions with different lengths (named standard (14.5 mm) and mini (6.5 mm)) paired with a 28 mm CoCr head. The axial impaction forces were selected in the interoperative range of 2 kN to 6 kN. Three groups of junctions were selected for the experiments: standard/smooth (microgroove amplitude of 7.5 µm), standard/grooved (microgroove amplitude of 15.5 µm), and mini/grooved. The results of this study showed an increase in the pull-off strength with an increase in the assembly force, consistent with those reported by Dransfield et al. [25]. When the assembly force was less than 4 kN, the smooth junctions showed significantly higher strengths compared to the microgrooved ones. At an assembly force of 6 kN, however, no significant change in strength with the surface finish was observed. For comparison, Figure 2 illustrates the difference between the pull-off strengths of the junction groups considered in Matt et al. [24]. From this figure, one can see that, when the trunnion was shorter and the microgrooves were present, the pull-off strength was higher than in the other cases, possibly because of the surface microgrooves and the length of the trunnion. A shorter trunnion possibly increases the positive influence of the designed microgrooves, thereby increasing the integrity. The influence of the microgrooves can also be improved by manipulating the parameters of the microgroove creation procedure. In this regard, the twist-off strength of the turned taper junction was strongly improved by the turn milling method in Döbberthin et al. [27]. It was shown that, as a function of machining parameters such as the rotational speed and axial feed, the resulting topography on the trunnion surface (and, thus, its integrity) changes significantly (Figure 3a-c). Figure 3d compares the dismantling torques of the junctions with different topographies.
From this figure, it can be implied that neither extremely high ("C") nor extremely low ("A") roughness resulted in an integrity as high as that of the moderate roughness ("B"). This shows the double-sided influence of the created microgrooves on the junction integrity, which strongly depends on the design characteristics of the microgrooves. The published literature includes some studies in which the finite element (FE) approach was used to estimate the contact situation of microgrooved junctions. Bechstedt et al. [31] observed significant changes in the contact situation of microgrooved junctions after assembly. They used a 2D axisymmetric FE technique to evaluate the contact situation and the level of micromotions at the interface of a 36 mm CoCr/ceramic head on a Ti trunnion assembled with forces of 0.5, 2, 4, and 8 kN. The microgrooves of the trunnion were modeled using sinusoidal periodic waves with an amplitude of 11 µm and a spacing of 200 µm. For the head tapers, the amplitude and spacing were considered as 10 µm and 220 µm, respectively.
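A minimal sketch of how such a sinusoidal microgroove profile can be generated for an axisymmetric model is given below; the base radius and taper half-angle are illustrative assumptions, and the quoted amplitude is treated as a peak-to-valley height.

```python
import numpy as np

def trunnion_profile(z_mm, r0_mm=7.0, half_angle_deg=2.86,
                     amp_um=11.0, spacing_um=200.0):
    """Radius r(z) of an axisymmetric trunnion: a linear taper with a
    sinusoidal microgroove wave superimposed. r0_mm and half_angle_deg are
    illustrative; the 11 um amplitude and 200 um spacing follow [31]."""
    taper_mm = r0_mm - z_mm * np.tan(np.radians(half_angle_deg))
    groove_mm = 0.5 * (amp_um / 1000.0) * np.sin(2.0 * np.pi * (z_mm * 1000.0) / spacing_um)
    return taper_mm + groove_mm

z = np.linspace(0.0, 12.0, 2401)   # 12 mm engagement length, 5 um axial steps
r = trunnion_profile(z)            # boundary nodes for a 2D axisymmetric mesh
```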
The model was validated against experimental data for the contact situation. Once verified, the results for three surface topographies with heights of 2, 1, and 14 µm showed the pivotal role played by the assembly load in altering the contact area, whereas the deeper microgrooves resulted in a smaller contact area (partly against the main philosophy behind the microgrooves). For the junctions with a CoCr head, all microgrooves were in contact even with the lowest assembly force; however, for the cases with a ceramic head, few microgrooves were in contact. This was mainly due to the deformation of both components and their possible interactions in the junctions with a metallic head. Overall, only a few of the microgrooves showed plastic deformation, and the level of plasticity increased with an increase in either the assembly force or the microgroove heights. Godoy et al. [32] used a 2D axisymmetric model with a sinusoidal pattern of microgrooves on the trunnion with an amplitude and period of 33 µm and 310 µm, respectively. The model included a roughness of 0.33 µm on the surface of the head tapers. They verified the FE results against experimental data from interferometry for 28 mm CoCr heads assembled onto 12/14 Ti trunnions with an impaction force of 6 kN. The results of the microgroove deformations indicated a good agreement between the FE predictions and the experiments, such that the mean changes in the microgroove heights from FE and measurements were 1.40 µm and 1.23 µm, respectively. Furthermore, 76-89% and 91-100% of all the microgrooves were deformed according to the experimental and FE results, respectively, which is partly inconsistent with the findings in [31]. The study conducted by Godoy et al. [32] clearly showed the importance of the microgroove heights for the global deformations, as reported in [31]. Plastic deformation was noted in the FE models at the tips of the microgrooves, as illustrated in Figure 4a. Figure 4b shows the degree of microgroove deformation in the distal third, middle third, and proximal third regions. Higher deformation in the proximal third was due to the positive mismatch angle between the trunnion and the head taper. This shows a possible interaction between the design of the microgrooves and the taper angle mismatch. In addition to the taper angle mismatch, the deformation and contact pressure were shown to be a function of the magnitude of the assembly force in Gustafson et al. [33]. A change in assembly force from 4 kN to 12 kN changed the contact pressure (from 803.9 MPa to 964.8 MPa) and the plastic strains (from 0.6% to 6%). However, changing the number of hits from one to three did not significantly alter these parameters. The model in [33] was recently used in Gustafson et al. [34] to evaluate the influence of taper angle mismatch and microgroove pattern on the junction integrity. For the trunnion, four microgroove patterns were selected as follows: (amplitude, spacing) = (2, 30), (6, 150), (11, 200), and (14, 200) µm. The head taper was modeled as either "ideal/flat" or with an amplitude and spacing of 2 µm and 50 µm, respectively. Taper angle mismatches of −0.2°, −0.05°, 0°, +0.05°, and +0.2° were considered for the models. When comparing the contact area, the influence of the trunnion microgroove pattern was the most important factor, followed by the presence of microgrooves on the head taper.
The taper angle mismatch did not show a significant effect on the contact area; however, its influence was observed in the maximal contact pressure. The presence of taper microgrooves and the taper angle mismatch played key roles in the plastic strain magnitudes and distributions, which is consistent with the findings in Godoy et al. [32]. Both the FE and the experimental results obtained in the aforementioned studies clearly emphasized the influence of various design parameters such as the assembly force, taper angle mismatch, and material couple on the effectiveness of the microgrooves in enhancing the junction integrity. In contrast, there are some studies which highlighted a negative or neutral influence of the microgrooves on the junction integrity. Mueller et al. [35] conducted an extensive investigation of the influence of the contact situation (proximal, distal, and full contact), trunnion topography, head material (ceramic/metallic), and impaction force on the stability of the junctions. The stability was measured using a twist-off testing approach. Four surface topographies (smooth, symmetric rough, asymmetric rough, and very rough) were considered for the 12/14 Ti trunnions, which were then coupled with 32 mm ceramic and CoCr heads in proximal contact with assembly forces of 1, 3, and 6 kN. They reported the level of assembly force as the most important parameter in determining the twist-off strength, followed by the head material. For higher assembly forces and in the case of CoCr heads, higher twist-off strengths were observed, which partly supports the findings in [31]. Interestingly, no significant influence of the surface topography on the twist-off moment was reported. This result was also partly confirmed by Mai et al. [28], who found that the surface topography did not significantly change the stability of CoCr-Ti head-neck junctions (assembled with 3 kN). Three surface topographies, named fine machined (FM), rough machined (RM), and furrowing (FU) (Figure 5a), were created on the surfaces of the 12/14 trunnions. Figure 5b shows the maximal (1754 N) and minimal (1465 N) dismantling forces for the furrowing and rough machined taper junctions, respectively. This figure partly shows the double-sided influence of the microgroove pattern, as observed and reported in Döbberthin et al. [27], with somewhat similar assembly forces in [27] (4 kN) to those used in Mai et al. [28]. In a study by Falkenberg et al. [36], it was demonstrated that the presence of microgrooves (amplitude of 30 µm) did not produce any significant changes in the micromotions at the head-trunnion interface under taper angle mismatches of 0.052°, 0.100°, and 0.134° and assembly forces of 2, 4, and 6 kN. Although the microgroove heights, taper angle mismatch, and assembly force used in [36] are well comparable with those in [32,34], no significant contributions from the microgrooves were observed.
The results reported in the aforementioned studies indicate that microgrooved junctions have either a positive or a neutral effect on the junction integrity. Changing the influence from neutral to positive depends on various design parameters. Experimental testing of all possible design shapes is not feasible in reality. As the FE approach has shown its capability in predicting the overall deformations of the microgrooves, it would be wise to develop these models further to see what happens as the design variables change within practical ranges, using stochastic FE analyses similar to the study carried out for smooth junctions by Donaldson et al. [37]. It also seems that the roughness of the surface should be neither low nor high, a moderate level being preferred; however, this also depends on the machining method used to create the microgrooves, as in the studies of Mai et al. [28] and Döbberthin et al. [27], where an amplitude of around 7 µm resulted in different junction behaviors. One reason for the inconsistent findings of these studies could be that microgrooves created on the head taper surface might change the interface behavior and its subsequent effects. Some of these studies considered the roughness of the head tapers, whereas others did not account for this parameter.
This requires more attention in experimental/numerical studies. Furthermore, some studies have investigated the influence of microgrooves only through the permanent deformations of the microgrooves, whereas, for a better comparison, one needs to compare the pull/twist-off strengths of the junctions. This is because plastic deformations are not direct metrics for evaluating the junction integrity.

Wear, Corrosion, and Material Loss

The stability and integrity of taper junctions are evaluated by indicative metrics mainly in order to predict the durability of the junctions against the wear/corrosion damage mechanism occurring at the interface. Although useful and indicative, these metrics do not provide a comprehensive indication of the junction performance. The wear/corrosion phenomenon is a synergistic degradation process through which mechanical abrasion, electrochemical repassivation/dissolution, and mechanical-electrochemical interrelations contribute to the total material loss at the interface. Therefore, in this section, studies on the wear/corrosion of microgrooved junctions are reviewed, and their main results are presented. In this regard, an extensive cohort study was conducted by Arnholt et al. [30], in which 120 junctions were scored according to the Higgs-Goldberg method for determining fretting corrosion damage. The junctions were classified into two main groups, each containing 60 junctions. In one group, the mating trunnion was smooth (with an amplitude and wavelength less than 4 µm and 100 µm), while, in the other, microgrooved trunnions (with an amplitude and wavelength more than 4 µm and 100 µm) were used.
The trunnions and heads were made up of Ti/CoCr and CoCr alloys, respectively. The observations showed no significant difference in the maximum depth of material removal or the fretting corrosion damage score between the two groups (for both the female and male tapers). The signs of damage were, however, more visible on the microgrooved tapers. Both groups showed signs of micromotion, fretting corrosion damage, and localized chromium-rich oxide layers, which were not influenced by the surface topography of the trunnions (Figure 6). Conversely, Panagiotidou et al. [38] reported the surface topography as an important parameter affecting the wear/corrosion behavior of CoCr-Ti head-trunnion interfaces. The CoCr heads were 28 mm in diameter and mated with 12/14 Ti trunnions (rough/standard) for in vitro tests, through which a sinusoidal load oscillating between 0.1 kN and 3.1 kN was applied to the junction (immersed in PBS solution) for 10 million cycles at a frequency of 4 Hz. The Ra roughnesses of the head taper and the rough trunnion were reported as 0.58 µm and 2.73-2.79 µm, respectively. After the in vitro tests, the surface roughness of the head tapers significantly increased where a rough trunnion was used. For the corrosion tests, two in vitro tested junctions were then loaded by a sinusoidal regime fluctuating between 0.1 and 1.5 kN with a frequency of 0.66 Hz for 1000 cycles. The corrosion tests included the open-circuit potential (OCP), potentiostatic tests at 200 mV, and a pitting scan. The results of these tests showed fracture of the oxide layer (and, consequently, electrochemical repassivation) where a rough trunnion was used. Overall, the use of the rough trunnion exacerbated the crevice environment, resulting in more electrochemical reactions (drops in OCP, creation of a potentiostatic current, and a hysteresis loop in the pitting scan). Therefore, the material loss in junctions with rough trunnions could possibly originate from mechanical wear, corrosion, and their interrelations, whereas, in the junctions with the standard trunnion, the role of mechanical wear seems to predominate over that of corrosion.
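For reference, the cyclic load history used in such in vitro protocols is a simple offset sinusoid; a minimal sketch with the parameters of [38] (0.1-3.1 kN at 4 Hz):

```python
import numpy as np

def fretting_load_kn(t_s, f_hz=4.0, f_min_kn=0.1, f_max_kn=3.1):
    """Sinusoidal test load oscillating between f_min and f_max at f_hz."""
    mean = 0.5 * (f_max_kn + f_min_kn)
    amp = 0.5 * (f_max_kn - f_min_kn)
    return mean + amp * np.sin(2.0 * np.pi * f_hz * t_s)

t = np.linspace(0.0, 1.0, 1001)        # one second of loading = 4 cycles
load = fretting_load_kn(t)
hours = 10_000_000 / 4.0 / 3600.0      # 10 million cycles at 4 Hz
print(f"full test duration ~ {hours:.0f} hours")   # ~694 hours
```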
This is somewhat consistent with the results found by Brock et al. [18], where rough trunnions presented higher volume loss rates (0.402 mm³ vs. 0.123 mm³ per year). The diameters of the CoCr heads in that study [18] were between 36 mm and 63 mm, mated with either 11/13 or 12/14 Ti trunnions. Overall, the fretting corrosion damage in the head tapers was higher than that in the trunnions [18]. Considering the role of the material couple, Pourzal et al. [39] more extensively investigated the wear/corrosion damage in 269 head tapers and trunnions classified into CoCr-CoCr and CoCr-Ti head-trunnion junctions. Head diameters were between 28 mm and 50 mm, and the trunnions were of 12/14 and 14/16 proximal/distal diameters. Their results revealed interesting patterns in the damage for both material combinations. In CoCr-CoCr junctions, rougher trunnions resulted in lower damage scores (resulting from wear and corrosion) in the head tapers compared to the smooth ones. For the CoCr-Ti junctions, rougher head surfaces were associated with higher damage scores in both the head and the trunnion components, whereas increasing the roughness of the trunnion entailed lower damage scores in the trunnion. Overall, it was observed that the damage scores of CoCr-CoCr junctions were higher compared to those of CoCr-Ti junctions. The more distinct damage observed in CoCr/CoCr couples was related to the higher susceptibility of CoCr to different corrosion mechanisms. Higher fretting corrosion damage in CoCr has also been reported by Kop et al. [40] in both smooth and microgrooved devices; however, they raised the issue of cold welding in the case of Ti devices. The influence of the surface roughness for the two material combinations obtained in Pourzal et al.'s study [39] is illustrated in Figure 7. From this figure and according to the results of Kop et al. [40], the material combination is a key factor in determining the influence of the microgrooves on the damage severity at the interface. Figure 7c confirms the material transfer from the Ti surface to the CoCr surface (the influence of the material combination), which can then change the influence of the microgrooves on the junction performance. The contribution of roughness to higher material losses at metal-on-metal junctions has also been reported by Hothi et al. [41], where they related the higher volume losses in Corail to their rougher and shorter trunnions (height and spacing of ~11.5 µm and 0.2 mm) in comparison with those in S-ROM (height and spacing of ~1 µm and 0.099 mm). However, the shorter trunnion was observed to offer a better integrity in Matt et al. [24]. Therefore, these two observations might be more related to the microgrooves. Figure 8 shows a general comparison of the surface roughness of the two groups considered by Hothi et al. [41].
This research study was conducted on 46 Ti trunnions with a 12/14 design. They studied the influence of stem topography mated with ceramic/metal heads on the severity of fretting corrosion damage. It was observed that stems mated with ceramic heads were less damaged (in the form of both fretting and corrosion) if they were coupled with a smoother trunnion, while, in the case of having a metallic head, there was no meaningful influence of the surface roughness on the intensity of the fretting and corrosion damage scores (Figure 9a). The scoring method was based on the approach proposed by Goldberg et al. [42] through which the damage intensity was classified into four main categories: no damage, mild, moderate, and severe, as illustrated in Figure 9b. The fretting and corrosion damage scores were determined using Cohen's kappa tests. (a) (b) Figure 9. (a) The influence of the surface roughness on the fretting and corrosion damage of the trunnions mated with ceramic and metallic heads in Stockhausen et al. [26]. Double asterisks and circles denote highly significant differences (p ≤ 0.01), and maximum/minimum damage, respectively (b) Four main categories for classification of the damage severity: no damage, mild, moderate, and severe, respectively, from left to right. In addition to Hothi et al.'s study [41], the variations of the surface topography were also raised in [26]. In a retrieval study, Stockhausen et al. [26] reported considerable variations in the surface topographies of different designs for the taper junctions. This research study was conducted on 46 Ti trunnions with a 12/14 design. They studied the influence of stem topography mated with ceramic/metal heads on the severity of fretting corrosion damage. It was observed that stems mated with ceramic heads were less damaged (in the form of both fretting and corrosion) if they were coupled with a smoother trunnion, while, in the case of having a metallic head, there was no meaningful influence of the surface roughness on the intensity of the fretting and corrosion damage scores (Figure 9a). The scoring method was based on the approach proposed by Goldberg et al. [42] through which the damage intensity was classified into four main categories: no damage, mild, moderate, and severe, as illustrated in Figure 9b. The fretting and corrosion damage scores were determined using Cohen's kappa tests. In addition to Hothi et al.'s study [41], the variations of the surface topography were also raised in [26]. In a retrieval study, Stockhausen et al. [26] reported considerable variations in the surface topographies of different designs for the taper junctions. This research study was conducted on 46 Ti trunnions with a 12/14 design. They studied the influence of stem topography mated with ceramic/metal heads on the severity of fretting corrosion damage. It was observed that stems mated with ceramic heads were less damaged (in the form of both fretting and corrosion) if they were coupled with a smoother trunnion, while, in the case of having a metallic head, there was no meaningful influence of the surface roughness on the intensity of the fretting and corrosion damage scores (Figure 9a). The scoring method was based on the approach proposed by Goldberg et al. [42] through which the damage intensity was classified into four main categories: no damage, mild, moderate, and severe, as illustrated in Figure 9b. The fretting and corrosion damage scores were determined using Cohen's kappa tests. (a) (b) Figure 9. 
Figure 9. (a) The influence of the surface roughness on the fretting and corrosion damage of the trunnions mated with ceramic and metallic heads in Stockhausen et al. [26]. Double asterisks and circles denote highly significant differences (p ≤ 0.01) and maximum/minimum damage, respectively. (b) The four main categories for classification of the damage severity: no damage, mild, moderate, and severe, respectively, from left to right.

In the recent study by Mai et al. [28] (detailed in Section 2.1), a series of in vitro fretting corrosion experiments was conducted to elucidate the influence of the surface topography on the severity of the damage. As shown schematically in Figure 10a, an off-axial sinusoidal load oscillating between 300 N and 2500 N with a frequency of 4 Hz was applied to the junction, immersed in an acidic solution enriched with chloride ions (pH of 2.9), for 5 million cycles. After completing the fretting corrosion tests, the junctions were dismantled. It was observed that the stability was maximal for the fine machined junctions (2660 ± 284 N) (named "consolidated junctions"), followed by the furrowed ones (1925 ± 334 N) and the rough machined ones (1253 ± 355 N) (Figure 10b). Consistent with the previous studies, the material loss increased with an increase in the surface roughness, such that the maximal material loss occurred for the rough machined junctions, followed by the furrowed and fine machined samples (Figure 10c). The higher material losses in junctions with rough trunnions were related to the higher possibility of solution ingress into the interface. Metal-on-metal junctions were suggested to be used with smoother trunnions because metal-on-metal junctions are more susceptible to corrosion, as confirmed in Pourzal et al. [39]. Interestingly, a correlation was found between the dismantling force after the fretting corrosion tests and the material losses at the interface (Figure 10d): a higher dismantling force results in less material loss. Comparing the results in Figures 5b and 10b, it can be seen that the influence of the surface topography on the dismantling force changed upon applying the cyclic tests.
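A minimal sketch of the kind of correlation analysis behind Figure 10d; the force/loss pairs below are invented to mimic the reported negative trend and are not the data of [28].

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical (dismantling force [N], material loss [mm^3]) pairs;
# illustrative values only.
force = np.array([2660.0, 2500.0, 2800.0, 1925.0, 1700.0, 2100.0,
                  1253.0, 1000.0, 1450.0])
loss = np.array([0.05, 0.07, 0.04, 0.12, 0.15, 0.10, 0.24, 0.30, 0.21])

r, p = pearsonr(force, loss)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")  # strong negative correlation
```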
Higher material losses in junctions with rough trunnions were attributed to the higher likelihood of solution ingress into the interface. Metal-on-metal junctions were suggested to be used with smoother trunnions because such junctions are more susceptible to corrosion, as confirmed in Pourzal et al. [39]. Interestingly, a correlation was found between the dismantling force after the fretting corrosion tests and the material losses at the interface (Figure 10d): a higher dismantling force corresponded to less material loss. Comparing the results in Figures 5b and 10b, it can be seen that the influence of the surface topography on the dismantling force changed upon applying the cyclic tests.

Figure 10. (c) The material loss at the head taper for the three surface topographies; the asterisk denotes statistically significant differences (p ≤ 0.05). (d) The correlation between the material loss at the head taper and the dismantling force after the cyclic tests.

Higher wear rates in the microgrooved junctions in comparison with the smoothed junctions were also observed in an FE study by Ashkanfar et al. [43]. In their study, a CoCr-Ti head-trunnion junction was assembled with a 4 kN impaction force, and a distal contact with a mismatch angle of −0.05° was considered. The head was 36 mm in diameter and was mated with a 12/14 trunnion. Under walking loads, the fixation of the microgrooved junction was lost after a number of cycles; therefore, the micromotions at the interface escalated. The presence of ridges and their influence on the wear depth were also modeled by Zhang et al. [44] using a sub-modeling technique. It was shown that the wear depth in the sub-model was higher than its corresponding value in the global head-neck junction model. A recent FE study by Capitanu et al. [45] also confirmed higher wear rates for the microgrooved junctions, consistent with [43,44]. In all FE simulations [43][44][45], the role of corrosion was neglected, and the total loss was assumed to originate from mechanical wear only. Fretting corrosion damage, as a synergistic process, is believed to be significantly affected by electrochemical corrosion, and this needs to be included in future FE simulations to produce a better picture of the influence of microgrooves. The comparison of all the studies above seems to signify a common message of higher volume losses for microgrooved junctions. In Section 2.1, it was observed that the stability and integrity of the junction are positively influenced by the microgrooves, while, in this section, the fretting corrosion of such junctions was more severe (except in some research cases [29]).
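FE wear predictions such as those in [43][44][45] are typically driven by an Archard-type law, in which the incremental wear depth at each contact node scales with the local contact pressure and the incremental sliding distance. A minimal sketch of one load-block update follows; the wear coefficient and contact fields are illustrative assumptions, not values taken from these studies:

```python
import numpy as np

# Archard-type linear wear law applied per contact node: dh = k_w * p * ds,
# with p the local contact pressure and ds the incremental sliding distance.
k_w = 1.0e-7           # dimensional wear coefficient [mm^2/N] -- illustrative
n_nodes = 1000
rng = np.random.default_rng(0)
pressure = rng.uniform(10.0, 80.0, n_nodes)      # [MPa], assumed contact field
slip = rng.uniform(1.0e-4, 5.0e-3, n_nodes)      # micromotion per cycle [mm]

# Geometry is typically only updated every block of cycles in FE wear algorithms.
cycles_per_block = 100_000
wear_depth = k_w * pressure * slip * cycles_per_block  # accumulated depth [mm]
print(f"max nodal wear depth after one block: {wear_depth.max() * 1e3:.2f} um")
```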
It should be noted that the studies reviewed in this section were mostly visual-based (except the FE and in vitro ones), and the design parameters (such as the head size, trunnion flexural rigidity, and material combinations) differed largely among them, whereas, in Section 2.1, it was concluded that a small change in each of the design parameters changes the whole mechanical behavior of the junction. Furthermore, in some of the studies in this section, the loading history of the inspected junctions was not clear, and this may again have changed the overall conclusions. More specific studies are required to conduct one-to-one comparisons between the microgrooved and smoothed junctions to derive a more valid final conclusion. The FE study conducted by Ashkanfar et al. [43] showed higher fretting corrosion damage in microgrooved junctions; however, this needs more research, as the design parameters considered are very specific and limited, while the variations among the designs are very large and well documented [26][27][28][46].

Discussion

Surface topography is one of the key design parameters that significantly affects the performance of head-neck junctions. The surface topography is sometimes designed to purposefully enhance the junction integrity and its longevity [4,5,30]; these are commonly called microgrooved/ridged junctions. The mechanical performance of the microgrooved junctions versus smoothed junctions was recently raised as a research question [27,30,31,36,43]. Previous findings and reported results are contradictory; furthermore, there is no agreement on the microgroove geometry. This study was conducted to provide an overview of the latest findings on the microgrooved junctions, and it categorized the research studies according to two main metrics: (i) stability and integrity and (ii) wear, corrosion, and material loss. According to this overview, some research studies support the main philosophy behind the creation of the microgrooves to enhance the junction integrity [24,25,27,31], while others report that microgrooved junctions reduce the integrity [28,36]. It seems that, using experimental and/or numerical approaches, most of the reviewed studies concluded that microgrooves have a positive effect on the integrity. However, this positive influence appears to strongly depend on other design parameters such as the taper angle mismatch [34], assembly force [24,25], trunnion geometry [18], and head size/material [31,36]. The interactive influence of these parameters was also noticed, such that the influence of the microgrooves was significant in some cases and insignificant in others. This clearly shows a need for further research to provide more extensive analyses to find out the interaction of these parameters. The FE method has shown its capability in predicting the behavior of the microgrooved junctions [4,5,31,33,34,36]; hence, it can be used as a useful tool to explore changes in the design parameters and find out the possible interactions leading to a final change in the junction performance. This modeling procedure might conclude with an optimal pattern for the microgroove geometry depending on the operational and geometrical constraints together with the material combination of the problem at hand. However, the FE models are time-consuming to complete, specifically where a 3D model is to be used with the inclusion of other geometrical imperfections such as the degree of non-roundness and surface waviness.
Furthermore, most of the models are limited to taper junctions in which the surface topography of the head taper is neglected, while this parameter can change the overall conclusions, as observed in previous research [31,47]. The FE models of microgrooved junctions are still in their infancy, and they do not accurately reflect what occurs in reality. In operation, the junction is typically assembled off-axially, with head tapers for which the roughness is not negligible. Then, the junction undergoes cyclic loads, including both the frictional forces and the moments of physical activities [4,5,19,20], in the corrosive body medium. Some of these activities, together with higher body weights, might result in critical stress and strain fields, which may then change the influence of the microgrooves on the integrity of the junction [4,5]. Furthermore, due to the cyclic action of the loads from physical activities, the junction needs to be analyzed progressively, and the process of interfacial damage needs to be accounted for by future FE models. The FE work completed by Ashkanfar et al. [43] addresses the mechanical wear at the interface of a microgrooved junction; however, it does not include the head taper roughness and is limited to one geometrical and loading condition, as is the study recently conducted by Capitanu et al. [45]. The latter study [45] also neglected the inclusion of the head taper roughness in the modeling phase and was limited in its modeling geometry and material combination. The chapter on microgrooved junctions is still open, and more research needs to be conducted with similar wear algorithms and the possible inclusion of the electrochemical reactions at the interface. The inclusion of the electrochemical reactions at the interface was recently applied to a smoothed CoCr/CoCr head-neck junction by the authors [11]. In this algorithm, the mechanical and electrochemical wear equations were combined into a single algorithm. It was concluded that the electrochemical reactions are responsible for almost 32% of the total material loss at the interface, and this percentage changes with various design parameters. The basic data for such a modeling procedure can be obtained from fundamental tribocorrosion studies in the ball-on-disc configuration. The role of the mechanical and electrochemical reactions in the total tribocorrosion loss changes with various parameters, such as the imposed potential [48,49], normal force (and, thus, contact pressure) [50,51], sliding distance and its frequency [52][53][54], material couple in contact [55][56][57][58], and the solution type and its acidity [59][60][61]. However, the design parameters of head-neck junctions, including the taper angle mismatch, head size, trunnion geometry, material couples in contact, and the solution acidity, together with the presence of proteins, could affect the tribological characteristics of the system, the governing potential, and the degree of the mechanical and electrochemical damage processes. These complexities need to be comprehensively included in the experimental tests before the incorporation of the experimental data into the numerical models. In the presence of the microgrooves, the role of the mechanical and electrochemical reactions in the total material damage at the interface might be increased and/or decreased. This, together with the influence of the microgrooves on the gap opening (allowing body fluid ingress into the crevice-like geometry of the junction), needs to be addressed in future modeling studies.
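As a schematic of how such a combined algorithm can be organized, the sketch below splits the total loss into an Archard-type mechanical term and a Faraday-law electrochemical term, in the spirit of the approach in [11]; all constants and inputs are placeholder assumptions:

```python
# Schematic split of total tribocorrosion loss into a mechanical (Archard-type)
# and an electrochemical (Faraday-law) contribution. All constants below are
# placeholder assumptions, not values from [11].
FARADAY = 96485.0      # C/mol
M_ALLOY = 0.0566       # effective molar mass of a CoCr alloy [kg/mol] (assumed)
Z_EFF = 2.3            # effective dissolution valence (assumed)
RHO = 8300.0           # alloy density [kg/m^3]

def mechanical_loss(k_w, normal_force, sliding_distance):
    """Archard-type mechanical wear volume [m^3]."""
    return k_w * normal_force * sliding_distance

def electrochemical_loss(i_wear, duration):
    """Faraday-law dissolution volume [m^3] from the wear-accelerated current."""
    charge = i_wear * duration                       # [C]
    return (M_ALLOY * charge) / (Z_EFF * FARADAY * RHO)

v_mech = mechanical_loss(k_w=1e-15, normal_force=2000.0, sliding_distance=50.0)
v_chem = electrochemical_loss(i_wear=5e-7, duration=3.6e6)
print(f"electrochemical share: {100 * v_chem / (v_mech + v_chem):.0f}% of total loss")
```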
Such a modeling procedure might then generate a more conclusive comparison between microgrooved and smoothed junctions. As evidenced by the FE approach, the interaction of the parameters plays a pivotal role in determining the positive/neutral/negative influence of the microgrooves. Considering the retrieval studies, they mostly associated microgrooved junctions (with various design parameters) with higher damage intensities. Although indicative and useful, most of the retrieval studies conducted on the microgrooved junctions focused on a class of junctions with various geometrical parameters (e.g., head size and trunnion geometry), and they sometimes did not give details on the geometry of the microgrooves and/or the loading history of the junction. Keeping the strong interactions of the design parameters in mind, the microgrooved junctions need to be studied more meticulously, with possible inclusion of the complexities of both the operational and the post-operational phases. More in vitro studies also need to be conducted in order to provide possible validations for tribocorrosion-based FE algorithms under simplified oscillatory loading conditions. The validated FE models can then be reliably extended with other parameters to predict the influence of different microgroove designs on the junction longevity and durability in reality.

Funding: This research received no external funding.
FTY720 in resistant human epidermal growth factor receptor 2-positive breast cancer

The prognosis of patients with human epidermal growth factor receptor 2 (HER2)-positive breast cancer has considerably improved. However, no reliable treatment besides anti-HER2 strategies has been available. FTY720, a small-molecule compound used for treating refractory multiple sclerosis, has been reported to have beneficial effects against cancers. We therefore evaluated the efficacy of FTY720 in trastuzumab-resistant breast cancer cells and investigated the possible mechanism involved. This study evaluated morphological changes after FTY720 treatment. Antiproliferative WST-1 assays and LDH Cytotoxicity Assay Kits were used to determine the treatment effects of drugs, whereas Western blot analysis was used to evaluate protein expression. Apoptotic events were investigated through annexin V staining and TUNEL assays using flow cytometry. FTY720 was effective in trastuzumab-resistant breast cancer cell lines despite the presence of PIK3CA mutation. In a xenograft mouse model, FTY720-treated groups showed statistically significantly poorer HCC1954 xenograft growth in vivo compared with the control group. Our findings suggest that FTY720 can overcome resistance to trastuzumab therapy in patients with HER2-positive breast cancer, and FTY720 plus trastuzumab might offer even better efficacy in vitro and in vivo.

Results

Characterization of HER2-positive breast cancer cells. Five HER2-positive breast cancer cell lines were used herein. Apart from HER2 protein overexpression, BT-474 and BT-474-HR1 cells were also characterized by estrogen receptor expression (Fig. 1a), unlike the other three cell lines. BT-474-HR1 and MDA-MB-453 cells showed relatively low HER2 protein expression. We confirmed their HER2 gene amplification by conducting HER2 FISH in BT-474-HR1 and MDA-MB-453 cells. The former cell line showed a HER2 copy number of 25.50 and a HER2/CEP17 ratio of 6.89. The latter cell line showed a HER2 copy number of 7.05 and a HER2/CEP17 ratio of 2.04 (Figure S1). BT-474 and SK-BR-3 cells were considered trastuzumab-sensitive 23,24, whereas MDA-MB-453 and HCC1954 cells were reported to be trastuzumab-resistant 12,22. These cells were treated with trastuzumab at concentrations ranging from 0.5 to 16 μg/mL, showing different responses (Fig. 1b). PI3K mutations are the most frequent alterations detected among trastuzumab-resistant populations; therefore, the PIK3CA status of the three trastuzumab-resistant cell lines was examined. Although BT-474 cells were reported to contain a PIK3CA mutation in exon 2 (K111N), we still considered BT-474 a PIK3CA wild-type cell line based on a previous review 25. Table 1 summarizes the characteristics of all cells utilized, with items including estrogen receptor, HER2, PIK3CA gene, and sensitivity to trastuzumab.

FTY720 is effective against trastuzumab-resistant breast cancer cells and triggers programmed cell death. Two trastuzumab-sensitive and three trastuzumab-resistant cell lines were treated with FTY720 at concentrations ranging from 0.625 to 20 μM (Fig. 2a). Possible IC50 values were between 5 and 10 μM for BT-474 cells and between 2.5 and 5 μM for SK-BR-3 cells. These results were similar to previously published data 21. For the three trastuzumab-resistant lines, IC50 values were all between 5 and 10 μM. The morphology of BT-474-HR1, MDA-MB-453, and HCC1954 cells was determined after FTY720 treatment at concentrations of 0, 5, and 20 μM (Fig. 2b). FTY720 inhibited cell growth at 20 μM, though not much effect was observed at 5 μM.
These findings correlated with the results of the antiproliferative assay. Moreover, cytoplasmic vacuolization and loss of integrity were observed in the cells after FTY720 treatment at the effective concentration. Considering that the presence of cytoplasmic vacuolization has been discussed in certain types of cell death 26 and that FTY720 did induce apoptosis of mouse breast cancer cells 17, we examined proteins involved in apoptosis and autophagy. Lysates of the three trastuzumab-resistant cell lines were collected 24 h after mock, trastuzumab, or FTY720 treatment. Our results showed prominent overexpression of cleaved caspase-3, cleaved caspase-9, cleaved PARP, and LC3-II after FTY720 treatment (Fig. 3a, Figure S5a). FTY720 was thus suggested to turn on programmed cell death in these cells. We substituted BEZ235, a dual PI3K and mTOR inhibitor with the ability to induce apoptosis in breast cancer cells 27, for FTY720 in the aforementioned protocols to determine whether the same trend in protein expression existed (Fig. 3b, Figure S5b). BEZ235 treatment upregulated the expression of cleaved caspase-3, cleaved caspase-9, cleaved PARP, and LC3-II. However, the protein expression did not seem as prominent as that induced by FTY720. Both drugs were tested in HCC1954 cells at IC50 on the same panel. Accordingly, FTY720 induced greater apoptotic and autophagic signaling compared with BEZ235 in terms of increased expression of cleaved caspase-3, cleaved PARP, and LC3-II (Fig. 3c). We further validated FTY720-induced apoptotic events using two methods. Cells were stained with annexin V and analyzed using flow cytometry (Fig. 4a, Figure S5c). The percentage of apoptotic cells increased substantially after FTY720 treatment (e.g., 45.6% and 27.8%). These results were confirmed by incubating HCC1954 cells with BEZ235. Accordingly, our results showed apoptotic trends similar to those with FTY720 (Figure S2). However, the percentage of apoptotic cells was lower in HCC1954 cells compared with that in the other two cell lines. We hypothesized that FTY720 might not have reached its maximum effect in HCC1954 cells. Therefore, we attempted exposing HCC1954 cells to an FTY720 concentration higher than the IC50 or prolonging the incubation time with FTY720 from 24 to 48 h. Both adjustments contributed to a higher percentage of apoptotic cells (Figure S3), suggesting differences in the peak reaction timing among cells. Furthermore, TUNEL assays were used to detect apoptotic DNA fragmentation. All three trastuzumab-resistant cell lines demonstrated increased DNA fragmentation after exposure to FTY720 (Fig. 4b, Figure S4).

FTY720 potentiates death of resistant cells through concurrent apoptotic pathway activation and autophagic pathway inhibition. An electron microscope was utilized to determine intracellular morphological changes in the affected cells. HCC1954 cells treated with 20 µM FTY720 were collected and prepared 24 h later. These cells contained multiple folded layers of membranes and devoured organelles (Fig. 5a). Such structures were similar to autophagosomes surrounding a portion of the cytoplasm, as described in the literature 28, and were implicated in the autophagic process. Lysates of FTY720-treated HCC1954 cells were also collected. p62 and LC3-II expressions increased with exposure time, a finding similar to that for bafilomycin A1-treated cells but not rapamycin-treated cells (Fig. 5b, Figure S6a).
This is because autophagy inhibitors, such as bafilomycin A1, promote p62 accumulation, whereas autophagy inducers, such as rapamycin, gradually decrease p62 expression. Protein stability tests for p62 were then performed after FTY720, bafilomycin A1, or rapamycin treatment in HCC1954 cells. Following cycloheximide addition 3 h after exposure to the target drugs, protein translation was halted. Protein analysis was performed at the scheduled time points. Trends in p62 stability were similar between FTY720- and bafilomycin A1-treated cells but again different from rapamycin-treated cells (Fig. 5c, Figure S6b). The expression of p62 messenger RNA in FTY720-treated HCC1954 cells was then examined using PCR. Fold changes in messenger RNA between 0 and 2 h did not significantly differ (Fig. 5d). The aforementioned findings suggested that FTY720-mediated changes in p62 protein expression were independent of protein translation. To confirm the role of FTY720 as an autophagy inhibitor, HCC1954 cells were co-treated with FTY720 and a known autophagy inhibitor to determine whether the antiproliferative effects of FTY720 could be reversed. After co-treatment with 3-methyladenine or bafilomycin A1, our results showed that the FTY720-mediated antiproliferative effects were not reversed by the other autophagy inhibitors (Fig. 5e). We also elucidated the effects of apoptosis inhibition on FTY720-mediated antiproliferation. After HCC1954 cells were co-treated with FTY720 and the pan-caspase inhibitor Z-VAD-FMK, the FTY720-mediated antiproliferative effects were not reversed (Fig. 5f). Through Western blot analysis, the cleavage of caspase-3 was shown to be halted by adding Z-VAD-FMK 1 h before FTY720 treatment. Accumulation of caspase-3 fragments with high molecular weight was observed, which could indicate blockage of the caspase-dependent pathway [29][30][31]. In the same experiments, increased expression of p62 and LC3-II was noted after apoptotic pathway blockage (Fig. 5g). This suggested that autophagic rescue could not avert cell death following FTY720 treatment, which not only triggers apoptosis but also concurrently inhibits the autophagic pathway.

FTY720 in combination with trastuzumab provides better outcomes against trastuzumab-resistant HER2-positive breast cancer cells in vitro and in vivo. Given the importance of the HER2 signaling pathway in HER2-positive breast cancer, HER2-directed therapies have still been the primary strategy employed to deal with progressive disease after trastuzumab treatment 5,7,13. We examined whether adding trastuzumab could potentiate the effects of FTY720. Trastuzumab was dosed consistently at 2 μg/mL in each group and cell line, whereas the dosage of FTY720 was adjusted according to the IC50 of each cell line. The results showed that co-treatment with FTY720 and trastuzumab significantly increased the antiproliferative effects compared with monotherapy (Fig. 6a). We used similar designs and utilized LDH-Cytotoxicity Assay Kits to measure the cytotoxic effects. Accordingly, adding trastuzumab significantly potentiated the cytotoxic effects of FTY720 in MDA-MB-453 cells. Although adding trastuzumab did not significantly potentiate the effects of FTY720 in BT-474-HR1 and HCC1954 cells, FTY720 by itself was still able to provide considerable cytotoxic effects (Fig. 6b). The aforementioned results suggested that continuous blockage of the HER2-dependent signaling pathway was still beneficial despite the failure of HER2-directed therapies.
We examined protein expression after FTY720 and trastuzumab treatment. Notable downregulation of phospho-ERK was observed after FTY720 treatment in the three resistant cell lines (Fig. 6c, Figure S6c). Trastuzumab had no effect on the expression of phospho-ERK, but the presence of FTY720 caused statistically significant downregulation of phospho-ERK in certain cell lines. The negative regulation of phospho-ERK might be attributed to the opposing effects of protein phosphatase 2A (PP2A) activation 32,33, with studies recognizing FTY720 as a PP2A activator 21,34. We then investigated the in vivo effects of FTY720 by establishing a xenograft mouse model. Accordingly, HCC1954 cells were inoculated into BALB/c nude mice, which were then randomly assigned to four treatment groups: control, trastuzumab, FTY720, and FTY720 plus trastuzumab. Treatments were administered via intraperitoneal injection for 23 days, and the concentrations of each drug used are detailed in the "Heterotopic xenograft mouse model and in situ TUNEL assays" section. The trastuzumab group had modestly poorer tumor growth compared with the control group (Fig. 7a). In contrast, the FTY720 group had significantly poorer xenograft growth compared with the control group (P = 0.0093), as did the FTY720 plus trastuzumab group compared with the control group (P = 0.0253). Two mice died during the treatment period, one in the control group and the other in the trastuzumab group. Xenografts were then harvested and prepared as tissue sections. To determine drug-induced apoptotic events in vivo, tumor slides were stained with Hoechst 33342 dye, subjected to TUNEL assays, and observed using fluorescence microscopy. The results showed prominent green signaling, indicating apoptotic events, in the FTY720-treated groups (Fig. 7b). In the FTY720 plus trastuzumab group, obvious apoptotic events were still observed even though these xenografts were much smaller than those in the other three groups. Some green signals were not accompanied by blue stains indicating the presence of a nucleus. To determine whether false positive stains existed, the same tissue sections were stained with HER2 antibody, outlining the structure of the membranes, and then examined using fluorescence microscopy (Fig. 7c). Within areas without DAPI stains, we confirmed the co-existence of green and red signals, proving that these were not false positive apoptotic events.

Discussion

FTY720, which serves as an immunosuppressive agent, has been approved for the treatment of relapsing-remitting multiple sclerosis. Nonetheless, its ability to confer antineoplastic effects has been gradually discovered. From hematologic malignancies to several types of solid tumors, FTY720 has shown its potential role in anti-cancer treatments in vitro and in vivo through several postulated mechanisms [34][35][36][37][38]. The present study demonstrated that FTY720 could overcome resistance to trastuzumab in HER2-positive breast cancer cells. FTY720 induced prominent apoptotic events in trastuzumab-resistant breast cancer cells. Most importantly, FTY720 acted as an autophagy inhibitor in resistant cells, which further potentiated its cytotoxic effects. Moreover, the combination of FTY720 and trastuzumab offered more potent effects than FTY720 alone based on our analysis of antiproliferative activity and cytotoxicity. The treatment effects of FTY720 plus trastuzumab were also validated in a xenograft mouse model.
Figure 5. FTY720 overcomes resistance to trastuzumab by influencing the regulation of apoptosis and autophagy. (a) HCC1954 cells were collected and prepared 24 h after FTY720 treatment. Cell morphology was assessed using an electron microscope. (b) The autophagy-related proteins p62 and LC3-II were analyzed after HCC1954 cells were treated with FTY720, rapamycin, or bafilomycin A1. (c) The protein stability test was performed using cycloheximide chase assays after treatment with the indicated drugs in HCC1954 cells. Lysates were collected every hour up to 5 h after cycloheximide treatment. (d) mRNA levels of p62 in HCC1954 cells were evaluated using reverse transcription quantitative real-time PCR 0 and 2 h after FTY720 incubation. (e) Cells were treated with 3-methyladenine or bafilomycin A1, with or without FTY720. (f) Cells were treated with the pan-caspase inhibitor Z-VAD-FMK, FTY720, or a combination of both drugs. In the combination group, cells were pretreated with Z-VAD-FMK for 1 h followed by FTY720 treatment. (g) Lysates were collected and analyzed 24 h after FTY720 treatment. In the combination group, cells were treated with Z-VAD-FMK 1 h before FTY720. Numbers under the cleaved caspase-3 section indicate the intensity of the blot indicated by the arrow. All experiments were repeated at least three times except for images captured using the electron microscope. During Western blot analysis, equal loading of proteins was verified using beta-actin, and numbers under each Western blot indicate the intensity of the protein relative to that at 0 h or of the control. During evaluation of antiproliferative effects, the percentage of WST-1 absorbance in cells was measured 72 h after drug incubation. Each plot indicates the mean value, whereas error bars indicate the standard error of the mean. The concentrations of FTY720, rapamycin, bafilomycin A1, 3-methyladenine, and Z-VAD-FMK were 10 μM, 100 nM, 1 nM, 20 μM, and 20 μM, respectively. P value: **P < 0.01 and ***P < 0.001. CHX: cycloheximide, F plus VAD: FTY720 plus Z-VAD-FMK, n.s.: not significant.

Patients with HER2-positive breast cancer carrying PI3K mutations have been shown to have a poorer prognosis 39. Although PI3K/AKT/mTOR pathway activation contributes toward resistance to anti-HER2 treatment, drugs that offer significant clinical benefits have still been lacking. BEZ235, a potent dual PI3K and mTOR inhibitor, has been proven effective in trastuzumab-resistant cells 40. Unfortunately, this drug has been abandoned because of its toxicity. The present study validated the efficacy of BEZ235 in three trastuzumab-resistant cell lines. Accordingly, protein analysis showed that BEZ235 induced not only apoptosis but also overexpression of autophagy-related proteins. However, after exposure to BEZ235 and FTY720 at IC50, more prominent overexpression of cleaved caspase-3, cleaved PARP, and LC3-II was observed with FTY720. This suggests that FTY720 better promotes apoptosis- or autophagy-related cell death. Unlike BEZ235, FTY720 has had acceptable toxicity profiles among humans. Moreover, the FTY720 dose selected in our mouse model (25 mg/kg/week) did not exceed those reported in multiple sclerosis or solid tumor animal models (ranging from 7 to 70 mg/kg/week) 21,35,41,42. These properties make FTY720 a potential candidate for HER2-positive human breast cancer trials.
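For context, IC50 values such as those quoted from the WST-1 assays are commonly obtained by fitting a four-parameter Hill (log-logistic) model to dose-response data; the following minimal sketch uses synthetic viability values, not the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    """Four-parameter log-logistic (Hill) dose-response model."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

# Synthetic viability data (% of control) over an FTY720-like dose range [uM].
conc = np.array([0.625, 1.25, 2.5, 5.0, 10.0, 20.0])
viability = np.array([98.0, 95.0, 88.0, 60.0, 22.0, 5.0])

popt, _ = curve_fit(hill, conc, viability, p0=[0.0, 100.0, 5.0, 2.0], maxfev=10_000)
bottom, top, ic50, slope = popt
print(f"fitted IC50 = {ic50:.2f} uM (Hill slope = {slope:.2f})")
```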
Figure 6. Effects of FTY720 in breast cancer cell lines, alone or in combination with trastuzumab. (a) The three cell lines were treated with DMSO, trastuzumab, FTY720, or FTY720 plus trastuzumab. The percentage of WST-1 absorbance was measured 72 h after treatment. (b) Cytotoxicity was determined using LDH-cytotoxicity assay kits 72 h after treatment. The cytotoxicity index was calculated, with the value of the control group treated with DMSO adjusted to 1. (c) Lysates were collected and analyzed for phospho-ERK1/2, ERK1/2, and beta-actin 24 h after treatment. Equal loading of proteins was verified using beta-actin. Numbers under each blot indicate the intensity of the blot relative to the control. The concentration of trastuzumab used in all cell lines was 2 μg/mL. The concentrations of FTY720 were 7.5, 7.5, and 10 μM for BT-474-HR1, MDA-MB-453, and HCC1954, respectively. Regarding antiproliferative and cytotoxic effects, results are presented as mean ± SEM, and experiments were repeated at least three times. P value: *P < 0.05, **P < 0.01, and ***P < 0.001. F plus T: FTY720 plus trastuzumab.

FTY720 induced prominent apoptotic signaling in trastuzumab-resistant cells, which was confirmed through Western blot analysis, annexin V staining followed by flow cytometry, and TUNEL assays. Previous studies had also shown that FTY720 contributed to the apoptosis of mouse breast cancer cells, identified through electron microscopy 17. Here, we utilized electron microscopy to determine apoptotic features in HCC1954 cells 24 h after FTY720 treatment. However, instead of typical features indicating apoptosis, our results revealed autophagy-associated structures, such as multiple folded layers of membranes and devoured organelles. Though such a result does not rule out apoptotic events, it may reflect the different timings required to observe different kinds of programmed cell death. In fact, the findings captured by electron microscopy did suggest that FTY720 was an autophagy modulator in trastuzumab-resistant breast cancer cells. Studies have shown that FTY720 acts mostly as an autophagy inducer in other types of cancer cells 38,43. Therefore, we thoroughly examined the role of FTY720 in autophagy in HCC1954 cells. Similar to the autophagy inhibitor bafilomycin A1, FTY720 increased the expression of p62 and LC3-II with exposure time. This result clearly differed from that in HCC1954 cells treated with rapamycin, a well-known autophagy inducer. Moreover, p62 protein stability analysis showed that changes in p62 expression after FTY720 treatment were similar to those with bafilomycin A1 but not rapamycin. Furthermore, after co-treating HCC1954 cells with FTY720 and an autophagy inhibitor, our results showed that the antiproliferative effects of FTY720 in such cells could not be reversed by adding the autophagy inhibitor. Notably, cell growth still could not be restored by adding a pan-caspase inhibitor that works against FTY720-induced apoptosis in trastuzumab-resistant breast cancer cells. In some conditions, apoptotic events are not reversed by blockage of the caspase-dependent pathway, given that the cell survival mechanism is shunted toward the autophagy-dependent pathway 44.
This phenomenon was confirmed by the even greater overexpression of p62 and LC3-II after adding the pan-caspase inhibitor to HCC1954 cells treated with FTY720. Given its ability to trigger apoptosis and inhibit autophagic pathways, FTY720 could block the escape mechanism of cells, which depends on autophagy during the activation of apoptosis. This led to prominent cell death even in trastuzumab-resistant cells carrying the notorious PIK3CA mutation. The crosstalk between apoptosis and autophagy may contribute to the resistance to anti-HER2 therapies in HER2-positive breast cancer 45. For a drug that induces apoptosis and inhibits autophagy, this is a rationale for further elucidating the efficacy of FTY720 in humans.

Figure 7. (a) Mice were treated with PBS, trastuzumab, FTY720, or FTY720 plus trastuzumab. Xenografts were harvested 24 h after the last treatment. Each plot indicates the mean increase in tumor volume from the first day of treatment, whereas error bars indicate the standard error of the mean. Differences between the four groups were analyzed using ANOVA with a mixed-effect model. Differences in control versus FTY720 and control versus FTY720 plus trastuzumab were statistically significant. P value: *P < 0.05 and **P < 0.01. (b) Tumor tissue sections were evaluated using TUNEL assays (green) and counterstained with Hoechst 33342 dye (blue, indicating nuclei). They were then examined using fluorescence microscopy. The top three tissue sections with the most abundant green signals in each group were presented. (c) Using the same tissue sections, cell membranes were outlined using HER2 antibody with a conjugated secondary antibody (red) and then examined using fluorescence microscopy. Bar, 100 μm. F plus T: FTY720 plus trastuzumab.

Continuous blockage of the HER2 protein with anti-HER2 monoclonal antibodies still plays an important role in patients whose diseases continue to progress after anti-HER2 therapies 46. Again, this was confirmed in vitro and in vivo by adding trastuzumab to FTY720. Accordingly, our results showed that FTY720 plus trastuzumab was more efficient than FTY720 alone in most settings in terms of antiproliferative effects, cytotoxic effects, and the ability to control tumor growth. These effects could be attributed to trastuzumab-related HER2 blockage and FTY720-mediated blockage of intracellular signaling activated by other tyrosine kinase receptors. The dual blockade more thoroughly inhibited the growth of tumor cells. In the xenograft mouse model, FTY720 alone provided significantly greater inhibition of HCC1954 xenograft growth compared with the control (PBS). FTY720 plus trastuzumab might even provide better effects than FTY720 alone in terms of mean tumor size reduction. However, this difference did not reach statistical significance (P = 0.0824), probably owing to the limited number of mice. Limited effects were observed in the trastuzumab-alone group, which might have been due to the antibody-dependent cell-mediated cytotoxicity (ADCC) of trastuzumab 47. All these results highlight the benefit of combining FTY720 and trastuzumab for trastuzumab-resistant breast cancer. This combination regimen was also well tolerated in the mouse model, considering that no death event was recorded in the combination group during the 4-week treatment period. FTY720 has been reported to mediate immunosuppression via inhibiting lymphocyte emigration from lymphoid organs 15. This may raise concerns about the combination strategy of FTY720 and trastuzumab.
Since ADCC is mainly dependent on innate immune cells such as NK cells, neutrophils, and macrophages 48,49, but not on T lymphocytes, the FTY720-mediated immunosuppression will have little impact on ADCC. In addition, in our xenograft animal study using BALB/cAnN.Cg-Foxn1nu/CrlNarl mice, which lack T cells, we could still observe ADCC effects similar to those in the previous publication 47. It is believed that the ADCC induced by anti-HER2 monoclonal antibodies in humans should be more prominent. Therefore, we believe FTY720 together with anti-HER2 monoclonal antibodies may provide the best chance for this drug to be developed in human studies.

Conclusion

The present study showed that FTY720 possesses the ability to overcome resistance to trastuzumab therapy in HER2-positive breast cancer with or without PIK3CA mutation. Through effects involving apoptosis and autophagy, FTY720 alone or in combination with trastuzumab caused death in different trastuzumab-resistant breast cancer cells, with FTY720 plus trastuzumab offering the best efficacy. Considering that FTY720 has been used for treating refractory multiple sclerosis in humans, its safety profiles have been well established. Thus, our results suggest that FTY720 can be considered a potential drug to be developed in early-phase clinical trials for patients with HER2-positive breast cancer whose diseases are resistant to trastuzumab.

Materials and methods

Cell lines, cell culture, and reagents. BT [...] All experiments were performed with mycoplasma-free cells. All cell lines have been authenticated using short tandem repeat profiling within the last three years. Trastuzumab, which was purchased from the pharmacy at National Cheng Kung University Hospital, was manufactured by Genentech (San Francisco, CA, USA) and diluted with phosphate-buffered saline (PBS). FTY720 was purchased from Luminescence Technology Corporation (Taiwan) and prepared with dimethyl sulfoxide (DMSO). BEZ235 and Z-VAD-FMK were purchased from Selleck Chemicals (Houston, TX, USA) and were both prepared with DMSO. Cycloheximide, bafilomycin A1, rapamycin, and 3-methyladenine were purchased from Merck KGaA and prepared with DMSO.

HER2 fluorescence in situ hybridization (FISH). Cells were harvested by trypsinization, fixed with formalin, and embedded in paraffin for slide preparation. Formalin-fixed paraffin-embedded samples were then cut into 4 μm sections and placed on slides. These samples were further dehydrated by a xylene washing step followed by 100% ethanol. Slides were incubated with 0.2 N hydrochloric acid followed by a distilled water wash, then incubated for 8-10 min with VP2000 protease solution (Abbott, Abbott Park, IL, USA) followed by 5 min with pretreatment wash buffer. The dehydration process was performed with increasing ethanol concentrations (70%, 85%, and 100%). HER2 and chromosome enumeration probe 17 (CEP17) probes (PathVysion HER2 DNA Probe [...]

PIK3CA gene sequencing. Exons 9 and 20 of the PIK3CA gene were analyzed through polymerase chain reaction (PCR) amplification of genomic DNA and direct sequencing of the PCR products. Primers for PIK3CA exons 9 and 20 were as follows: exon 9: TTG CTT TTT CTG TAA ATC ATC T (forward) and CTG CTT TAT TTA TTC CAA TAG G (reverse); exon 20: CTC AAT GAT GCT TGG CTC TG (forward) and TGG AAT CCA GCG TGA GCT TTC (reverse). Sequencing was performed using an ABI 3500 Dx Genetic Analyzer.

In vitro antiproliferative activity analysis.
Cells were seeded at concentrations of 1 × 10^4–3 × 10^4 cells/200 μL/well in 96-well plates for 24 h and treated with the indicated agents for 72 h. After the treatments, the WST-1 proliferation assay was performed according to the manufacturer's instructions. Briefly, 10 µL of WST-1 reagent (Takara Bio Inc., Kusatsu, Shiga, Japan) was added into each well and incubated for 1 h. Results were determined by measuring the absorbance of the solution at a wavelength of 450 nm using a spectrophotometer.

In vitro apoptosis analysis. To [...]

Reverse transcription quantitative real-time PCR. Total RNA was extracted using the single-step TRIzol method (Thermo Fisher, Waltham, MA, USA) according to the manufacturer's protocol. For reverse transcription PCR, the cDNA was synthesized from 0.05 μg of total RNA using the Reverse Transcription System (Promega, Fitchburg, WI, USA). p62 mRNA expression was measured through quantitative real-time PCR with SYBR green reagents (Thermo Fisher) using the StepOne Real-Time PCR System (Thermo Fisher). GAPDH gene expression was used as an endogenous control. Primers for GAPDH were AGGTC ATCCC TGAGC TGAAC GG (forward) and CGCCT GCTTC ACCAC CTTCT TG (reverse). Expression levels were calculated using the 2^−ΔΔCt ratio 51. The following p62 PCR primers were used (Genomics, Taiwan): GCA CCC CAA TGT GAT CTG C (forward) and CGC TAC ACA AGT CGT AGT CTG G (reverse).

Protein stability test. Protein stability was evaluated using cycloheximide chase assays. Briefly, cells were seeded for 24 h and treated with or without the indicated drugs, including FTY720, rapamycin, and bafilomycin A1, for another 3 h. The cells were then treated with cycloheximide, with cell lysates being collected at the indicated time points. p62 and beta-actin expressions were analyzed using Western blot analysis 52.

In vitro cytotoxicity analysis. BT-474-HR1, MDA-MB-453, and HCC1954 cells were seeded into 96-well plates and treated with DMSO (as control), trastuzumab, FTY720, or trastuzumab plus FTY720. Cytotoxicity was determined using the LDH-Cytotoxicity Assay Kit II (Abcam, Cambridge, MA, USA) according to the manufacturer's instructions and quantified by measuring the absorbance of the solution at a wavelength of 450 nm using a spectrophotometer. The cytotoxicity index of each treatment group was calculated using the equation (test sample − low control) / (high control − low control), where "test sample" indicates the level of the different groups, "low control" indicates the level of the reagent (minimal LDH value), and "high control" indicates the level of the DMSO group (maximal LDH value).
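The two quantitative readouts described in these methods reduce to one-line formulas; a minimal sketch with hypothetical absorbance and Ct values:

```python
def cytotoxicity_index(test, low_control, high_control):
    """LDH cytotoxicity index: (test - low) / (high - low)."""
    return (test - low_control) / (high_control - low_control)

def fold_change_ddct(ct_target_trt, ct_ref_trt, ct_target_ctl, ct_ref_ctl):
    """Relative expression via the 2^-ddCt method (GAPDH as endogenous control)."""
    ddct = (ct_target_trt - ct_ref_trt) - (ct_target_ctl - ct_ref_ctl)
    return 2.0 ** (-ddct)

# Hypothetical absorbance and Ct values
print(cytotoxicity_index(test=0.62, low_control=0.20, high_control=1.10))  # ~0.47
print(fold_change_ddct(24.1, 18.0, 24.0, 18.1))                            # ~0.87
```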
Reducing the Paging Overhead in Highly Directional Systems

New Radio (NR) supports operation at high-frequency bands (e.g., millimeter-wave frequencies) by using narrow-beam directional transmissions to compensate for the high propagation losses at such frequencies. Due to the limited spatial coverage of each beam, the broadcast transmission of paging in NR is performed using beam sweeping, which takes multiple time slots. Thus, the paging procedure used in NR substantially increases the downlink resource overhead of a network with directional transmissions. Such overhead further increases as we move higher in the frequency bands, such as the terahertz bands, which are viewed as one of the potential candidates for future-generation networks. Therefore, the NR based paging solution is infeasible for supporting highly directional systems. In this paper, we propose a novel minimal feedback enabled paging mechanism which, instead of using all the beams for paging transmissions, only activates the sub-set of beams having one or more UEs under their coverage. UE presence indications are implemented to identify the correct set of beams to be activated. Our analytical analysis and simulations show that the proposed solution significantly reduces the downlink paging overhead compared to the NR based solution (e.g., more than 80% gain for a system supporting 64 beams at a UE density of 200 UEs per paging occasion) while incurring minimal energy cost at the UE side.

I. INTRODUCTION

New Radio (NR), the 3rd Generation Partnership Project (3GPP) Radio Access Network (RAN) for fifth-generation (5G) cellular networks, has a flexible, scalable, and forward-compatible design that supports a wide range of carrier frequencies, deployment options, and use cases. NR can deliver very high data rates, e.g., in the order of multiple gigabits per second, due to the availability of larger bandwidth enabled by operation at higher frequency bands, where carrier frequencies up to 52.6 GHz are supported in Release 15/16 and an extension to 71 GHz is being considered in Release 17. Subsequently, the terahertz bands (e.g., 100 GHz to 10 THz) are being considered as a potential enabler of ultra-high data rates in beyond-5G or sixth-generation (6G) networks [1], [2]. Though higher frequencies offer large chunks of the radio spectrum, the high propagation loss at these frequencies necessitates the use of high antenna gain with narrow-beam directional transmissions. In such directional systems, since the spatial coverage of each Transmission (TX) beam is limited, multiple beams are needed for transmitting DownLink (DL) common channels (e.g., system information, paging, etc.) to cover the entire cell area. In this paper, we focus on the paging procedure in these directional systems.

To reduce UE power consumption, a discontinuous reception (DRX) mechanism is used in NR (similar to LTE/LTE-A) [3]. A UE's Radio Resource Control (RRC) connection is released, and the UE enters either the RRC IDLE or INACTIVE state, when there is no scheduled data. The network then uses paging transmissions to inform the UE about any incoming calls/data, system information changes, Earthquake and Tsunami Warning System (ETWS) notifications, or the Commercial Mobile Alert Service (CMAS). A paging message is transmitted over all the cells belonging to the list of Tracking Areas (in Core Network (CN)-initiated paging) or Radio-Network Areas (in RAN-initiated paging) for which the UE is registered.
The UE is then configured to periodically wake up once in every DRX cycle (also known as the paging cycle [4]) to monitor whether there is any paging message intended for the UE. In a directional NR system, a next-Generation Node B (gNB) covers the entire cell by transmitting the same paging message over all the supported beams via beam sweeping. The number of concurrent high-gain beams that a gNB can support may be limited by the cost and complexity of the utilized transceiver architecture. At high frequencies, the number of concurrent high-gain beams supported in practice is much smaller than the total number of beams used to cover the cell area [5]. Therefore, paging transmissions take place over different time slots. As the carrier frequency increases, the number of beams required to cover the entire cell increases due to the higher beamforming gain required to overcome the propagation loss limitations. This is a crucial challenge, since the network's resource requirement for paging transmission increases with the number of beams. For example, based on the analysis provided in [5] and [6], the resource requirement may exceed the system capacity (>100%) to support a high (e.g., 128 or higher) number of beams as the paging rate increases, making the NR based paging procedure incapable of supporting highly directional systems.

The paging resource overhead problem in directional systems has been considered in a few earlier works [7], [8]. In [7], the paging resource overhead is reduced using shorter UE IDs, and the proposed mechanism is shown to achieve a 15% gain in gNB power savings with 20% shorter UE IDs. The authors in [8] proposed the use of different paging cycles for non-delay-sensitive and delay-sensitive UEs: a short paging cycle using only a sub-set of beams for the delay-sensitive UEs, and a long paging cycle using all the beams for the non-delay-sensitive UEs. However, the proposed solution can only be supported in RAN-initiated paging, where the gNB retains the UE context. In this paper, we present a minimal feedback enabled paging mechanism which can support both CN-initiated and RAN-initiated paging. In the proposed solution, instead of using all the beams for paging transmissions, only a sub-set of beams is used (activated) based on UE presence. In every paging cycle, beam activation is based on the history of UE presence under the beams' coverage, where a UE presence is indicated using a minimal set of resources. A similar mechanism is also proposed in [5], where UEs are configured to send presence indications in every paging cycle, incurring a significant energy burden at the UE side in order to reduce the DL paging overhead. Our approach curtails the DL paging overhead significantly while at the same time minimizing the energy cost at the UE side. The main contributions of this paper are summarized as follows:

• A proposal of a baseline paging solution that minimizes the UE power consumption associated with UE presence indications compared to the existing solution [5].
• An analytic model and derivation of the average number of active beams and UE presence indications over a defined duration for the baseline paging solution.
• Further enhancements to the baseline paging solution for more efficient UE power consumption associated with UE presence indications.
• An extensive evaluation of the proposed solutions in terms of paging resource utilization and UE power consumption, compared to the legacy 3GPP NR solution and the literature [5].

The remainder of the paper is organized as follows.
Section II reviews the paging procedure for NR based directional systems. In Sec. III, the proposed baseline minimal feedback paging solution, its analytic model, and further enhancements to reduce UE power consumption are presented. The analytic model verification and evaluation are then presented in Sec. IV. The performance of the proposed solutions is evaluated in Sec. V. Finally, conclusions are summarized in Sec. VI.

II. PAGING IN 3GPP NR BASED DIRECTIONAL SYSTEMS

The network may configure a paging cycle with multiple Paging Occasions (POs) depending on the paging load. UEs are configured by the network with the paging cycle length, the number of paging frames in a paging cycle, and the number of POs in a paging frame [4]. Using such configuration along with the associated UE ID, a UE determines the paging frames and the POs to be monitored [3]. In this paper, a group of UEs monitoring the same POs is referred to as a paging group. In 3GPP NR, in directional operations, each PO is a set of Physical Downlink Control Channel (PDCCH) monitoring occasions (PMOs) and can consist of multiple time slots (e.g., subframes or OFDM symbols) [3]. A paging transmission, which can be paging Downlink Control Information (DCI) consisting of either a Short Message (for system information updates, ETWS notifications, and CMAS) or scheduling information for the paging message over the Physical Downlink Shared Channel (PDSCH), or the paging message over the PDSCH itself, is repeated on all DL TX beams by a gNB. To enable this, in each PO, every supported DL TX beam is allocated at least one dedicated PMO. An example is shown in Fig. 1(a), where, in each PO, paging DCI transmissions by a gNB are performed over all the supported DL TX beams (or over the PMOs associated with all the supported DL TX beams). The UEs are configured with the parameters needed to identify the association between the PMOs and the gNB's TX beams [3]. To determine the PMO that needs to be monitored in a PO, a UE is required to determine the DL TX beam over which it would receive the paging information. Thus, beam searching is incorporated in directional operations, as shown in Fig. 1(a). The UE can use the periodic Synchronization Signal Blocks (SSBs) transmitted by the gNB over all the supported DL TX beams, where the UE wakes up before its PO and measures the signal quality of the SSBs from each of the gNB's TX beams to determine the best DL TX beam. If the UE also supports beamforming, it can use different RX beams to measure the signals from the gNB to identify the best RX beam as well. Note that in the remainder of the paper, the term beam/beams refers to DL TX beam/beams.

III. PROPOSED MINIMAL FEEDBACK ENABLED PAGING

We introduce the newly defined concept of a set of active beams to mitigate the increasing resource overhead problem associated with paging transmissions in directional systems. For any PO, a gNB performs paging transmissions associated with the paging group of that PO only over a defined set of active beams. In order to identify the set of active beams, UpLink (UL) feedback transmissions, called Paging Activation Requests (PARs) herein and initiated by UEs in the IDLE/INACTIVE state, are required. To enable PARs, beam- and paging-group-specific time-frequency resources (e.g., a one-to-one mapping between a resource and a beam for each paging group, as sketched below) are allocated. These resources can be allocated in every paging cycle, with time resources preceding those of the POs.
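As an illustration of the beam- and paging-group-specific resource allocation, one possible one-to-one indexing is sketched below; the indexing scheme is hypothetical, since the actual allocation would be configured by the network:

```python
def par_resource(beam_index: int, paging_group: int, n_beams: int, n_groups: int) -> int:
    """Map (beam, paging group) to a PAR time-frequency resource index.

    Hypothetical one-to-one indexing for illustration only; in practice the
    allocation would be configured by the network (e.g., via system information).
    """
    assert 0 <= beam_index < n_beams and 0 <= paging_group < n_groups
    return paging_group * n_beams + beam_index

# UE side: indicate presence on the best detected beam for its paging group.
idx = par_resource(beam_index=13, paging_group=2, n_beams=64, n_groups=4)
print(f"transmit energy-only PAR (no UE ID) on resource #{idx}")  # resource #141
```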
PARs can be used by UEs to activate beams through simple illumination of the associated resources and energy detection at the gNB. Hence, a UE indicates its presence to the gNB (where it is camped) by sending a PAR over the resource allocated to the UE's best detected beam. On successful energy detection over that resource, the gNB determines that there is at least one UE present under the coverage of the associated beam and therefore activates that beam for the POs associated with the UE's paging group. Since gNBs utilize non-UE-specific, energy-based detection to activate beams based on received PARs, only a minimal resource allocation is required for PARs, for which an example is given in Sec. V. As mentioned earlier, resources for PARs can be allocated in each paging cycle; however, sending PARs in every paging cycle can incur significant energy consumption at the UE. Therefore, we introduce the activation duration as another new concept. On the reception of a PAR, the gNB activates the associated beam for the activation duration, defined in terms of a number, $N_a$, of paging cycles. An example procedure is shown in Fig. 1(b) for an activation duration of 2 paging cycles, where paging DCI transmissions by a gNB, in each PO, are performed only over the PMOs associated with the activated beams based on received PARs. At the UE side, after sending a PAR for a beam, the UE does not need to send a request for the same beam at least for the next activation duration.

A. Analytic Modeling of Proposed Baseline Scheme

We develop an analytic model to provide insight into the performance of the proposed baseline paging mechanism. In DL, the proposed mechanism reduces the paging resource overhead by activating only a sub-set of beams based on UE presence, compared to all the beams in the legacy NR system; therefore, we derive the average number of activated beams. Afterward, we derive the average number of PARs incurred by a UE, which represents the additional UE overhead associated with the proposed scheme. To determine the average number of dynamically and uniquely activated beams $\bar{n}$ over any period of $N_a$ paging cycles for a specific cell of circular radius $R_c$, we assume that UEs are distributed according to a homogeneous and stationary Poisson Point Process (PPP) with density $\lambda$ (UEs/cell), i.e., the number of UEs follows a Poisson distribution and the UEs' locations are uniformly distributed over the cell coverage. The average number $\bar{n}$ can then be obtained as

$$\bar{n} = \sum_{i=1}^{N_a} \mathbb{E}\{n_i\}, \qquad (1)$$

where $n_i$ is a random variable representing the number of uniquely activated beams in the $i$-th paging cycle which are not activated in any of the $j$-th, $j \in \{i+1, i+2, ..., N_a\}$, paging cycles. Due to the inter-dependency between the random variables $n_i$, $i \in \{1, 2, ..., N_a\}$, we consider a recursive formula (2) to obtain the expected values of $n_i$ $\forall i$, where $u_i$ is a random variable representing the number of UEs under the coverage of the $n_{u,i} = B_{TX} - \sum_{j=i+1}^{N_a} n_j$ beams that are not activated at any of the paging cycles $j \in \{i+1, ..., N_a\}$, for a total number of $B_{TX}$ beams at the gNB, and $n_{u,N_a} = B_{TX}$. A conditional and a joint probability mass function (PMF) are required for the evaluation of (2), expressed in (3) and (4), respectively, where the expression in (4) follows from the PPP distribution assumption. The expected values $\mathbb{E}\{n_i\}$, $i \in \{1, ..., N_a\}$, can then be recursively evaluated according to the conditional and joint PMFs presented in (3) and (4).
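The recursion above can be cross-checked by direct Monte Carlo simulation: drop a Poisson number of UEs in each paging cycle, mark the beams that receive at least one PAR, and average the count of uniquely activated beams. The sketch below assumes beams of equal spatial coverage and an independent UE drop in each cycle, and also evaluates the gain factor $\Gamma$ defined in (5) below; all parameter values are illustrative:

```python
import numpy as np

def avg_active_beams(lam, n_beams, n_a, n_trials=10_000, seed=0):
    """Monte Carlo estimate of the mean number of beams activated at least once
    over n_a paging cycles, assuming a PPP with mean `lam` UEs per cell, beams
    of equal spatial coverage, and an independent UE drop in each cycle."""
    rng = np.random.default_rng(seed)
    totals = np.empty(n_trials)
    for t in range(n_trials):
        active = np.zeros(n_beams, dtype=bool)
        for _ in range(n_a):
            n_ues = rng.poisson(lam)                             # UEs this cycle
            active[rng.integers(0, n_beams, size=n_ues)] = True  # their PARs
        totals[t] = active.sum()
    return totals.mean()

B_TX, N_A, LAM = 64, 4, 20          # illustrative parameters
R_D, R_U = 100.0, 1.0               # illustrative DL/UL resource costs
n_bar = avg_active_beams(LAM, B_TX, N_A)
gamma = 1.0 - (n_bar * R_D + N_A * R_U * B_TX) / (R_D * B_TX)
print(f"n_bar = {n_bar:.1f} of {B_TX} beams, gain = {100 * gamma:.1f}%")
```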
Then, for a specific paging group, we can express the total number of network resources utilized for paging in the NR based solution as R_D × B_TX, where R_D represents the total number of resources utilized per beam over the N_a paging cycles. On the other hand, the total network resources utilized by the proposed solution can be expressed as n̄ × R_D. To account for the UL resources required to send PARs, we use R_U to represent the number of resources required for a single PAR transmission and express the total number of PAR resources over the N_a paging cycles for the specific paging group as N_a × R_U × B_TX. We can then compare the total number of resources required by the proposed scheme to the NR based solution using the gain factor Γ defined as

$$\Gamma = \frac{\bar{n}\, R_D + N_a\, R_U\, B_{TX}}{R_D\, B_{TX}}.$$

As mentioned earlier, due to the energy-based detection of PARs at the gNB, and the resulting dominance of the DL resources R_D over the UL resources R_U × N_a, the gain factor Γ will vary mainly with n̄/B_TX. Additionally, we can evaluate the cost in UE power consumption incurred by the PAR transmissions associated with the proposed solution. The average amount of energy expended by a UE on PAR transmissions over a period of N_a paging cycles is linearly proportional to the average number of PAR transmissions, k̄. Since in the proposed solution a beam is activated for N_a paging cycles following the detection of a corresponding PAR, k̄ for a particular UE will depend on the number of times the UE switches its beam, and can be determined from the probability that the UE's best detected beam changes between paging cycles. Note that, in the above calculation of k̄, for simplicity, we assume that a UE can only keep track of the activation duration of at most one beam at a time; for the performance evaluation using simulations in Sec. V, we will consider the scenario where a UE can keep track of the activation duration of multiple beams.

B. Enhancements for Minimizing UE's Energy Expenditure

In high-frequency bands, a highly directional system is required, leading to higher levels of power consumption. For example, based on the 80 to 100 GHz phased-array transceiver demonstrated in [9], a transceiver that employs 64 or more antennas may result in an estimated transmitter power of 1.1 W. Therefore, we introduce enhancements to the proposed baseline paging scheme that further minimize the number of PAR transmissions required by a particular UE. Two of these enhancements are DL Indication of Active Beams and a Monitoring Duration at the UE. DL Indication of Active Beams: when a gNB activates (i.e., activates an inactive beam) or re-activates (i.e., re-initiates the activation duration counter of an active beam) one or more beams based on received PARs, the gNB sends a DL message, over all of the currently active beams, containing information about the (re-)activated beams. This enables UEs to minimize PAR transmissions by tracking the active beams associated with their paging group. Therefore, when a UE transitions to a beam that has recently been activated, as indicated by a received DL indication message of active beams, it does not need to send a PAR for at least the activation duration after receiving that message.
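A minimal sketch of this UE-side bookkeeping (activation-duration tracking fed by received DL indications) is given below; the event interface and names are illustrative, not part of the proposal's signaling design:

```python
class ActiveBeamTracker:
    """UE-side view of which beams are currently active for its paging group."""

    def __init__(self, n_a: int):
        self.n_a = n_a                 # activation duration, in paging cycles
        self.remaining = {}            # beam index -> cycles still active

    def on_new_cycle(self) -> None:
        """Advance one paging cycle; expired entries are dropped."""
        self.remaining = {b: r - 1 for b, r in self.remaining.items() if r > 1}

    def on_dl_indication(self, beams) -> None:
        """(Re-)activated beams reported by the gNB restart their counters."""
        for b in beams:
            self.remaining[b] = self.n_a

    def par_needed(self, best_beam: int) -> bool:
        """Send a PAR only if the current best beam is not known to be active."""
        if best_beam in self.remaining:
            return False
        self.remaining[best_beam] = self.n_a   # our own PAR activates the beam
        return True
```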
Existing paging DCI messages (with or without a Short Message and/or scheduling information for a paging message) can be used to carry such DL active beam indications. For example, the indication can be a list of beam indices (e.g., the associated SSB indices as used in 3GPP NR) of the (re-)activated beams, or a bit-map whose length equals the total number of supported gNB TX beams, where the bits corresponding to (re-)activated beams are set to '1' and all other bits to '0'. Monitoring Duration: when a UE determines the need to activate a beam (e.g., it transitions to a beam for which it has not received an indication in a DL active beam indication message), it may first monitor the beam for a configured monitoring duration, defined as an integer number N_m of paging cycles. If the UE detects any paging DCI over that beam during the monitoring duration, it determines that the beam has already been activated, without the need to transmit a PAR. Otherwise, the UE transmits a PAR to request the activation of that beam. This empowers a UE to take advantage of other UEs' PAR transmissions that have already activated the desired beam. UEs with different mobility states may be configured with different values of the monitoring duration, N_m. The mobility state of an IDLE/INACTIVE UE can be defined at the cell level, e.g., as a function of the cell re-selection rate as defined in [3], or, for highly directional systems, at the beam level, e.g., as a function of the beam re-selection rate. A UE with high mobility (i.e., a high beam-changing/re-selection rate) may be configured with a lower monitoring duration than a UE with low mobility. This avoids the situation where a UE moves fast enough that its best detected beam changes within the monitoring duration; in that case, the UE may not get the opportunity to activate the desired beam when needed, which can subsequently incur significant paging latency for that UE. The paging latency for a particular UE is defined as the total time from the arrival of a paging request to the successful transmission of the paging message to the UE. Different paging solutions may be enabled by incorporating either or both of the DL indication of active beams and the monitoring duration, along with the activation duration concept described earlier in this section. In Fig. 2, a UE procedure is shown where both the DL indication of active beams and the monitoring duration are enabled. Next, we summarize the State of the Art (SoA) and proposed paging solutions in Sec. III-C and compare the performance of the presented solutions in Sec. V.

C. Summary of SoA and Proposed Paging Solutions

In this section, we present the set of paging solutions that will be subject to evaluation and comparison in Sec. V. These range from the legacy standardized solution [3], to the SoA solution [5], to the solutions proposed in Secs. III and III-B, described as follows:
• Legacy: the current standardized solution in the 3GPP NR specification [3], described in Sec. II.
• MADP: the SoA solution in [5] that minimizes DL resource overhead, where a UE sends a PAR associated with its best beam in every paging cycle, and the gNB activates all beams for which at least one PAR is received.
• MFEP-AD: the proposed baseline minimal feedback enabled paging solution of Sec. III, with PAR-based beam activation held for an activation duration of N_a paging cycles.
• MFEP-DLI: MFEP-AD extended with the DL indication of active beams of Sec. III-B.
• MFEP-MD: MFEP-DLI further extended with the monitoring duration at the UE of Sec. III-B.

IV. ANALYTIC MODEL VERIFICATION AND EVALUATION

In this section, we verify the performance of the analytic model developed in Sec. III-A against simulation. We also use the analytic model to evaluate the performance of the proposed baseline scheme in terms of the average number of activated beams for paging and of PAR transmissions.
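As a quick analytic cross-check before the simulation results (a derivation of ours, not taken from the model above: it additionally assumes equal-coverage beams and UE locations redrawn independently in each paging cycle, matching the verification setup below): by PPP thinning, each beam sees a Poisson(λ/B_TX) number of UEs per cycle, so it remains silent over N_a cycles with probability e^(−N_a λ/B_TX), giving the closed form sketched here.

```python
import math

def mean_activated_beams_closed_form(lam: float, b_tx: int, n_a: int) -> float:
    """n-bar under independent per-cycle UE redraws (assumption; see lead-in)."""
    return b_tx * (1.0 - math.exp(-n_a * lam / b_tx))

# Should closely match the Monte Carlo sketch above,
# e.g., for lam=50, b_tx=64, n_a=3 it gives about 57.9 beams.
```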
For verification, we use Monte Carlo simulation to generate 10,000 random realizations of the UE distribution within the coverage of a cell of radius R_c = 100 m, where each realization corresponds to a single paging cycle with UEs assumed to be stationary. Fig. 3(a) compares the analytic model results to simulation and shows the average number of activated beams (n̄) over N_a = 3 paging cycles, for different UE densities (λ) and total numbers of supported beams (B_TX) per cell. As shown in the figure, the results obtained with the analytic expressions in Sec. III-A closely match the simulation results. Further, as expected, the average number of beams activated by at least one UE increases with the number of UEs. We can also note that, for a given UE density, as the total number of supported beams per cell increases (or, equivalently, as the beam's coverage area shrinks), the average number of beams activated by at least one UE also increases. Most importantly, it can be concluded from the result shown in Fig. 3(a) that as long as the number of UEs within a cell's coverage is less than the total number of supported beams in that cell, the proposed baseline solution of Sec. III-A will require a significantly smaller number of beams to be activated for paging than the total number of supported beams in the legacy NR system. This translates into a significant resource utilization reduction compared to the legacy NR system. On the other hand, Fig. 3(b) shows the average total number of PARs generated by all UEs within a cell's coverage over N_a = 3 paging cycles according to the analytic model presented in Sec. III-A. As the number of beams increases, the probability that a UE stays under the coverage of the same beam across independent location realizations decreases; consequently, we note an increase in the average total number of PAR transmissions. Additionally, as one would expect, the average total number of PAR transmissions increases with the UE density. Please note that these results are based on the analytic model, in which we assume that a UE can keep track of the activation duration of only one beam; once we relax this constraint and also enable the enhancements described in Sec. III-B, the simulation-based results in Sec. V will show that the average number of PAR transmissions can be significantly lower.

V. PERFORMANCE EVALUATION

In this section, we provide an extensive performance evaluation of the paging solutions summarized in Sec. III-C using Monte Carlo simulations.

A. Simulation Assumptions

In our simulations, we consider the system shown in Fig. 4(a), comprising a tracking area of 16 gNBs in a dense urban scenario with an inter-site distance of 200 m [10]. We further consider different UE densities, defined as the number of UEs within a paging group (i.e., supported per PO). A UE density can then be translated into the number of UEs paged per second based on the selection of system parameters such as the paging cycle, the number of paging frames per paging cycle, and the number of POs per paging frame. Additionally, UEs are initially randomly dropped within the simulated tracking area, where we assume that 40% of UEs are stationary, another 40% have low mobility (i.e., a speed of 3 km/h), and the remaining 20% have high mobility (i.e., a speed of 30 km/h); a random walk is considered as the mobility model for the mobile UEs.
The 3GPP FTP traffic model is considered to generate paging requests per UE according to a Poisson distribution with an average arrival rate (λ_p) of 1 packet per 60 seconds [11]. A system bandwidth of 400 MHz and a sub-carrier spacing of 120 kHz are considered, as recommended by 3GPP for millimeter-wave frequency bands [12], which correspond to a total of 264 available RBs for any DL/UL transmission. A paging DCI is transmitted using Control Resource Set 0, which uses one OFDM symbol and 48 resource blocks (RBs) [13]. The PDSCH paging messages containing the paged UEs' IDs are assumed to have the following configuration: a 48-bit UE ID [3], QPSK modulation, and a 0.37 code rate [14]. The DL indication of active beams associated with the MFEP-DLI and MFEP-MD solutions is assumed to be sent via a DCI with bit-map information, as mentioned in Sec. III-B. The considered DCI occupies one OFDM symbol and a number of frequency-domain resources (e.g., RBs) determined by the total number of supported beams. According to 3GPP specifications [4], the considered DCI requires a total of {6, 6, 6, 12, 24} RBs, corresponding to a total of {108, 108, 108, 216, 432} coded bits, to support a total of {16, 32, 64, 128, 256} beams, respectively. For the MFEP-MD scheme, different values of the monitoring duration are configured for different UE mobility states, as mentioned in Sec. III-B. In our simulations, we consider two different configurations, MFEP-MD (4/2/0) and MFEP-MD (6/3/0), where the values (x/y/z) represent the monitoring duration, in number of paging cycles, for UEs with {no, low, high} mobility, respectively. For PAR transmissions, each transmission utilizes minimal time and frequency resources due to the energy-based detection considered at the gNB [5]. In our simulations, we consider a single resource element in the frequency domain and two OFDM symbols in the time domain, which is the minimum number of symbols allocated for a random access preamble transmission in NR at millimeter-wave frequency bands, such that interference to/from other control/data transmissions can be avoided. The remaining parameters considered in the simulations are provided in Table I.

B. Simulation Results

We first present the results for a fixed total of 64 supported beams per cell while varying the UE density. Fig. 4(b) shows the average number of network resources utilized (for both DL paging-related and UL PAR transmissions) per paging cycle per cell for the different solutions. We note a significant reduction in paging-related resource utilization for the MADP and the proposed solutions compared to the Legacy solution. This gain results from the utilization of a much lower number of beams for paging transmissions compared to all 64 beams in the Legacy system. We also note that all the proposed solutions utilize approximately the same amount of resources for paging as the MADP scheme over the simulated range of UE densities. For example, the paging resource utilization of the MADP and the proposed solutions corresponds to an 80-81% reduction in resources compared to the Legacy solution at a UE density of 200 UEs/PO. On the other hand, we show in Fig. 4(c) that our proposed solutions result in a much lower number of UL PAR transmissions compared to the MADP solution, which makes the proposed solutions more favorable in terms of UE energy consumption.
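To make these resource figures concrete, a back-of-the-envelope DL accounting under the stated numerology can be sketched as follows. This is a rough sketch of ours: DM-RS, CRC, and padding overheads are ignored, the per-beam repetition of the paging message is an assumption, and "24 coded bits per RB-symbol" simply reflects 12 QPSK resource elements.

```python
import math

def dl_paging_rb_symbols(n_paged: int, beams_used: int, dli_rbs: int = 0) -> int:
    """Approximate DL resources per PO, in RB-symbols, for one cell.

    Stated configuration: paging DCI = 1 OFDM symbol x 48 RBs (CORESET 0);
    paging message = 48-bit UE IDs, QPSK, code rate 0.37. dli_rbs > 0 only
    for schemes sending the one-symbol DL indication (MFEP-DLI/MFEP-MD).
    Everything is repeated on each beam used for paging.
    """
    dci = 48                                        # RB-symbols, paging DCI
    pdsch = math.ceil(n_paged * 48 / 0.37 / 24)     # RB-symbols, paging message
    return beams_used * (dci + pdsch + dli_rbs)

# Legacy sweeps all beams:            dl_paging_rb_symbols(10, beams_used=64)
# MFEP-DLI, 12 active beams, 64-beam bitmap DCI (6 RBs):
#                                     dl_paging_rb_symbols(10, beams_used=12, dli_rbs=6)
```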
We also note that the number of PARs in the MFEP-DLI and MFEP-MD solutions decreases further as the UE density increases, which is a result of the DL indication of active beams and the monitoring duration that allow UEs to benefit from the beams already activated by other UEs. We further note that the monitoring duration configured in the MFEP-MD solution, which forces a UE to first monitor a beam before deciding to transmit a PAR, provides a further reduction in DL resource utilization and UE energy consumption compared to the MFEP-DLI and MFEP-AD solutions. The reduction in the number of PAR transmissions for the proposed solutions comes at the cost of either a slight increase in DL resource overhead for the MFEP-DLI solution compared to the MADP and MFEP-AD solutions, i.e., due to the transmission of DL indications of active beams, or an increase in paging latency (discussed next) for the MFEP-MD solution compared to the other solutions, i.e., due to the configured monitoring duration. We now show in Fig. 4(d) the latency incurred by the MFEP-MD solution. We first note that increasing the monitoring duration, i.e., MFEP-MD(6/3/0) vs. MFEP-MD(4/2/0), increases the paging latency experienced by UEs over the simulated UE densities, but on the other hand yields a larger reduction in DL resource utilization and UE energy consumption, as shown in Figs. 4(b) and 4(c). We then note that, for the considered arrival rate of paging requests (λ_p = 1 packet per 60 seconds, i.e., 1/60), the paging latency first increases with the UE density up to the point where UEs can start taking advantage of the beams activated by other UEs. However, this upward trend at low UE densities vanishes as we increase λ_p to 1/3, as shown in Fig. 4(d). This suggests that the MFEP-MD solution may be preferred for specific combinations of UE density and paging request arrival rate. Further, for the simulated range of UE densities, all the other solutions experience no paging latency. However, this might not hold for a sufficiently high UE density and/or paging request arrival rate, when the network might not be able to page all the UEs within the paging cycle in which the associated paging requests arrive. Next, we present in Fig. 5 the results for a fixed UE density of 500 UEs/PO while varying the total number of supported beams. Fig. 5(a) shows that the resource utilization for paging in the Legacy solution increases significantly and linearly with the number of supported beams, whereas for the MADP and the proposed solutions the increase is slight and the required resources eventually saturate. Therefore, the resource utilization reductions of the MADP and the proposed solutions compared to the Legacy solution become more prominent as the number of beams increases. We note that, in general, the amount of resources in the MADP and the proposed solutions depends on both the UE density and the number of beams required to cover the UEs within a specific cell. Subsequently, and despite the fixed UE density, we observe a slight increase in the resource utilization of the MADP and the proposed solutions with the number of supported beams, due to the reduction of the beam coverage area. An additional reason for the increase in resource utilization for the MFEP-DLI and MFEP-MD solutions is the increase in the total number of bits required to send a DL indication of active beams.
Further, we note that there is a slight increase in the number of resources utilized by the proposed solutions compared to the MADP solution as the number of supported beams increases. This is again due to the reduction in the beams' coverage areas, which leads to a higher rate of beam switching for the mobile UEs. The higher rate of beam switching results in scenarios where a beam remains unnecessarily activated, owing to the configured activation duration, while there are no UEs to be served under its coverage. Therefore, the beam activation duration should take into account both the mobility states of the served UEs and the supported beam's coverage area, i.e., the number of supported beams per cell. Another consequence of the increased beam switching rate of mobile UEs as the total number of supported beams grows is shown in Fig. 5(b), where we note a smaller reduction in the number of PARs for the proposed solutions compared to the MADP solution. However, it is still clear that the number of PARs transmitted by the proposed solutions is significantly lower than that required by the MADP solution, even with a high number of beams (e.g., 256). For the MFEP-MD solution, a trend similar to that described earlier relative to the other paging solutions is observed, where the higher reduction in the number of PARs is achieved at the cost of additional paging latency, as shown in Fig. 5(c). We also note that the paging latency of the MFEP-MD solution increases as the number of supported beams increases, and that the latency is usually lower for a lower monitoring duration. This is also a result of the increase in the beam switching rate, which follows from the UEs' mobility and the beam coverage area, combined with the configured monitoring duration, as described earlier. Therefore, the monitoring duration configuration should take into account the expected beam switching rate.

VI. CONCLUSIONS

In this paper, we considered the resource overhead problem associated with paging in highly directional systems based on the legacy 3GPP NR solution. We proposed a few variants of a minimal-feedback-based paging solution, which can significantly reduce the paging resource overhead for reasonable UE densities by activating only the subset of beams that provide coverage to IDLE/INACTIVE UEs, based on their presence as indicated by PARs. The MADP [5] and the proposed solutions achieve more than an 80% reduction in paging resources compared to the legacy 3GPP solution in a system supporting 64 beams per gNB at a UE density of 200 UEs/PO. Moreover, the proposed solutions incur significantly lower energy consumption at the UE compared to the MADP solution. Finally, we note that the network's performance can be optimized to achieve efficient paging resource utilization at reasonable UE energy consumption and paging latency through a careful choice of the activation and monitoring duration configuration. The choice of such a configuration, which will be considered in our future work, should depend on the total number of supported beams per cell, the UE density, the paging request arrival rate, as well as the UEs' mobility states.
Epigenetics and Diabetes: Current and Perspective

Type 2 diabetes mellitus is a polygenic multifactorial disease characterised by hyperglycaemia and altered lipid metabolism due to impaired insulin secretion from pancreatic β-cells. Today, it is well established that combinations of non-genetic and genetic risk factors influence the susceptibility for Type 2 diabetes. While obesity, physical inactivity, and aging represent non-genetic risk factors for Type 2 diabetes, genome-wide association studies have identified more than 40 polymorphisms associated with an increased risk for the disease [3-7].

Introduction

Diabetes is undoubtedly one of the most challenging health problems of the 21st century. The estimated worldwide prevalence of diabetes among adults was 366 million in 2011; by 2030 this will have risen to 552 million. Type 2 diabetes is the predominant form and accounts for at least 90% of cases. 80% of people with diabetes live in low- and middle-income countries, and the greatest numbers of people with diabetes are between 40 and 59 years of age [1,2].

Recent studies show that epigenetic factors, including DNA methylation and histone modification, may affect the susceptibility for Type 2 diabetes [8]. Environmental susceptibility factors also contribute to the risk of developing Type 1 diabetes. From an epigenetic standpoint, the pathologic mechanisms involved in the development of Type 1 diabetes may include DNA methylation, histone modification, microRNA, and molecular mimicry. These mechanisms may act through the regulation of gene expression, thereby affecting the immune system response toward islet beta cells [9]. This review will provide recent evidence from the literature supporting the immediate need for further investigation to uncover the power of epigenetics in the prediction, prevention and treatment of Type 1 and 2 diabetes.

Epigenetic mechanisms include DNA methylation, histone modifications and non-coding RNAs, including miRNAs. These epigenetic changes are potentially reversible and modulated by the environment, diet or pharmacological intervention (Figure 1). DNA methylation is a genomic modification that can influence gene activity. It occurs almost exclusively at the cytosine of CpG dinucleotides, which tend to cluster in regions called "CpG islands". The primary function of DNA methylation is to actively silence genes and DNA regions in which transcription is not desired [10]. Modifications of the histones result in conformational changes of the chromatin that alter the access of transcription factors to promoters. These modifications, including acetylation, methylation, phosphorylation, and ubiquitination, alter the interaction between the histones, DNA and nuclear proteins, thereby affecting gene transcription and regulating gene silencing or expression [10]. A third mechanism involves short non-coding RNAs, whose expression can lead to translational silencing through specific binding and eventual degradation of the transcribed RNA.
MicroRNAs (miRNAs) can also regulate DNA methylation and histone modifications [11].

Epigenetic Pathogenesis for Type 1 Diabetes

Type 1 diabetes (T1D) is a complex autoimmune disease involving the interaction of numerous genes and environmental factors, which may be regulated by epigenetic mechanisms. It results from the immune-mediated destruction of the insulin-secreting β-cells that reside in the pancreatic islets of Langerhans. A number of linkage and genome-wide association studies have identified the major risk loci for Type 1 diabetes. The DR3, DR4 and DQ2 susceptibility loci are located within the major histocompatibility complex (MHC, also referred to as the human leukocyte antigen, HLA) class II region and can account for more than 40% of the increased risk [12,13]. Although genetic susceptibility and environmental factors have been implicated as key contributors to disease risk, they are not sufficient to explain the increased worldwide incidence of T1D. Epigenetics may explain, at least in part, these phenomena. Epigenetic mechanisms, including DNA methylation, post-translational modifications of histones and the activation of microRNAs, could play a role in the initiation or progression of autoimmunity, or alter the target tissue in such a way as to increase the probability that it will be targeted by an autoimmune attack. T1D results from a T cell-mediated autoimmune attack on pancreatic β-cells. CD4+ T helper cells are major protagonists in β-cell autoimmunity. Abnormal global methylation of CD4+ T cells has been observed in patients with latent autoimmune diabetes in adults compared with healthy control subjects. Moreover, histone modifications associated with the insulin gene and other developmental regulators required for β-cell development could modify the maturation or function of β-cells, resulting in a predisposition for autoimmune diabetes. Additionally, aberrant gene expression could also result in the targeting of β-cells by the autoimmune response [14].

Epigenetic Pathogenesis for Type 2 Diabetes

Epigenetic research into Type 2 diabetes (T2D) is still a very young field. The role of epigenetic mechanisms in the etiology of this disorder and related metabolic abnormalities such as obesity, dyslipidemia, hypertension, and hyperglycemia is not well elucidated. Notably, epigenetic effects may also be affected by the environment, making them potentially important pathogenic mechanisms in complex multifactorial diseases such as Type 2 diabetes (Figure 2). Important evidence for a role of epigenetic factors in the pathogenesis of T2D comes from a data-mining analysis of more than 12 million Medline records [15]. The study found that methylation and chromatin are top hits implicitly related to T2D. The epigenetics of T2D thus concerns the interaction between gene regulation and epidemiology, where gene regulation can take the form of DNA methylation, histone modification or RNA activation, and can be affected by different epidemiological factors, namely age, obesity, nutrition, physical activity and the intrauterine environment [16-18]. Epigenetic mechanisms such as DNA methylation and histone modifications are increasingly considered to be important in phenotype transmission and the development of T2D.
In differentiated mammalian cells, the addition of methyl groups to DNA occurs on cytosine residues, and these modifications are mostly established in the context of cytosine-guanine dinucleotides (CpGs), a reaction carried out by various members of a single family of enzymes. DNA methylation is commonly associated with gene silencing and contributes to X-chromosome inactivation, genomic imprinting and the transcriptional regulation of tissue-specific genes during cellular differentiation [16]. Although data-mining analysis has suggested a role for epigenetic factors in the pathogenesis of Type 2 diabetes [19], only a limited number of studies have examined epigenetic changes in target tissues from patients with Type 2 diabetes. A functional study evaluating epigenetics in human T2D tissue concerns peroxisome proliferator-activated receptor gamma coactivator 1 alpha (also known as PGC-1α, and encoded by PPARGC1A), a transcriptional coactivator of mitochondrial genes involved in normal ATP production and insulin secretion from the pancreatic beta cells: the level of DNA methylation was found to be increased in a promoter region of PPARGC1A in pancreatic islets from patients with T2D, as compared with islets from healthy human donors [20]. Moreover, a global analysis of DNA methylation in skeletal muscle revealed that people with a family history of T2DM have differential DNA methylation of genes involved in muscle function and in insulin and calcium signaling [21]. Furthermore, a link between histone modification and metabolism is evident from the observation that loss of histone demethylase (JHDM2A) function leads to obesity and decreased expression of metabolically relevant genes, including peroxisome proliferator-activated receptor alpha (PPARA) and uncoupling protein 1 (UCP1). Similar to DNA methylation, histone modifications also provide a molecular link between a sedentary lifestyle and the development of T2DM [19]. Clearly, the contribution of epigenetic regulation to the manifestation of metabolic disease remains to be completely described.

Epigenetic Modifications and Diabetic Complications

Diabetes and metabolic disorders are leading causes of micro- and macrovascular complications such as atherosclerosis, hypertension, nephropathy, retinopathy and neuropathy. One major event in the progression of diabetic complications is vascular inflammation with increased expression of inflammatory genes. Enhanced oxidative stress, dyslipidemia, and hyperglycemia have also been suggested to influence the development of diabetic complications [22]. Cardiovascular complications remain the major cause of morbidity and mortality in the diabetic population. It is increasingly appreciated that exposure to high glucose is the major factor leading to these complications. Recent studies have proposed that hyperglycemia may induce epigenetic modifications of genes involved in vascular inflammation. Such studies have led to the view that the transcriptional determinant nuclear factor (NF)-κB, which is readily activated by hyperglycemia, plays a pivotal role in diabetic vascular complications [23]. Furthermore, NF-κB activation leads to the upregulation of molecules such as the chemokine monocyte chemotactic protein (MCP)-1 and adhesion molecules such as vascular cell adhesion molecule (VCAM)-1, which have been extensively investigated in atherosclerosis [24].
Epigenetic mechanisms such as post-translational modification of histones and DNA methylation also play central roles in gene regulation by affecting chromatin structure and function. Recent studies have suggested that hyperglycemia induces DNA methylation changes that persist into the metabolic memory state. Atherosclerosis has been associated with global hypomethylation in vascular smooth muscle cells (VSMCs) of atherosclerotic lesions from humans [22]. In addition, several studies have implicated miRNAs in diabetes pathogenesis. However, the role of miRNAs in diabetic vascular complications is less studied. Evidence shows that miRNAs can affect the function of both endothelial cells (ECs) and vascular smooth muscle cells (VSMCs) relevant to vascular diseases [25-27].

Diabetic nephropathy

In diabetic nephropathy (DN), tubulointerstitial fibrosis, due to increased expression of extracellular matrix proteins such as collagens and fibronectins, is initiated and sustained by a number of different factors including the transforming growth factor-beta (TGF-β) family. This family of inflammation mediators is documented to be aberrantly expressed in metabolic memory, implicating TGF-β as a major mediator of epigenetic events in DN [28,29].

Diabetic retinopathy

A role for epigenetic mechanisms in the pathogenesis of diabetic retinopathy (DR) has recently been proposed. The first of the relevant studies examined the control of VEGF (significant in both the early and late stages of DR) by miR-200b. The second revealed that the activity of the matrix metalloproteinases MMP2 and MMP9 causes mitochondrial DNA (mtDNA) damage and degradation of mitochondrial membranes in retinal capillary cells, which in turn induces apoptosis of these cells [30].

Epigenetics and Diabetes Treatment

Significant advances in the treatment of Type 2 diabetes mellitus (T2DM) include the implementation of prevention efforts aimed at delaying the progression of glucose intolerance to overt diabetes mellitus (DM) and the development of new classes of blood glucose-lowering medications to supplement existing therapies. While the current management approach for T2DM continues to encompass traditional drugs that target β-cell failure and/or insulin resistance, newer agents that target other defects (e.g., incretin deficiency/resistance) are increasingly incorporated [31]. However, evidence shows that current therapies based on these mechanisms are not fully efficacious in preventing complications, suggesting the need to identify novel therapeutic targets. In particular, it has been noted that some individuals with diabetes experience a continued progression of vascular complications even after glycaemic control is achieved subsequent to a period of prior hyperglycaemic exposure, a phenomenon termed 'metabolic memory' [32,33]. Studies have shown that environmental effects can induce epigenetic alterations. These alterations ultimately affect the expression of key genes linked to the development of T2DM, including genes critical for pancreatic development and β-cell function, peripheral glucose uptake and insulin resistance. Reversal of epigenetic mechanisms, or 'epigenetic therapy', might unveil a critical window during which epigenetic therapeutic agents could be used as a means to prevent the later development of the disease.
Three specific epigenetic mechanisms currently under investigation include attempts to silence risk genes by enhancing the methylation of gene promoters or their downstream products; attempts to activate helpful genes by inhibiting the enzymes called histone deacetylases; and the use of small RNAs known as microRNAs (miRNAs), which have emerged as a potential target for Type 2 diabetes therapies. Due to their highly conserved active domain, histone deacetylases (HDACs) have been extensively studied for the development of inhibitors. Most of the currently designed inhibitors fall into four broad classes, short-chain fatty acids (SCFAs), hydroxamates, benzamides, and cyclic tetrapeptides, which for the most part target the class I and class II HDACs [34]. Several inhibitors of the class III HDACs (sirtuins) have now been synthesized and also show therapeutic potential. Additionally, natural prodrugs that target histone deacetylases have been isolated, including sulforaphane (SFN), diallyl disulfide (DADS), and resveratrol [35-39]. Given the emerging body of evidence linking aberrant miRNA expression with diabetes pathogenesis, it is becoming increasingly clear that these ncRNAs may have utility in either the management or the treatment of this disease. The possibility of using miRNA and/or siRNA to target diabetes has recently been extensively reviewed [40]. A greater understanding of miRNA target identification/validation, their roles in diabetes pathogenesis, and mechanisms of specific delivery will be required before they can be evaluated as a therapeutic modality.

Conclusion

Diabetes is a multifactorial disease involving interactions between genetic and environmental factors. Alarming estimates indicate that the rates of diabetes and associated complications are rapidly increasing, and additional strategies to curb these trends are therefore needed. Epigenetics provides a mechanism which may explain the etiology of diabetes and the diversity of phenotypes in the general population. Although there is support for a role for epigenetics in the pathogenesis of diabetes and its complications, conclusive studies from human diabetes tissues are limited. The perspective on epigenetic control is slowly shifting from the view that genomic imprints are irreversibly fixed to the notion that epigenetic DNA modifications can be rapid, reversible, and responsive to both environmental and lifestyle inputs. It may thus be possible to test epigenetic drugs as putative novel treatments for diabetes and its complications.
Contribution of Individual- and Neighborhood-Level Social, Demographic, and Health Factors to COVID-19 Hospitalization Outcomes

Patient outcomes in COVID-19 vary from asymptomatic infection to death. Using data from 38 hospitals in Michigan, the social vulnerability index, a composite measure of social disadvantage, was used to compare the outcomes of COVID-19 in patients living in low-vulnerability ZIP codes to those of patients living in high-vulnerability ZIP codes. Outcomes examined included acute organ dysfunction, organ failure, invasive mechanical ventilation, intensive care unit stay, mortality, and discharge disposition.

Disparities in COVID-19 incidence and outcomes related to patient characteristics (such as race or ethnicity) and geographic areas (such as neighborhoods) are well known (1-4). For example, in a recent systematic review, Black and Hispanic populations were found to experience disproportionate burdens of COVID-19 infection, hospitalization, and overall mortality (4). We previously found that U.S. counties with higher levels of social vulnerability or disadvantage (based on socioeconomic status, housing, and other factors) experienced greater COVID-19 incidence and mortality (3). Although we know that where a person lives affects their health, the interplay between individual- and neighborhood-level social, demographic, and health factors in COVID-19 outcomes is complex and understudied for hospital-based outcomes (5). Understanding the contributions of these domains to health outcomes is important for public health and health care policy. Previous studies that have sought to understand disparities in COVID-19 health outcomes have been limited to cross-sectional or cohort studies of patients at a single health care system (6) or to ecological studies analyzing population-level as opposed to patient-level data (4). Cohort studies from multiple health care systems are uncommon, and those that have been published have not been able to disentangle the contributions of a patient's individual clinical factors from neighborhood contextual effects on COVID-19 outcomes. In addition, less is known about what factors influence disparities in COVID-19 outcomes, whether related to greater exposure to COVID-19 infection, greater susceptibility to infection after exposure, or differential access to care (4). The social vulnerability index (SVI), developed by the Centers for Disease Control and Prevention, provides an aggregate measure of neighborhood social factors known to affect public health crises, including disease outbreaks (7). Because it has been used frequently by public health authorities to investigate populations at higher risk for COVID-19, the SVI represents an ideal tool with which to examine how social factors may or may not contribute to COVID-19 outcomes (7-9). Therefore, we used data from a multihospital cohort and ZIP code-linked SVI to quantify the contributions of individual- and neighborhood-level factors to COVID-19 hospitalization outcomes.

METHODS

We performed a pooled cross-sectional study using data from patients hospitalized at 38 Michigan hospitals participating in a statewide collaborative quality improvement registry called MI-COVID19. Details regarding the MI-COVID19 registry (funded by the Blue Cross Blue Shield of Michigan/Blue Care Network of Michigan) have been previously published (10). This study was deemed "not regulated" by the University of Michigan Institutional Review Board (HUM00179611). In brief, trained abstractors collected data by reviewing patient medical records using a structured template.
Patients were included in the study if they had either a positive COVID-19 test result during or up to 21 days before the hospital encounter; a negative COVID-19 test result during or up to 21 days before the hospital encounter with symptoms of cough, dyspnea, or fever, or a discharge diagnosis of COVID-19 in the medical chart; or a strong clinical suspicion of COVID-19 infection that was documented but could not be confirmed via testing because of logistic constraints. Patients were excluded if they were pregnant, were younger than 18 years, left against medical advice, entered comfort care or hospice within 3 hours of the hospital encounter, or had a length of stay greater than 120 days during the index encounter, or if the patient discharge was within the 60-day follow-up window of a previously recorded or abstracted admission. Sixty days after discharge, abstractors reviewed the medical records of patients to collect data on clinical events, including readmission (to the index hospital or any hospital viewable in the medical record) and postdischarge death. For this analysis, we excluded any patients who tested negative for COVID-19, who were discharged with an unconfirmed diagnosis of COVID-19, whose ZIP code was not within the state of Michigan, or who had a nonresidential ZIP code (for example, a post office box). In addition, 144 patients from 12 participating hospitals with fewer than 25 patients with COVID-19 in the registry, classified as low-volume hospitals, were excluded from the main analyses. However, sensitivity analyses were performed including these patients, as noted in the following discussion. Figure 1 presents the sample inclusion and exclusion criteria.

COVID-19 Hospitalization Outcomes

Our main COVID-19 outcomes included development of acute organ dysfunction, development of organ failure, use of invasive mechanical ventilation, intensive care unit stay, in-hospital death, and discharge disposition. Patients were classified as having acute organ dysfunction using the Centers for Disease Control and Prevention's Adult Sepsis Event definition as follows: acute renal dysfunction (creatinine level greater than 1.5 times baseline among patients without preexisting end-stage renal disease, where baseline is the lowest creatinine level during hospitalization); acute hematologic dysfunction (platelet count <100 × 10⁹ cells/L, with ≥50% decrease compared with baseline); and acute liver dysfunction (total bilirubin >34.2 μmol/L [>2.0 mg/dL], with ≥50% increase compared with baseline). Patients were classified as having acute organ failure if they died during hospitalization or received at least 1 of the following therapies: heated high-flow nasal cannula, noninvasive ventilation (bilevel positive airway pressure or continuous positive airway pressure), invasive mechanical ventilation, dialysis or renal replacement therapy, or vasopressor support.

Neighborhood Social Disadvantage

Clinical data abstracted from patient charts (for example, patient characteristics, intensive care unit status, clinical characteristics) were merged with the SVI to understand how neighborhood factors influenced COVID-19 outcomes. Developed by the Centers for Disease Control and Prevention, the SVI provides a composite measure of community susceptibility to adversities in the face of health shocks and includes 4 subindices: socioeconomic status, household composition and disability, racial or ethnic minority status and language, and housing type and transportation (7).
See Appendix Table 1 (available at Annals.org) for the component measures of each subindex. The index is a percentile rank, ranging from 0 to 1, with higher values indicating greater social vulnerability or disadvantage. We transformed the SVI reported at the census tract level to the ZIP code level using a population-weighted average within each ZIP code. We hypothesized that patients from ZIP codes with higher SVI (that is, greater neighborhood disadvantage) would have poorer COVID-19 hospital outcomes. Thus, if neighborhood disadvantage effects on COVID-19 hospitalization outcomes are independent of individual patient clinical risk factors (for example, age, comorbid conditions), we would anticipate that SVI would be associated with poorer outcomes even after controlling for patient factors.

Covariates

Individual-level patient covariates included demographic characteristics (age, sex) and baseline clinical characteristics (11), with the exception of creatinine level, owing to greater than 10% missingness of this variable in our sample.

Statistical Analysis

Descriptive statistics were used to describe the patient cohort living in a ZIP code with an SVI rating in the highest quartile compared with all others. To determine whether COVID-19 hospitalization outcomes were related to neighborhood SVI, mixed-effects logistic regression models were fit for each of the outcomes using melogit in Stata (StataCorp). The composite SVI and its subindices were included as continuous variables in separate models to avoid multicollinearity. Our primary models controlled for time, using a categorical variable corresponding to the COVID-19 surges in Michigan, and for clinical patient factors associated with COVID-19 outcomes, in addition to a hospital-level random intercept to account for within-hospital correlation. To disentangle the individual effect of patient ZIP code SVI from the cluster-level effect of hospitals, hospital-level mean SVI exposures were included in all models. Postestimation predictive margins were used to estimate the absolute risk for each outcome ("baseline" percentage) for a patient living in a ZIP code with an overall or subindex SVI score of 0.5, and the change in risk associated with an increase in the index by 0.25 (the percentage-point change for an increase of 1 quartile in the SVI). To ensure rigor, sensitivity analyses were conducted by repeating the analyses in a subsample excluding patients admitted through hospital transfer, and in the full sample including the patients from low-volume hospitals and transferred patients. Additionally, we repeated the analysis using a logistic regression model with cluster-robust standard errors in the main analytic sample excluding patients from low-volume hospitals. We estimated E-values as the degree of association or confounding, on the relative risk (RR) scale, between an unobserved variable and the outcome and between that variable and the SVI, that would have to be present to explain away the differences in outcomes associated with the SVI. The estimated confounding RRs, from 1.8 to 2.5, suggest that an unmeasured confounder not already represented by observed covariates would need to be moderate or large to produce these significant associations. We could not identify such large confounders, and our results are robust to this source of bias. All analyses were performed in SAS, version 9.4 (SAS Institute), and Stata, version 16 (StataCorp), with α set at 0.05.
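Two of the data-preparation steps above are simple enough to sketch. The snippet below is illustrative (the column names are assumptions, not from the registry), and uses the standard VanderWeele-Ding E-value formula:

```python
import math
import pandas as pd

def zip_level_svi(tracts: pd.DataFrame) -> pd.Series:
    """Population-weighted average of census-tract SVI within each ZIP code.
    Expects columns 'zip', 'svi', 'population' (illustrative names)."""
    weighted = (tracts["svi"] * tracts["population"]).groupby(tracts["zip"]).sum()
    return weighted / tracts.groupby("zip")["population"].sum()

def e_value(rr: float) -> float:
    """E-value on the risk-ratio scale; for protective associations
    (rr < 1), the reciprocal is taken first."""
    rr = max(rr, 1.0 / rr)
    return rr + math.sqrt(rr * (rr - 1.0))

# e.g., e_value(1.4) ~ 2.15: a confounder would need an RR of about 2.15
# with both the exposure and the outcome to explain away an observed RR of 1.4.
```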
Role of the Funding Source

The groups funding this research had no role in the design, conduct, or analysis of data for this manuscript. They also played no role in the authors' decision to submit the manuscript for publication.

RESULTS

Data from 2678 patients with COVID-19 who were hospitalized between March and December 2020 were available. After exclusion criteria were applied, data from 2309 patients were included in the analysis. The distribution of the overall SVI (median, 0.50; range, 0.04 to 0.96) and of the 4 subindices, socioeconomic status (median, 0.48; range, 0.04 to 0.90), household composition and disability (median, 0.61; range, 0.07 to 0.99), minority status and language (median, 0.51; range, 0.08 to 0.92), and housing type and transportation (median, 0.55; range, 0.07 to 0.95), suggests significant variation in neighborhood social disadvantage. Similarly, the hospital mean SVI exposure for the overall SVI (median, 0.48; range, 0.20 to 0.81) and the 4 subindices, socioeconomic status (median, 0.45; range, 0.28 to 0.74), household composition and disability (median, 0.63; range, 0.18 to 0.77), minority status and language (median, 0.51; range, 0.31 to 0.68), and housing type and transportation (median, 0.50; range, 0.19 to 0.79), also showed wide variability between the hospitals in our sample. Appendix Table 2 (available at Annals.org) shows the within-hospital variation among the participating hospitals, along with the distribution of patients living in high- and low-vulnerability ZIP codes across the hospitals included in the analysis. Patients living in high-vulnerability ZIP codes were younger, were more often Black or Hispanic, had more comorbid conditions, and more frequently had Medicaid insurance than patients in lower-vulnerability ZIP codes (Table 1). These patients from high social vulnerability ZIP codes differed in pulse oximetry findings (5.4% in high-vulnerability vs. 3.5% in low-vulnerability ZIP codes had oxygen saturation ≤80%) and respiratory rate on admission (58.8% vs. 59.5%, respectively, had abnormal respiratory rates ≥20 breaths/min), compared with patients from other ZIP codes. Patients from high-vulnerability ZIP codes also more frequently were treated in the intensive care unit (29.0% vs. 24.5%), received mechanical ventilation (19.3% vs. 14.2%), and were discharged to home (62.1% vs. 60.1%). Compared with patients from low-vulnerability ZIP codes, those from high-vulnerability ZIP codes also had higher rates of acute organ dysfunction (51.9% vs. 48.6%), organ failure (54.7% vs. 51.6%), and in-hospital death (19.4% vs. 16.7%) in these unadjusted data.

Association Between Neighborhood Social Disadvantage and COVID-19 Outcomes

In mixed-effects regression analyses adjusting for individual patient clinical characteristics, time period, and mean hospital SVI exposure, a patient's neighborhood SVI was associated with receipt of mechanical ventilation, development of acute organ dysfunction, and development of acute organ failure. For example, a patient living in a ZIP code with an SVI of 0.5, such as Ludington, Michigan (a small harbor town in Northern Michigan), was estimated to experience an absolute risk for mechanical ventilation of 14.7%, acute organ dysfunction of 48.8%, and acute organ failure of 52.1% (Table 2).
In comparison, a patient living in a ZIP code in inner-city Detroit, with an SVI higher by 0.25 (1 quartile above), had an increase in the risk for mechanical ventilation of 2.1 percentage points, acute organ dysfunction of 2.8 percentage points, and acute organ failure of 2.8 percentage points (Table 2). Investigation of the SVI subindices showed that patients living in ZIP codes with higher socioeconomic status subindex scores experienced greater risk for these adverse outcomes. Likewise, patients living in ZIP codes with higher household composition and disability subindex scores also experienced greater risk for developing acute organ dysfunction (Δ risk = 3.3 percentage points) and acute organ failure (Δ risk = 3.3 percentage points), whereas patients living in ZIP codes with higher minority status and language subindex scores had greater risk for developing acute organ failure (Δ risk = 3.0 percentage points). Hospital SVI exposure showed no significant association with COVID-19 outcomes across all models (Appendix Table 3, available at Annals.org).

Sensitivity Analysis

In one sensitivity analysis, we excluded all patients who were transferred from another hospital (n = 130); in another, we included patients from low-volume hospitals that were not included in the main analyses. The association between a patient's SVI and COVID-19 outcomes was attenuated in these models compared with our main specification (Appendix Tables 4 and 5, available at Annals.org). The population transferred to another hospital was notably more severely ill than the baseline population (Appendix Table 4). Additional sensitivity analyses in the full sample, which included patients from low-volume hospitals and patients who were transferred from another hospital, did not show any major differences from the study findings (Appendix Tables 6 and 7, available at Annals.org). An alternative analytic approach using logistic regression models with cluster-robust standard errors also did not show any significant variation from our main findings, demonstrating the robustness of our methods.

DISCUSSION

In this multihospital study of patients hospitalized for COVID-19, we found that persons living in neighborhoods with greater social vulnerability were more likely to receive mechanical ventilation, experience acute organ dysfunction, and develop acute organ failure. These associations remained significant after adjustment for patient demographic and clinical characteristics, suggesting that much of the neighborhood social disadvantage effect we observed was independent of important individual-level factors related to patients' age and preexisting comorbid conditions.
The association between patient ZIP code social vulnerability and COVID-19 hospitalization outcomes also remained significant after adjustment for hospital social vulnerability "case mix," suggesting that patients' neighborhood social disadvantage influences outcomes more than variation across hospitals caring for patients from high- versus low-vulnerability areas. Taken together, these findings suggest that patients' neighborhood social disadvantage affects hospital outcomes, including the need for mechanical ventilation and the severity of organ dysfunction. Our findings shed important light on the various contributors to racial and ethnic disparities in outcomes after COVID-19 hospitalization. Whether these disparities are driven by greater exposure risk due to housing, transportation, or other factors; greater susceptibility to infection after exposure; patients' underlying medical conditions; or differential access to care such that some people delay seeking care and consequently present to the hospital sicker remains unclear (4). Although several prior studies (including those performed by our group) (1-4) have found that patient race or ethnicity and social vulnerability are associated with higher overall COVID-19 mortality, we found no significant association between neighborhood social vulnerability and in-hospital mortality in this analysis (3, 5, 12-14). This observation echoes the conclusion of a recent systematic review by Mackey and colleagues (4), who (despite disparities in overall mortality) also reported no association between race, ethnicity, and case-fatality rates among those confirmed to have COVID-19. Our findings instead suggest that patients from socially vulnerable neighborhoods may present to the hospital in a sicker state, leading to more intensive care in the hospital. However, we find that once patients were hospitalized, neighborhood social factors did not influence the outcomes of mortality and discharge disposition. Our study adds to a growing literature examining the impact of structural racism on COVID-19 outcomes (15-17). For example, a recent study from Minnesota found that persons belonging to racial or ethnic minority groups had higher COVID-19 mortality rates than White persons, related both to living in less advantaged neighborhoods and to higher residual mortality even when living within the same level of neighborhood disadvantage (18). Thus, both the Minnesota study and our Michigan study point to the importance of neighborhood-level disadvantage in COVID-19 outcomes, but the Minnesota study also supports the notion that the systemic and structural inequalities experienced by persons in racial and ethnic minority groups cannot be elucidated by neighborhood contextual factors alone. Rather, policymakers must consider both individual social risks, such as poor-quality and segregated housing and difficulty accessing care, and neighborhood social risks, such as poor transportation networks, when devising strategies to mitigate the impact of COVID-19 in specific populations. Attention to these "upstream," prehospital aspects of health quality and health care delivery may offset "downstream" outcomes following hospitalization for COVID-19. Our study has limitations, including a focus on hospitalizations in 1 state and the observational nature of the data. As well, potential missing documentation in chart abstraction, and data reflecting trends related to changing COVID-19 variants, remain threats to inference.
In addition, our study focuses on hospitalized patients and thus does not capture data from outpatient or postacute care sources, which may influence overall associations. Despite these limitations, our study has important strengths, including a focus on the type of care received during hospital admissions, not just rates of admission as examined in other studies (2-4, 19-28). Further, we add rigor by expanding from studies of single health care systems to a multihospital statewide cohort. By integrating data on individual patient clinical factors with neighborhood-level social disadvantage factors, we are able to understand not only aspects such as exposure to SARS-CoV-2 necessitating admission but also access to and experiences of health care once COVID-19 is suspected or diagnosed.

In conclusion, our findings demonstrate that hospitalized patients with COVID-19 from more socially vulnerable neighborhoods are more likely to present with greater illness severity and to require more intensive treatment, but once hospitalized, they experience no differences in hospital mortality or discharge disposition. Policymakers should target more socially vulnerable neighborhoods to improve access to COVID-19 testing, treatment, and vaccination, as well as to identify and address social needs to ameliorate disparities in COVID-19 health outcomes.
Acid-mediated tumor invasion as a function of nutrient source location

Cancer cells have an altered metabolism that increases acid production, driving the extracellular pH significantly below normal. This leads to normal cell death and extracellular matrix degradation, allowing the formation of an interstitial gap between cancer and healthy cells. In this work, we present a mathematical model to study interstitial gap formation and evolution in a tissue with a non-uniform nutrient distribution. Our results indicate that the interstitial gap onsets at the region with the highest nutrient consumption. Due to the gap formation, cancer cells near the interface have more nutrient and space availability. This induces cancer cell reproduction and migration toward the nutrient source. Our simulations suggest a strong correlation between gap size and the distance to the nutrient source. Although we do not find a correlation between tumor growth speed and gap size, our results indicate a high risk of metastasis for tumors that develop an interstitial gap, emphasizing the importance of gap detection as a hallmark of cancer invasion.

I. INTRODUCTION

It is well known that cancer cells have an altered metabolism. Warburg was the first to observe that cancer cells use the glycolytic pathway, rather than oxidative phosphorylation, despite sufficient oxygen supply [1]. This phenomenon is known as the "Warburg effect". Although anaerobic glycolysis is very inefficient, since the adenosine triphosphate (ATP) production per molecule of glucose is significantly lower compared to the normal oxidation pathway [2], the acid-mediated tumor invasion hypothesis considers the Warburg effect a cancer cell advantage. This is because the Warburg effect increases acid production in tumor cells, driving the extracellular pH significantly below normal [3]. This leads to normal cell death and extracellular matrix degradation, which enhance cancer cell migration and invasion capabilities.

Gatenby and Gawlinski were the first to study the acid-mediated invasion hypothesis, considering a reaction-diffusion differential equation system [4]. They found that an interstitial gap between cancer and healthy cells is established for aggressive tumors. This initial model was the starting point for more generalized models considering, for instance, the early stages of cancer growth [5], cooperative interaction between tumor and stromal cells [6], and acid-mediated tumor cell death [7]. In general, the presence of a gap at the tumor-host interface is associated with aggressive tumors, and this gap is a consequence of high acidity levels. However, Ref. [6] states that, in some cases, increasing tumor acidity may prevent tumor invasion. Understanding how this tumor-host interface is related to tumor progression and to lactate production therefore has great diagnostic potential.
In general, most mathematical models consider a homogeneous environment or well-mixed cells [8]. However, the effect of inhomogeneities may change tumor shapes and invasion capabilities [8-11]. In this work, we generalize our previous model [12] by including new rules that introduce acid production. One of the key characteristics of our approach is the non-uniform nutrient distribution, since we consider only one blood vessel at the bottom of the network. Our results suggest that an interstitial gap is generated along the tumor-host interface and that its size is strongly correlated with the distance to the nutrient source. Although we do not find a strong correlation between gap size and growth speed, the presence of a gap suggests highly aggressive tumors in terms of larger metastatic spread probabilities, because there is a clearly privileged direction of growth toward the blood vessel, even if there is enough free space in other regions of the tumor.

II. THE MODEL

We consider the cancer growth model presented in Refs. [12,13] for describing avascular tumor growth. In this model the tissue is represented by a network in which each point is associated with a volume element that contains many cells, nutrient molecules, and an excess of H+ ions. Healthy, cancerous, and dead cells coexist at each node point, their concentrations being denoted by h(i, t), c(i, t), and d(i, t), respectively. Extracellular matrix degradation is represented by the presence of cell-free space, which we denote by e(i, t); since for each node the total concentration is considered to be uniform and normalized, the normalization equation is generalized to h(i, t) + c(i, t) + d(i, t) + e(i, t) = 1. Cancer cells produce an excess of H+ ions while they consume nutrients, due to their aerobic glycolytic metabolism. The acid diffuses in the tissue with diffusion coefficient α_W and has a constant degradation rate d_W. The excess H+ ion concentration at the i-node is denoted by W(i, t). For simplicity, we consider a single critical nutrient which diffuses through the tissue with diffusion coefficient α. We call it free nutrient and denote its concentration at the i-node by p(i, t). Free nutrient is absorbed by healthy cells at rate γ_0.

The rules governing cancer growth are as follows (a code sketch of the main update rules is given after the list):

R1. Feeding. Free nutrient is absorbed by cancer cells and converted into bound nutrient. The absorption rate is proportional to p(i, t) at low free nutrient concentrations and saturates to a constant value, γ_as, at high concentrations (see Ref. [14]). This rule only modifies the free and bound nutrient concentrations, where q(i, t) is the bound nutrient concentration and τ the time step.

R2. Consumption. Bound nutrient is consumed by the i-node cells at a rate β(i, t), in which the denominator c(i, t) has been included in the exponent because each cell can consume only its own bound nutrient. In order to model the effect of an aerobic glycolytic metabolism, an excess of H+ ions is produced at rate γ_W β(i, t) when cancer cells at the i-node consume nutrients. Thus q(i, t) → q(i, t) − τ β(i, t) c(i, t).

R3. Cancer cell death. If the average amount of bound nutrient per cell, q(i, t)/c(i, t), is below a given threshold Q_D, a fraction r_D c(i, t) of cancer cells dies. Cancer and dead cell populations are modified accordingly, where Θ(x) is the Heaviside step function and r_D is a constant.
R4. Healthy cell death. If the excess H+ ion concentration is above a given threshold Q_W, a fraction r_W h(i, t) of healthy cells dies. We assume that they die by activating their apoptotic pathways; thus, they can be absorbed [15], and cell-free space is generated by increasing e(i, t). The corresponding update transfers the dying fraction from h(i, t) to e(i, t), where r_W is a constant.

R5. Mitosis. If the average amount of bound nutrient per cell is above a given threshold Q_M (Q_M > Q_D), the concentration of cancer cells may increase by up to r_M c(i, t), depending on the space availability, where r_M is a constant. If e(i, t) = 0 we recover the mitosis rule of Refs. [12,13]. If e(i, t) ≠ 0, new cancer cells fill the available free space before replacing healthy cells. Thus, when the mitosis rule has to be applied at the i-node, cell-free space is reduced first. If e(i, t) < r_M c(i, t), the cancer cell concentration may still increase by up to r_M c(i, t) − e(i, t) by replacing healthy cells; thus, a fraction f(i, t) of healthy cells is transformed into cancer cells. If e(i, t) ≥ r_M c(i, t), f(i, t) = 0; it means that the cell-free space available at node i has been enough to contain all the new cancer cells. If e(i, t) < r_M c(i, t), f(i, t) takes the minimal value between h(i, t) and r_M c(i, t) − e(i, t). The cancer cell, healthy cell, and cell-free space concentrations are updated accordingly.

R6. Migration. If the average amount of free nutrient per cell, p(i, t)/c(i, t), is below a migration threshold, P_D, cancer cells at the i-node migrate to its neighbor nodes. If there is cell-free space available at the destination node, it can be filled by cancer cells. However, we still assume that healthy cells may be eliminated when cancer cells arrive, since healthy cells are less mobile and aggressive than cancer cells, in such a way that the total cell concentration is preserved, as in Refs. [12,13]. In other words, if there is not enough free nutrient, cancer cells at the i-node migrate to its neighbor nodes with migration coefficient α if either cell-free space or healthy cells are present at the destination node. In the corresponding discrete diffusion equation, α(i, t) = α Θ(P_D c(i, t) − p(i, t)), ℓ is the spatial discretization, and the healthy cell concentration may be computed from the conservation equation. Cancer cell migration depends not only on the local diffusivity but also on the state of the target site.

R7. Mobility. Since the presence of cell-free space reduces the pressure, we assume that a fraction of cancer cells may invade the cell-free space at their neighbor nodes. Thus, a node with cell-free space can be filled with cancer cells with a probability p_f, by choosing a random number of cancer cells, r_c, from a neighbor node that is also chosen randomly. The random number r_c lies between 0 and min(e(i, t), c(i′, t)), where the i- and i′-nodes are neighbor nodes. The parameter p_f is also related to cancer invasion, in the sense that the greater p_f, the more invasive the tumor. The transfer occurs when r < p_f, where r is a random number between zero and one and the i′-node is one of the i-node's nearest neighbors, chosen randomly. Rule R7 differs from rule R6 because the latter considers cancer cell migration due to a low nutrient concentration, while rule R7 only takes local space availability into account.
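A minimal Python sketch of how rules R1-R3 and R7 can be implemented is given below. Where the paper's display equations were not reproduced above, the functional forms are assumptions and are marked as such in the comments: in particular, the saturating absorption law in R1 and the per-cancer-load scaling of the absorbed nutrient are illustrative choices, and β(i, t) is passed in as a precomputed rate.

```python
import numpy as np

rng = np.random.default_rng(0)

def node_update_R1_R3(p, q, c, d, W, beta, tau, gam_as, K, gam_W, Q_D, r_D):
    """One time step of rules R1-R3 at a single node (scalar concentrations)."""
    # R1 feeding: free nutrient p is converted into bound nutrient q. The
    # Michaelis-Menten-like law gam_as * p / (K + p) is an assumption; the text
    # only states proportionality at low p and saturation at gam_as.
    absorbed = tau * (gam_as * p / (K + p)) * c   # per-cancer-load scaling assumed
    p, q = p - absorbed, q + absorbed
    # R2 consumption: bound nutrient decreases; acid is produced at rate gam_W*beta.
    q -= tau * beta * c
    W += tau * gam_W * beta * c
    # R3 cancer cell death when bound nutrient per cell drops below Q_D
    # (applied on the coarser cell-cycle schedule described in the text).
    if c > 0.0 and q / c < Q_D:
        dying = r_D * c
        c, d = c - dying, d + dying
    return p, q, c, d, W

def mobility_step_R7(c, e, p_f):
    """In-place rule R7 on (N+1, M+1) grids c (cancer cells) and e (free space)."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    N, M = c.shape
    for i in range(N):
        for j in range(M):
            if e[i, j] > 0.0 and rng.random() < p_f:
                di, dj = moves[rng.integers(4)]
                ni = (i + di) % N                 # lateral edges assumed periodic
                nj = min(max(j + dj, 0), M - 1)
                r_c = rng.uniform(0.0, min(e[i, j], c[ni, nj]))
                c[i, j] += r_c; c[ni, nj] -= r_c  # cells move into the free space
                e[i, j] -= r_c; e[ni, nj] += r_c  # per-node normalization preserved
```

Rule R6 can be realized with the same discrete Laplacian used for the nutrient field (see the diffusion sketch below), with the node-dependent coefficient α(i, t) = α Θ(P_D c(i, t) − p(i, t)) switching migration on only where the free nutrient per cell falls below P_D.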
As shown above, implementation of these rules generates a set of nonlinear difference equations. The free nutrient and excess H+ ion concentrations evolve through the corresponding discrete diffusion equations, with diffusion coefficients α and α_W, the cell absorption and production terms introduced above, and the constant degradation rate d_W for the acid.

In the next section we detail the initial and boundary conditions, as well as the implementation of this model. In this work, we consider an inhomogeneous distribution of free nutrient. This allows us to analyze cancer cell behavior under different local scenarios, and since the excess H+ ion concentration is correlated with the consumption of cancer cells, its production is also affected by local conditions.

III. RESULTS

In this work we represent the tissue of interest by a two-dimensional grid (N × M), with lattice constant ℓ and node points i = (i; j), with i = 0, 1, . . ., N and j = 0, 1, . . ., M. The nutrient is supplied by a single capillary vessel situated at the lower edge of the lattice. The nutrient concentration in the blood vessel is constant, p((i; 0), t) = P_0, with i = 0, 1, . . ., N. Periodic boundary conditions are used for the lateral edges. At t = 0 a cancer seed is placed at the center of a completely healthy tissue, and the tumor evolution starts. On the basis of our previous work [12], the temporal discretization was chosen to be τ = 0.001 h. Cell growth and division are regulated by the cell cycle. The cell-cycle duration is approximately 12 h in exponentially growing monolayer cultures [16]. Therefore, we inspect the threshold Q_M, and implement rules R3-R7, every 12000 steps.

Our two-dimensional grid represents a slab of tissue of size 2 cm × 1 cm, and we take N = 600 and M = 300. The values of most of the parameters related to nutrient diffusion, cancer and healthy cell consumption, cancer cell death, mitosis, and migration were already discussed in Ref. [12]. Based on Refs. [4,17], the parameter values corresponding to α_W and d_W were taken as 5 × 10^-6 cm^2/s and 10^-4 /s, respectively. The parameter γ_W was estimated in order to obtain concentrations of W similar to those reported in Ref. [18]. Although Table I summarizes the reference values of the parameters used in most of the simulations, we have also explored the effects of variations in some of them. Most of the parameters related to cancer growth without acid production were fixed, and we focus on the effects of changing the parameters related to cancer cell migration as well as acid diffusion and production. In particular, α, α_W, γ_W, and p_f are specified for each figure.
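As an illustration, the sketch below performs one explicit finite-difference step for a diffusing field on this grid with the boundary conditions just described: a fixed concentration P_0 along the vessel row j = 0 for the nutrient, an absorbing lower boundary for the acid (as stated in the note to Table I below), periodic lateral edges, and an assumed no-flux condition at the top edge (the text does not specify the top boundary). The explicit scheme is stable for τα/ℓ^2 < 1/4.

```python
import numpy as np

def diffuse_field(u, alpha, tau, ell, vessel_value=None, absorb_at_vessel=False):
    """One forward-Euler diffusion step on an (N+1, M+1) grid; axis 1 is j."""
    # Five-point Laplacian; np.roll makes the lateral (axis 0) edges periodic.
    lap = (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0)
           + np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4.0 * u) / ell**2
    u_new = u + tau * alpha * lap
    if vessel_value is not None:
        u_new[:, 0] = vessel_value   # nutrient: p((i; 0), t) = P0 at the vessel
    if absorb_at_vessel:
        u_new[:, 0] = 0.0            # acid W: absorbing lower boundary
    u_new[:, -1] = u_new[:, -2]      # assumed no-flux top edge
    return u_new
```

The consumption, production, and degradation terms of the difference equations (rules R1 and R2, and the constant acid degradation rate d_W) are then applied node by node after each diffusion step.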
Figure 1 shows snapshots of growing tumors, including live and necrotic cells, 90 days after seeding for p_f = 0.5, α = 8.3 × 10^-8 cm^2/h, and (a) α_W = 5 × 10^-6 cm^2/s, γ_W = 0.7; (b) α_W = 5 × 10^-6 cm^2/s, γ_W = 0.9; and (c) α_W = 2 × 10^-6 cm^2/s, γ_W = 0.7. At a given time, the gap between cancer and healthy cells is formed by all the nodes at which neither healthy nor cancer cells are present; those nodes are represented by black points in panels (d), (e), and (f). In other words, each of the lower panels of Fig. 1 shows the gap between cancer and healthy cells for the corresponding upper panel.

If we fix all the parameters and only increase γ_W, the tumor produces more acid, and thus healthy cells are more affected. On the other hand, if all parameters are fixed but α_W decreases, acid diffusion is slower, so there is a local increase in the acid concentration. In this sense, simulations with a larger γ_W or a smaller α_W have the same effect: in both cases the local concentration of W increases. Due to the asymmetric distribution of nutrient, there is more local nutrient availability in the lower half of the tissue, and thus more local W production. This is the reason why the gap starts at the bottom of the tumor. Depending on W production and diffusion, the gap can surround the whole tumor but, in general, it does not show a uniform width. In this model, nutrient and space competitions play an important role; when healthy cells are killed, there is more space and nutrient availability. In particular, cancer cells near the gap can consume and migrate more than those in the same conditions but without acid production. Our simulations indicate that tumors clearly show a privileged direction of growth toward the blood vessel, even if the gap is present around the whole tumor boundary.

In order to analyze this, we define the mean tumor radius as R = ⟨r⟩ = (1/N) Σ_i r_i, where the sum is over the N nodes at the tumor edge and r_i is the distance from node point i to the tumor center of mass. We also define the minimal (maximal) distance from the tumor to the vessel as j_min (j_max), where j_min (j_max) is such that c((i; j_min), t) ≠ 0 [c((i; j_max), t) ≠ 0] for some i, and c((i; j), t) = 0 for all i and j < j_min (j > j_max). With the same procedure, i_min and i_max are also defined. In order to compare results with and without acid production, the sub-index zero is used to identify realizations without acid production.

Figure 2(a) shows the ratio between R and R_0 versus time for the realizations shown in Fig. 1(a) (dotted line), 1(b) (dashed line), and 1(c) (solid line). A significant increment is observed in all the cases reported here. Figure 2(b) shows the respective ratios between j_min and j_min,0 (black) and between j_max and j_max,0 (gray), and Fig. 2(c) is an equivalent plot, but considering the horizontal direction.

FIG. 2. Ratio between tumor radii with and without lactate production (a). Ratio between the minimal (black) and maximal (gray) distance to the blood vessel with and without lactate production (b). Ratio between i_min (black) and i_max (gray) with and without lactate production (c). For all plots, dotted, dashed, and solid lines correspond to the realizations shown in Figs. 1(a), 1(b), and 1(c), respectively.
Figure 2 indicates that the presence of acid production clearly increases cancer cell migration toward the blood vessel [Fig. 2(b)], implying an increased risk of metastasis. Furthermore, the tumor becomes narrower in the presence of acid production [Fig. 2(c)].

As expressed above, the gap width is not uniform and, due to the tumor sprouting in the upper region, it would not be convenient to define an average gap width over the whole boundary. Our interest is focused on the lowest growth area, because this is related to the risk of metastasis. Furthermore, the main difference between tumor progressions with and without acid production is the propagation front toward the nutrient source. In order to define an average gap width, we consider only the nodes with i ∈ [i_cm − a, i_cm + a], where i_cm is the horizontal position of the tumor center of mass and a = (i_max − i_min) × 0.1. For those nodes, the vertical distance from the last node with at least one cancer cell to the first node with at least one healthy cell is taken, and the average gap width is defined as the average of those distances.

Figure 3 shows the average gap width versus the minimal distance from the tumor to the vessel, j_min, for different realizations. Figure 3 clearly shows three groups: the first has the lowest maximum and corresponds to α_W = 5 × 10^-6 cm^2/s, γ_W = 0.7 [as in Fig. 1(a)]; the curves in the middle represent the second group, which corresponds to α_W = 5 × 10^-6 cm^2/s, γ_W = 0.9 [as in Fig. 1(b)]; and the third group, corresponding to the highest maximum, includes realizations with α_W = 2 × 10^-6 cm^2/s, γ_W = 0.7 [as in Fig. 1(c)]. For all these groups there are realizations with the following p_f values: 0.25 (solid lines), 0.5 (dashed lines), 0.75 (dash-dotted lines), and 1 (dotted lines), and with α ≡ 8.3 × 10^-8 cm^2/h (black lines), 2 × α (dark gray lines), 3 × α (gray lines), and 4 × α (light gray lines). The parameters p_f and α were defined in rules R7 and R6; they represent the invasion probability and the cancer cell migration coefficient, respectively. Although experimental data correlate with the first group, the study of the second and third groups is interesting because the effects already present in the first group are amplified. Variations of P_D were not included because they were not relevant (P_D was defined in rule R6: cancer cell migration takes place if the local amount of free nutrient is less than P_D).

When tumors start growing there is no gap. Once cancer cells consume nutrients, the H+ ion concentration increases and healthy cells start dying. The onset of the gap increases nutrient availability, because dead healthy cells do not consume; this also increases cancer cell consumption and thus the local W concentration. This positive feedback increases nutrient flow from the vessel to the tumor and contributes to gap growth. However, after a while, nutrient availability is no longer enough to maintain all the new cancer cells, so cancer cell migration becomes more active and the gap size starts decreasing. Once the active front is near the vessel, the absorbing condition becomes more relevant and the local W concentration is reduced due to the loss of H+ ions through the vessel. When cancer cells arrive at the blood vessel, the gap size turns zero by definition.
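The two diagnostics used above, the mean tumor radius R and the average gap width over the central strip, can be computed directly from the concentration grids. The sketch below is a minimal Python implementation under the definitions given in the text; the edge-detection criterion (an occupied node with at least one empty 4-neighbor) is an assumption, since the paper does not spell out how edge nodes are identified.

```python
import numpy as np

def mean_tumor_radius(c):
    """R = <r>: mean distance from tumor-edge nodes to the tumor center of mass."""
    occ = c > 0
    ii, jj = np.nonzero(occ)
    i_cm, j_cm = ii.mean(), jj.mean()
    pad = np.pad(occ, 1, constant_values=False)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] & pad[1:-1, :-2] & pad[1:-1, 2:])
    ei, ej = np.nonzero(occ & ~interior)  # occupied nodes with an empty 4-neighbor
    return float(np.hypot(ei - i_cm, ej - j_cm).mean())

def average_gap_width(c, h, ell=1.0):
    """Average gap width over the central strip i in [i_cm - a, i_cm + a]."""
    ci = np.nonzero(c > 0)[0]
    i_cm = int(round(ci.mean()))
    a = int(0.1 * (ci.max() - ci.min()))
    widths = []
    for i in range(i_cm - a, i_cm + a + 1):
        col_c = np.nonzero(c[i] > 0)[0]
        if col_c.size == 0:
            continue
        j_c = col_c.min()                      # lowest cancer node (vessel at j = 0)
        col_h = np.nonzero(h[i, :j_c] > 0)[0]  # healthy nodes below the tumor front
        if col_h.size:
            widths.append(j_c - col_h.max())
    return ell * float(np.mean(widths)) if widths else 0.0
```

j_min, j_max, i_min, and i_max follow in the same way, as the extreme row and column indices of the nodes with c > 0.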
Figure 3 shows that, although the gap size depends on p_f and α, the main influence on the gap evolution is given by acid production and diffusion. In general, for each group, the larger the cell migration coefficient α, the smaller the gap. On the other hand, the lower the healthy cell resistance, the larger the gap.

IV. CONCLUSIONS

In this work we used a mathematical model to describe the formation and evolution of a gap between cancer and healthy cells due to an excess of H+ ion concentration. Our model is based on space and nutrient competition and considers a non-uniform distribution of nutrients. The gap onsets in the region with more consumption by cancer cells; depending on tumor activity, as well as on acid production and diffusion properties, the gap can surround the whole tumor-host interface. Our results suggest that, although the gap itself is weakly correlated with intrinsic cancer cell migration properties and healthy cell resistance, its presence indicates increased cancer cell mobility toward the regions with more nutrient availability.

In our model all cancer cells have the same migration coefficient. However, their effective mobility depends on the local conditions. Nutrient availability triggers reproduction and thus more consumption. The increase in nutrient consumption produces more acid; as a consequence, healthy cells die, generating more space and nutrient availability for cancer cells. This feedback increases nutrient flow toward the tumor and the activity of already active tumor regions, inducing cancer cell migration in those sectors. The combination of these key factors drives tumor evolution. Comparing these results with those for tumor evolution without an excess of H+ ions in the tissue, we can state that acid production increases the migration bias toward the nutrient source and thus also increases the probability of invasion by metastasis.

In this work, we consider that the bottom line of the network represents a blood vessel. In this scenario, the closer the tumor grows to the bottom, the more aggressive it is in the sense of metastasis. Our results indicate that acid production is associated with a faster propagation of the lowest front. If we analyze the minimal distance from the tumor to the vessel as a function of time (after a few weeks of seeding), we find that it is linear, indicating a constant growth velocity (results not shown). However, the gap size is not constant as a function of time. Moreover, it is easy to find similar growth velocities for very different gap-size evolutions. Most models that consider acid-mediated tumor invasion obtain a constant gap size for a given tumor dynamics, and on that basis a correlation with growth speed can be analyzed. The main difference with our findings is that the gap size is not constant and, for a given tumor dynamics, it depends on the distance to the nutrient source location. In particular, our model suggests that there is no strong correlation between the gap size and the tumor growth velocity as in Ref. [4]; the gap size mainly depends on the distance to the nutrient source and on the acid production. In this sense, it would be difficult to obtain information about growth speed from gap images. However, the presence of a gap can be considered a hallmark, because it indicates tumors with a high risk of metastasis; and if the gap is present in just a part of the tumor-host interface, this region should be associated with the most active tumor area, and it would also indicate the direction of growth.
TABLE I. Numerical values of computational parameters. Absorbing conditions are also considered for the cancer cell and excess H+ ion concentrations at the lower boundary, due to the presence of the blood vessel. Initially, we consider a healthy tissue with a stationary nutrient distribution.
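The stationary initial nutrient distribution mentioned in the table note can be precomputed by relaxing the balance between diffusion from the vessel and uptake by healthy cells. The sketch below assumes, for illustration, a linear uptake term γ_0 p (the text states only that healthy cells absorb free nutrient at rate γ_0) and uses Jacobi iteration.

```python
import numpy as np

def stationary_nutrient(N, M, alpha, gamma0, ell, P0, n_iter=20000):
    """Jacobi relaxation of alpha * laplacian(p) = gamma0 * p with p = P0 at j = 0."""
    p = np.full((N + 1, M + 1), P0, dtype=float)
    denom = 4.0 + gamma0 * ell**2 / alpha   # from the discrete balance equation
    for _ in range(n_iter):
        nb = (np.roll(p, 1, 0) + np.roll(p, -1, 0)
              + np.roll(p, 1, 1) + np.roll(p, -1, 1))
        p = nb / denom
        p[:, 0] = P0          # vessel row: constant nutrient concentration
        p[:, -1] = p[:, -2]   # assumed no-flux top edge; lateral edges periodic
    return p
```

Each Jacobi step solves the discrete identity α (Σ_neighbors p − 4p)/ℓ^2 = γ_0 p for the central value, so the iteration converges to a stationary profile that decays with height above the vessel.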
Cdk5 phosphorylates non-genotoxically overexpressed p53 following inhibition of PP2A to induce cell cycle arrest/apoptosis and inhibits tumor progression

Background: p53 is the most studied tumor suppressor, and its overexpression may or may not cause cell death depending upon the genetic background of the cells. p53 is degraded by the human papillomavirus (HPV) E6 protein in cervical carcinoma. Several stress-activated kinases are known to phosphorylate p53; among them, cyclin-dependent kinase 5 (Cdk5) is one of the kinases studied in neuronal cell systems. Recently, the involvement of Cdk5 in phosphorylating p53 has been shown in certain cancer types. Phosphorylation at specific serine residues in p53 is essential for it to cause cell growth inhibition. Activation of p53 under non-stress conditions is poorly understood. Therefore, the activation of p53 and the identification of the upstream kinases that phosphorylate non-genotoxically overexpressed p53 will be of therapeutic importance for cancer treatment.

Results: To determine the non-genotoxic effect of p53, the Tet-On system was utilized and p53-inducible HPV-positive HeLa cells were developed. p53 overexpression in HPV-positive cells did not induce cell cycle arrest or apoptosis. However, we demonstrate that overexpressed p53 can be activated, by inhibiting protein phosphatase 2A, to upregulate p21 and Bax, which causes G2 arrest and apoptosis. Additionally, we report that the upstream kinase cyclin-dependent kinase 5 interacts with p53 to phosphorylate it at the Serine20 and Serine46 residues, thereby promoting its recruitment on the p21 and bax promoters. Upregulation and translocation of Bax causes apoptosis through the intrinsic mitochondrial pathway. Interestingly, overexpressed activated p53 specifically inhibits cell growth and causes regression of in vivo tumor growth as well.

Conclusion: The present study details the mechanism of activation of p53 and puts forth the possibility of p53 gene therapy working in HPV-positive cervical carcinoma.

Background

p53, a major tumor suppressor and guardian of the genome, is mutated, deleted, or inactivated in various cancers [1-4]. Almost all human papillomavirus (HPV)-infected cancer cells contain wild-type p53. p53 is nonfunctional, as the HPV E6 protein abrogates its function either by ubiquitin-dependent and -independent degradation [5], by inhibition of acetylation, or by repressing p53-dependent downstream molecular pathways [6]. Though E6 associates with p53 for its degradation [4], there are contradictory reports on the inhibition and activation of p53 pathways by E6 [7,8]. Ectopic expression of p53 in cancer cells lacking p53, or harboring mutant and/or abrogated wild-type p53, has contrasting effects on cell fate. In p53-null cancer cells, p53 overexpression causes cell cycle arrest and apoptosis [9]. However, in virus-infected cells harboring wild-type p53, overexpression of p53 does not induce cell cycle arrest and apoptosis [10]. To date there are only three reports describing the consequences of p53 overexpression in HPV-positive cells, and the results obtained leave ample scope for debate [10-12]. Disparity among these reports may be due to differences in adenoviral multiplicity of infection. Taken together, the role of p53 overexpression in HPV-positive cells remains obscure. In HPV-positive cells, E6 works at different hierarchical levels in the p53 pathway. It degrades p53, p21, and Bax, causing impairment of cell cycle arrest/apoptosis [13,14] and making p53 activation more difficult.
With recent developments in efficient gene delivery systems and the prospect of gene therapy making a comeback [15], it is likely that p53-based therapy may become a reality [2]. p53 executes its tumor suppressor activity by triggering cell cycle arrest and apoptosis. However, the factors that facilitate the selection between cell cycle arrest and/or apoptosis are not well understood. It has been reported that p21 is the most important transcriptional target of p53 for causing cell cycle arrest [16], and that p53 executes apoptosis through Bax transcription [17].

To study the role of p53 in E6-positive cells, we developed novel isogenic HeLa cells with Tet-On-regulated p53 expression. The Tet-On system exhibits tight on/off regulation and is devoid of pleiotropic effects. Moreover, high induction levels are rapidly achievable, and the inducer, doxycycline (Dox), is well characterized. p53 overexpression does not promote cell cycle arrest and apoptosis in HeLa cells. We demonstrate that protein phosphatase 2A (PP2A) controls p53 functions and that its inhibition activates p53, causing cell cycle arrest/apoptosis in vitro and tumor growth inhibition in vivo. Interestingly, cyclin-dependent kinase 5 (Cdk5) regulates the p53 phosphorylation essential for its activation. Taken together, we propose that non-genotoxically overexpressed p53 can be activated by inhibiting its dephosphorylation in HPV-positive cervical cancer cells. This strategy may be of therapeutic importance in p53-associated gene therapy [18-20].

Plasmids and transfection

pC53-SN3 and pG13CAT were a kind gift from Dr. Bert Vogelstein, Johns Hopkins, Baltimore, MD. The p53 fragment of pC53-SN3 was subcloned into the BamHI site of pTRE and renamed pTREp53. pG13CAT contains 13 repeats of the p53 consensus binding site inserted at the 5' end of the polyomavirus basal promoter linked to the CAT reporter gene. Cells were co-transfected with 2 μg of pG13CAT and 0.5 μg of pEGFPC1, which serves as an internal control for transfection. The Bcl-2 fragment from pRc/CMVBcl-2 (a kind gift from Dr. S. Soddu, Regina Elena Cancer Institute, Italy) was excised with HindIII and cloned into pTRE2 to obtain pTRE2Bcl-2. Cells were transfected with either 2.0 μg (for a 35 mm plate) or 0.5 μg (for a 96-well plate) of plasmid using Lipofectamine2000 transfection reagent as per the manufacturer's instructions.

Clonogenic-survival assay

Cells (500) were treated with the indicated concentrations of Dox, OA, or Cdk2/5 inhibitor based on the experimental design and incubated for 48 h. Cells were further grown for 21 days, and thereafter colonies on the plate were stained with crystal violet.

Electrophoretic mobility shift assay (EMSA)

To visualize the DNA-binding activity of p53 in nuclear extracts of HTet23p53, HTet26p53, HTet43GFP, and HeLa cells, EMSA was performed. After treatment with Dox, cells were harvested for the preparation of cytoplasmic and nuclear fractions using a nuclear extraction kit as per the manufacturer's instructions (Chemicon, Billerica, MA). Nuclear lysates were incubated for 45 min at 4°C and cleared by centrifugation at 15,000 × g for 15 min at 4°C. Equal amounts of nuclear proteins were used for the binding reaction. Complementary oligonucleotides containing the sequences corresponding to the putative p53 binding site (forward, 5′-GAACATGTCTAAGCATGCTG-3′; reverse, 5′-CAGCATTCTTAGACATGTTC-3′) were annealed and 5′-end-labeled with 2 microcuries (μCi) of [γ-32P] ATP using 10 U of T4 polynucleotide kinase (Invitrogen) for 90 min.
The binding reaction was carried out in a final volume of 20 μl consisting of 10 mM Tris-HCl (pH 7.5), 50 mM NaCl, 1 mM DTT, 1 mM EDTA, 2.5% glycerol, 1 μg deoxyinosinic-deoxycytidylic acid [poly(dI-dC)], 300 ng BSA, 5 μg nuclear extract, and 2 μl of [γ-32P]-labeled oligonucleotide probe. Reaction mixtures were incubated for 20 min at room temperature. Samples were resolved on a native polyacrylamide gel. The gel was dried under vacuum at 80°C for 45 min in a gel dryer (Bio-Rad), and DNA-protein complexes were visualized by autoradiography.

Chloramphenicol acetyl transferase assay

Cells were co-transfected with pG13CAT and the pEGFPC1 expression vector using Lipofectamine2000 as described in the transfection section. At 18 h post-transfection, p53 was induced with Dox for 48 h with or without PFTα pretreatment for 1 h. The CAT assay was performed as described earlier [2], except that the reaction time was reduced to 30 min at 37°C. Spots were quantified by phosphoimager (Bio-Rad). GFP intensity was measured directly from the cell lysates to check and correct for equal transfection efficiency as well as to normalize the reporter activity. The fluorescence intensity of GFP in an equal amount of lysate was measured by fluorimeter (Fluoroskan Ascent FL, Fisher Scientific) with excitation at 485 nm and emission at 510 nm.

siRNA transfections

Cells were transfected with 100 nM control or p53 siRNA using Lipofectamine2000 [21]. Eighteen hours post-transfection, Dox was added with or without OA, and the cells were further incubated for 48 h. Thereafter, western blot or MTT assay was performed. To knock down PP2A and Cdk5, Cdk5 siRNA was transfected 12 h prior to PP2A siRNA transfection, and the cells were then incubated with Dox for 48 h.

Immunoprecipitation and chromatin-immunoprecipitation (ChIP) assay

After the indicated treatment, cells were lysed in RIPA buffer. An equal amount of protein (400 μg) was taken, and the lysates were pre-cleared with 50 μl protein A/G-plus agarose for 30 min. Fifty micrograms of lysate were run as input. Agarose beads were pelleted, and the supernatant was incubated with p53-specific antibody overnight at 4°C. Fifty microliters of protein A/G-plus agarose were added to the antibody-antigen complex with gentle shaking for 4 h at 4°C. The protein A/G-plus was separated by centrifugation at 4,000 rpm. The target and its associated proteins were dissociated and resolved on SDS-PAGE. The expression of Cdk5 and p53 was detected by western blotting. For the chromatin-immunoprecipitation assay, cells or homogenized tumors, which were earlier fixed with 1% paraformaldehyde for 15 min, were lysed with 500 μl of lysis buffer.

Mitochondrial and cytosolic fractionation

HTet26p53 cells were swelled in ice-cold hypotonic HEPES buffer [10 mM HEPES (pH 7.4), 5 mM MgCl2, 40 mM KCl, 1 mM PMSF, and protease inhibitor cocktail] for 30 min and centrifuged at 1500 rpm to pellet the nuclei. The resulting supernatant was centrifuged at 10,000 rpm to pellet the mitochondrial fraction. The supernatant was used as the cytosolic fraction, and the mitochondrial pellet was washed twice with PBS. This pellet was lysed in mitochondrial buffer [10 mM MOPS (pH 7.4), 1 mM EDTA, 4 mM KH2PO4, 1% NP-40, protease inhibitor cocktail] and centrifuged at 12,000 rpm for 30 min.

Immunostaining

Cells grown on Labtek chamber slides were treated with Dox for 48 h and processed for immunofluorescence study as described earlier [21]. Primary antibody against p53 (1:50) was added and incubated for 2 h at room temperature. Following incubation, cells were washed 5 times.
Fluorescein isothiocyanate (FITC)- or rhodamine-conjugated secondary antibodies (1:100) were added and incubated for 1 h at room temperature. After five washes, Vectashield mounting medium containing DAPI was added, and the slides were examined by confocal microscope (LSM510, Carl Zeiss, Germany). For MitoTracker Deep Red staining, after the indicated treatments cells were incubated with 200 μM of MitoTracker dye for 20 min. These were then fixed and processed for immunofluorescence study by incubation with a Bax-specific primary antibody and an FITC-conjugated secondary antibody. Slides were mounted with DAPI-containing medium, and images were acquired on a confocal microscope. Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) staining was performed as per the manufacturer's protocol (BD), except that the reaction time was increased to 3 h at room temperature. Cells were washed twice with binding buffer, and PI solution was added. Slides were washed, mounted, and observed under a confocal microscope (META, Carl Zeiss).

Tumor growth

HTet23p53 or HTet43GFP cells (5 × 10^6) in 100 μl PBS mixed with 100 μl Matrigel were injected s.c. into 4-6-week-old female NOD/SCID mice (Jackson Laboratories). A total of 12 mice were injected with HTet23p53 cells on the right flank, and 4 mice were injected with HTet43GFP cells on both flanks. Of the two groups, one received 500 ng/ml Dox in drinking water. Tumor development was monitored. After the tumor size reached 5-10 mm in diameter, OA (40 pg/mouse) was administered at the tumor site. Tumor sizes were measured weekly with a digital Vernier caliper (Sigma), and tumor volume was calculated by the formula V = 1/2 × (large diameter) × (small diameter)^2.

MTT assay, FACS analysis and western blotting

For the methylthiazole tetrazolium (MTT) assay, 7,500 cells were treated with Dox, OA, and/or Cdk2/5 inhibitor as per the experimental requirement and assayed for cell survival. For western blotting following the indicated treatments, cells were washed thrice with ice-cold phosphate buffered saline (PBS) and lysed in ice-cold lysis buffer [50 mM Tris-Cl, pH 7.5, with 120 mM NaCl, 10 mM NaF, 10 mM sodium pyrophosphate, 2 mM EDTA, 1 mM Na3VO4, 1 mM PMSF, 1% NP-40, and protease inhibitor cocktail (Roche Diagnostics, Penzberg, Germany)]. Equal amounts of protein were resolved on a polyacrylamide gel. Wherever possible, blots were stripped by incubating the membranes at 50°C for 30 min in stripping buffer (62.5 mM Tris-Cl pH 6.7, 100 mM mercaptoethanol, 2% SDS) with intermittent shaking. Membranes were washed thoroughly with TBS and reprobed with the required antibodies. Otherwise, gels run in duplicate were probed for the desired proteins by western blotting and the results then compiled. For FACS analysis, cells were plated at a density of 5 × 10^5 cells in 35 mm plates and allowed to adhere for 24 h. Cells treated as per the experimental requirement were harvested by trypsinization and processed for flow cytometric analysis. The fluorescence of propidium iodide (PI) was measured through a 585 nm filter in a flow cytometer (FACS Calibur, BD) for 10,000 cells. Data were analyzed using CellQuest software (BD). Details of these procedures are as published earlier [21,22].

TUNEL staining

To detect apoptotic cells, the APO-DIRECT TUNEL assay kit (BD) was used, followed by flow cytometric analysis as per the manufacturer's instructions with some modifications. Cells were incubated in DNA-labeling solution for 2 h at 37°C and analyzed by FACS Calibur (BD). PI stains total DNA, and FITC-conjugated dUTP stains apoptotic cells.
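As a small aside, the caliper-based tumor volume formula quoted in the Tumor growth section above is straightforward to apply; the sketch below is a hypothetical helper for illustration, not part of the study's own analysis code.

```python
def tumor_volume(large_diameter_mm: float, small_diameter_mm: float) -> float:
    """V = 1/2 * L * W**2: the standard caliper-based ellipsoid approximation."""
    return 0.5 * large_diameter_mm * small_diameter_mm ** 2

# Example: a 10 mm x 6 mm tumor gives 0.5 * 10 * 36 = 180 mm^3.
```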
Reverse-transcription PCR

Total RNA from the cells or tumor samples was extracted using TRIzol™ reagent, and PCR was performed as described [21] with the following primers: p53 (…).

Statistical analysis

Statistical comparisons were made using Student's paired t-test in SPSS 10.0 (SPSS Inc., IL), and a P value < 0.05 was considered significant.

Development and screening of the HeLa Tet-On p53-inducible cell system

Seven out of 24 p53-transfected clones (HeLaTet-On-p53 21 to 44) and nine out of 12 GFP-transfected clones (HeLaTet-On-pBIEGFP 41 to 52) exhibited induction in the presence of Dox (see Additional files 2A and 2B). Two clones, HeLaTet-On-p53-23S and HeLaTet-On-p53-26S (represented as HTet23p53 and HTet26p53), along with HeLaTet-On-BIEGFP-43 (represented as HTet43GFP), with low leaky expression and highly regulated expression, were selected for further studies. Growth properties of the clones over 6 days were similar to those of parental HeLa cells (see Additional file 3A). Also, protein concentration did not differ between the clones and the parental cells (see Additional file 3B). Dox up to 2000 ng/ml was non-toxic (see Additional file 3C). Tight regulation of p53 expression was confirmed by the addition of 100 and 1000 ng/ml of Dox. p53 expression was induced in response to Dox in a dose-dependent manner (Figure 1A). Also, GFP protein expression was tightly regulated (Figure 1B). As E6 downregulation induces cell death, E6 mRNA levels in the p53- and GFP-expressing clones as well as in parental HeLa cells were detected by RT-PCR. No alteration in E6 expression following treatment with Dox was observed (Figure 1C). p53 localization and nuclear retention are essential for the execution of its transcriptional and tumor suppressor activities. However, in cancer cells wild-type p53 is sequestered in the cytoplasm by various molecules which prevent its functioning [23]. p53 induced in response to Dox in a dose-dependent manner is predominantly localized in the nucleus (green represents p53 staining and blue represents DAPI staining of nuclear DNA) (see Additional files 4A and 4B). No alteration in p53 protein expression was detected in Dox-treated HTet43GFP (red staining) and parental HeLa cells (green staining) (see Additional files 4C and 4D). In HTet43GFP cells, GFP protein expression (green staining) is tightly regulated by Dox (see Additional file 4C).

p53 overexpression does not cause cell cycle arrest or growth inhibition in HeLa cells even though it possesses DNA-binding activity

PI staining for cell cycle analysis revealed no alteration in cell cycle phases in p53-overexpressing cells as compared to HTet43GFP or HeLa cells (Figure 2A). The long-term consequence of p53 overexpression was investigated by clonogenic-survival assay. Colonies of almost equal number and size were formed by p53-overexpressing HTet23p53 and HTet26p53 cells (Figure 2B). As Dox-induced p53 was localized in the nucleus, its in vitro DNA-binding activity was evaluated by electrophoretic mobility shift assay (EMSA) and its in vivo transcriptional activity by the chloramphenicol acetyl transferase (CAT) reporter gene. Increased binding of p53 to its consensus sequence was detected in HTet23p53 and HTet26p53, but not in HTet43GFP and HeLa cells, after Dox addition (Figure 2C). Also, there was an increase in CAT activity in p53-overexpressing HTet23p53 and HTet26p53 cells, and no increase was detected in HTet43GFP and HeLa cells (Figure 2D). The specificity of the CAT activity was confirmed by PFTα treatment.
Activation of p53 by inhibition of phosphatase

To inhibit the phosphatase, okadaic acid (OA), a potent and specific inhibitor of PP1A and PP2A, was utilized. The inhibitory effect of OA on PP1A and PP2A is concentration-dependent. The inhibitory concentration (IC50) for PP2A is 0.1-10 nM, and for PP1A it is 50-1000 nM [24]. We used 5 nM OA to specifically inhibit PP2A, and this concentration inhibits almost 100% of phosphatase activity in the cells [25]. Dose-dependent growth inhibition (2.5 nM OA caused 25%, whereas 5 nM caused 60%) was observed in p53-overexpressing HTet23p53 and HTet26p53 cells as compared to HTet43GFP and HeLa cells (Figure 1A). OA alone did not significantly affect cell survival (Figure 3A). A decrease in colony number and size in p53-overexpressing OA-treated cells (HTet23p53 and HTet26p53) was observed as compared to HTet43GFP cells (Figure 3B). To confirm that p53 indeed specifically inhibits cell growth in the presence of OA, p53 siRNA was transfected, which decreased the levels of overexpressed p53 as compared to transfection with Ctrl siRNA (Figure 3C inset). Silencing p53 reduced cell death by 30-35% in p53-overexpressing cells as compared to HTet43GFP cells or Ctrl siRNA-transfected cells (Figure 3C).

Activated p53 executes its anti-proliferative action through cell cycle arrest and apoptosis by specific promoter recruitment

To evaluate whether the growth inhibition is caused by cell cycle arrest or apoptosis, cell cycle analysis was performed. An approximately 10% increase in S phase and a 30% increase in G2 phase were observed in p53-overexpressing cells as compared to GFP-expressing cells (Figure 4A). p21, a p53 transcriptional target, is a dominant effector molecule that causes cell cycle arrest. Its transcript level increased significantly following OA treatment in p53-overexpressing HTet23p53 and HTet26p53 cells as compared to GFP-expressing cells. Under similar experimental conditions, no change in HPV18 E6 mRNA was observed (Figure 4B). TUNEL assay using FACS analysis indicated that 40% of cells were apoptotic in p53-overexpressing HTet23p53 and HTet26p53 cells treated with OA, in comparison to 8% in OA-treated HeLa cells (see Additional file 5). Though Dox treatment increases p53 transcript as well as protein levels, OA treatment does not lead to further enhancement of p53 transcript levels in HTet23p53 and HTet26p53 cells (Figures 4B and 4C). Interestingly, OA treatment significantly increases p53 protein levels (Figure 4C). Finally, a ChIP assay was performed to ascertain whether, in p53-overexpressing OA-treated cells, p53 is recruited on the promoters of its effector genes. The results obtained indicate that, in p53-overexpressing cells treated with OA, p53 indeed occupies both the p21 and bax promoters (Figure 4D).

Inhibition of Cdk5 following PP2A inhibition promotes cell survival

The importance of the kinases involved in the activation of p53 by OA treatment was explored by utilizing specific pharmacological inhibitors. Pre-treatment with a specific Cdk2/5 inhibitor increased cell survival, whereas ERK inhibition by U0126 did not have any impact on the survival (Figure 5A) of p53-overexpressing, OA-treated HTet23p53 and HTet26p53 cells as compared to HTet43GFP and HeLa cells. To confirm the functional importance of PP2A and Cdk5, the corresponding siRNAs were transfected into the cells. Cdk5 siRNA significantly decreased Cdk5 protein levels (Figure 4B, upper left panel).
Transfection with PP2A siRNA decreased its protein levels, whereas p53 protein increased, in addition to cell survival being inhibited (Figure 4B, upper right panel). Interestingly, siRNA-mediated knockdown of Cdk5 promoted the survival of p53-overexpressing HTet23p53 and HTet26p53 cells as compared to HTet43GFP cells (Figure 5B). Cdk5 inhibition by its inhibitor caused a significant increase in the number and colony size of p53-overexpressing HTet23p53 and HTet26p53 cells in spite of treatment with OA (Figure 5C). This result indicates that the activation of p53 is dependent on the functional level of Cdk5. In p53-overexpressing cells, OA treatment causes an increase in the apoptotic population, which diminishes in the presence of the Cdk5 inhibitor, as detected by TUNEL immunofluorescence staining (Figure 5D). Finally, to prove that the stabilization and activation of overexpressed p53 protein is dependent on the functionality of Cdk5, cells treated with OA were also exposed to the Cdk2/5 inhibitor. Treatment with OA increased the levels of overexpressed p53, whereas addition of the Cdk2/5 inhibitor diminished them (Figure 6A). Neither OA nor the Cdk2/5 inhibitor affects the level of Cdk5 protein per se. However, the level of p35 protein decreases in the presence of OA, and addition of the Cdk2/5 inhibitor reverts it to the basal level (Figure 6A). Finally, Cdk5 activity was confirmed by the increased phosphorylation of the Cdk5 tyrosine 15 residue following OA treatment (Figure 6B, compare lane 2 vs lane 1), which was diminished by the Cdk2/5 inhibitor (Figure 6B, compare lane 3 vs lane 1).

p53 executes apoptosis through the mitochondrial pathway

Bax and Bcl-2 levels were detected to determine the involvement of the mitochondrial pathway. Though Bax was upregulated following OA treatment, its upregulated status was reverted in the presence of the Cdk2/5 inhibitor in HTet23p53 and HTet26p53 cells overexpressing p53 (Figure 7A). Complementary to Bax upregulation, Bcl-2, which heterodimerizes with Bax and interferes with Bax homodimerization, was downregulated, and its level was normalized back to basal expression in the presence of the Cdk2/5 inhibitor in p53-overexpressing OA-treated cells (Figure 7A). No alterations in Bax and Bcl-2 were observed in HTet43GFP cells. Further, to confirm that the Cdk2/5 inhibitor actually inhibits apoptosis, PARP was detected by western blot. Cleavage of PARP into the p85 peptide was detected only in p53-overexpressing HTet23p53 and HTet26p53 cells (Figure 7A). Immunofluorescence studies revealed increased mitochondrial localization of Bax in p53-overexpressing OA-treated cells, which was diminished by the Cdk2/5 inhibitor (Figure 7B). HeLa cells served as control for these studies. In p53-overexpressing OA-treated cells, decreased mitochondrial cytochrome-C (Cyt-C) and increased cytosolic levels were observed (Figure 7C). Finally, to ascertain the mitochondrial apoptosis, Bcl-2 was ectopically expressed (Figure 7D inset). As expected, significantly fewer apoptotic cells were detected in p53-overexpressing OA-treated HTet23p53 and HTet26p53 cells in comparison to vector-alone-transfected cells or HTet43GFP and HeLa cells (Figure 7D).

Cdk5 interacts to phosphorylate p53

Under genotoxic stress conditions, activation of p53 is achieved by phosphorylation at the Ser20 and Ser46 residues [26,27]. To explore whether Cdk5 plays an important role in p53-overexpressing OA-treated cells, the phosphorylation status of Ser20 and Ser46 was detected in the presence or absence of OA.
Phosphorylation at the Ser20 and Ser46 residues of overexpressed p53 increased significantly in OA-treated cells, whereas in the presence of the Cdk2/5 inhibitor the phosphorylated forms diminished (Figure 8A). Under identical experimental conditions, no increased phosphorylation was detected in HTet43GFP cells. Finally, to ascertain whether Cdk5 associates with p53 to cause its phosphorylation, a co-immunoprecipitation experiment was performed by immunoprecipitating p53 with its specific antibodies; this immuno-complex was probed with Cdk5 antibody by western blotting, as described in the Materials and Methods section. Interestingly, Cdk5 was detected in the immuno-complex isolated from p53-overexpressing OA-treated HTet26p53 cells. In the presence of the Cdk2/5 inhibitor this interaction was reduced (Figure 8B).

Activated, and not merely overexpressed, p53 inhibits tumor growth

To validate that these in vitro findings also have in vivo implications, HTet23p53 or HTet43GFP cells were administered to NOD/SCID mice, which were monitored weekly for tumor growth. Up to three weeks after implanting the cells, tumors grew identically in mice supplemented with or without Dox. Thereafter, tumor growth was rapid in mice injected with HTet23p53 cells and treated with OA without being supplemented with Dox. Similarly, in mice injected with HTet43GFP cells, tumors grew rapidly in those treated with OA and supplemented with or without Dox. Interestingly, in mice injected with HTet23p53 cells and treated with OA, in addition to being supplemented with Dox, tumor growth was significantly retarded (Figure 9A). The reduced tumor growth is reflected in differences in the size and weight of the excised tumors (Figure 9B). Tumor samples were analyzed to ascertain the involvement of stabilized p53 and also the activation of its downstream growth-inhibitory factors. In tumor samples from OA-treated mice bearing HTet23p53 cells, p53 and Bax protein levels were higher (Figure 9C), and these did not increase in tumors of HTet43GFP cells. p53 transcript and protein levels were higher in HTet23p53-derived tumors from mice supplemented with Dox, and the levels were not enhanced further by OA treatment (Figure 9D). These results clearly indicate that the stabilization of p53 protein also occurs in in vivo tumors. Conclusively, the ChIP assays performed on lysates of tumors excised from mice provided with Dox in water, with or without OA treatment, revealed enhanced promoter occupancy of activated p53 on the p21 and bax promoters in vivo (Figure 9E).

Discussion

This study highlights the activation of overexpressed p53 and its effect on cell cycle arrest and apoptosis in HPV-positive HeLa cells. Under stress conditions, p53 is stabilized by phosphorylation and acetylation at serine/threonine/tyrosine and lysine residues, respectively. Serine phosphorylation at residues 6, 9, 15, 20, 33, 37, 46, 315, and 392 plays a crucial role, depending upon the nature of the stress, thereby causing cell cycle arrest and/or apoptosis [26,27]. Unlike stress conditions, wherein p53 induction promotes cell cycle arrest or apoptosis, this study demonstrates that p53 overexpression in HPV-positive cells does not induce cell cycle arrest or apoptosis, though it is reported to do so in other cancer cell types [16,17,28].
The reason for this difference could be HPV-mediated inhibition of the cellular machinery necessary for performing the critical post-translational modifications required for sequence-specific promoter selection of the genes responsible for the induction of cell cycle arrest or apoptosis [5,29]. Equilibrium between phosphorylation and dephosphorylation of a protein like p53 is essential for its normal functioning in the cells. Therefore, conditions causing a shift in the equilibrium between the phosphorylated and non-phosphorylated states will dictate the functionality of a protein and subsequently the cell's fate [30,31]. Protein phosphatases inactivate p53 by dephosphorylating it. Very recently, Lu et al. reported that PP2A inhibition also decreases p53 protein and its phosphorylation at Ser15 through activation of its negative regulator MDM2 [32]. On the contrary, we demonstrate here that inhibition of the phosphatase stabilizes and activates overexpressed p53, probably because of impairment of the functional MDM2 pathway in HPV-positive cells [33].

Phosphorylation of p53 at specific serine residues is essential for the induction of cell cycle arrest and apoptosis. Under stress conditions, p53 is phosphorylated at Ser20, located in the transactivation domain [26], thereby being stabilized and triggering downstream pathways. Ser46 phosphorylation, located in the DNA-binding domain of p53, plays a crucial role in the sequence-specific DNA binding required for the induction of cell cycle arrest and apoptosis [34]. In this study, we confirm that phosphorylation at these residues fully restores p53 functionality and induces cell death even under non-stress conditions. Stress-induced p53 is stabilized and activated through phosphorylation by various kinases such as ATM, ATR, Chk1, HIPK2, and Chk2 [26,27,34,35]. However, very little is known about the kinases that phosphorylate p53 under non-stressed conditions.

[Figure 7 caption, continued: (C) HTet23p53 cells were treated as described in (A) and processed for mitochondrial and cytoplasmic fractionation. Western blotting was performed with cytochrome-C antibody. (D) HTet23p53 cells transfected with pTRE or pTREBcl-2 plasmids were treated with 500 ng/ml of Dox for 48 h and processed for western blotting with Bcl-2-specific antibody. β-Actin served as a loading control (inset). HTet23p53, HTet26p53, HTet43GFP, and HeLa cells were transfected with pTRE or pTRE2Bcl-2 plasmids, treated with Dox with or without OA, and further incubated for 48 h. Cell viability was determined by MTT. Bars represent variation within the wells of an experiment done twice (± S.E.). *Represents P < 0.01.]

Cdk5 was originally discovered in HeLa cells [36], and its functional role as a p53 upstream kinase has been documented in neuronal cells [37]. The involvement of Cdk5 in the growth of breast and prostate cancer cells has been reported [38-40]. Recently, we reported that Cdk5 transactivates p53 in breast cancer cells under positive regulation of ERK following carboplatin treatment [40]. Cdk5 inhibition promotes the survival of p53-expressing cells. As PP2A inhibition restores the ability of overexpressed p53 to promote cell death, the upstream kinase that phosphorylates overexpressed p53 under non-stress conditions was investigated. In the present study we demonstrate that the level of p35, a Cdk5 activator, diminishes following inhibition of PP2A, with a simultaneous increase in the level of the more sustainable Cdk5 activator p25 generated by p35 cleavage [41].
Thus, an increased level of the Cdk5 activator p25 may facilitate Cdk5-mediated phosphorylation of overexpressed p53, which causes cell-growth inhibition. The decreased level of p35 protein in HTet43GFP cells does not cause cell-growth inhibition because its substrate (in this model, overexpressed p53) is unavailable. Although Cdk5 plays an important role in activating overexpressed p53, it is not involved in the proliferation of parental HeLa cells per se, despite the fact that E6 expression increases Cdk5 protein expression. p53 executes its apoptotic function through intrinsic or extrinsic pathways [42,43]. To further confirm the pathway involved, we investigated Bax, an important transcriptional target of p53 involved in promoting intrinsic mitochondrial apoptosis. Bax translocates to the mitochondrial outer membrane, causing MOMP and releasing cytochrome-C into the cytosol. Cells lacking Bax or those overexpressing Bcl-2 are profoundly resistant to a broad range of apoptotic stimuli, including chemotherapeutic drug treatment and serum starvation [17]. In HPV-positive cancers, Bcl-2 overexpression and Bax degradation by E6 facilitate cancer progression [14]. Here, we demonstrate that upregulated Bax translocates to mitochondria upon PP2A inhibition in p53-overexpressing cells, in a Cdk5 activity-dependent manner. Thus, only phosphorylated p53 triggers Bax transcription to increase its levels and cause apoptosis. In addition, the cell cycle arrest caused by inhibition of PP2A in p53-overexpressing cells may depend on transcriptional upregulation of the p21 gene. Collectively, these data also provide evidence for reactivation of the E6-disrupted p21 and Bax pathways in HPV-positive cells. Finally, we propose that Cdk5 interacts with p53 and phosphorylates its Ser20 and Ser46 residues. Phosphorylation restores the ability of overexpressed p53 to bind specifically to the p21 and bax promoters (Figure 5C). These findings provide novel insight into the regulation of p53 transactivation functions and propose PP2A to be a key player in modulating p53 functionality. The phosphorylation status of specific residues may be involved in promoter selection, and this proposition needs further investigation. Also, this is the first report that provides a mechanism for functional activation of p53 and details the essential modifications necessary for nongenotoxically overexpressed p53 to be able to execute its tumor suppressor functions in HPV-positive cells. Moreover, activation of overexpressed p53 without targeting viral oncogenes may have implications for the treatment of virus-infected carcinomas.

Figure 8. Cdk5 associates with p53 to phosphorylate Ser20 and Ser46. (A) HTet26p53 and HTet43GFP cells pretreated for 12 h with Cdk2/5 inhibitor were treated with Dox in the presence or absence of OA for 48 h and processed for western blotting with p53, pSer20 and pSer46 antibodies. (B) HTet26p53 and HTet43GFP cells treated as mentioned in (A) were processed for immunoprecipitation with p53 antibody, followed by immunoblotting for p53 and Cdk5. (C) Model for Cdk5-mediated p53 phosphorylation and activation. Cdk5 phosphorylates p53, which is dephosphorylated by PP2A. Inhibition of PP2A promotes phosphorylation of p53 at Ser20 and Ser46. Activated p53 is recruited on the p21 and Bax promoters to execute cell cycle arrest and apoptosis. Cdk5 phosphorylation is not dependent on ERK activation but on an unknown kinase(s).
Efforts toward newer approaches to target the p53 pathway and the usefulness of reactivating p53 pathways in the treatment of cancers are encouraging. Therefore, these findings could have therapeutic importance for the treatment of cervical cancers as well as other cancer types in which p53 is functionally abrogated.

Figure 9. Activated p53 inhibits tumor growth by transcriptional activation of its downstream pathways. (A) HTet23p53 (n = 12 mice) or HTet43GFP (n = 4 mice bearing 2 tumors each) were divided into two groups, and one group was given water supplemented with Dox. After tumor size reached 5-10 mm in diameter, OA was administered to all mice. Tumor growth was measured weekly and the average tumor volume was plotted (+S.E.). *Represents P < 0.05. (B) Tumor image and weight after mice were sacrificed. (C) Western blotting of p53 and Bax and (D) RT-PCR for p53 were performed in HTet23p53 and HTet43GFP tumor samples from mice with or without access to Dox, in the presence and absence of OA. β-Actin served as a loading control. (E) Tumor samples from mice given Dox with or without OA treatment were processed for ChIP assay. Input and eluted DNA were used for RT-PCR with p21 or bax promoter primers.
Gut distress and intervention via communications of SARS-CoV-2 with the mucosal exposome

Acute coronavirus disease 2019 (COVID-19) has been associated with prevalent gastrointestinal distress, characterized by fecal shedding of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) RNA or persistent antigen presence in the gut. Using a meta-analysis, the present review addressed gastrointestinal symptoms, such as nausea, vomiting, abdominal pain, and diarrhea. Despite limited data on the gut-lung axis, viral transmission to the gut and its influence on the gut mucosa and microbial community were found to be associated by means of various biochemical mechanisms. Notably, the prolonged presence of viral antigens and disrupted mucosal immunity may increase gut microbial and inflammatory risks, leading to acute pathological outcomes or post-acute COVID-19 symptoms. Patients with COVID-19 exhibit lower bacterial diversity and a higher relative abundance of opportunistic pathogens in their gut microbiota than healthy controls. Considering the dysbiotic changes during infection, remodeling or supplementation with beneficial microbial communities may counteract adverse outcomes in the gut and other organs in patients with COVID-19. Moreover, nutritional status, such as vitamin D deficiency, has been associated with disease severity in patients with COVID-19 via the regulation of the gut microbial community and host immunity. Nutritional and microbiological interventions improve the gut exposome, including host immunity, the gut microbiota, and nutritional status, contributing to defense against acute or post-acute COVID-19 in the gut-lung axis.

Introduction

Coronavirus disease 2019 (COVID-19) first occurred in 2019 and is now a worldwide pandemic with more than 15 million deaths (1). Typically, the presence of gastrointestinal signs or symptoms during COVID-19 has been associated with approximately 35-50% of COVID-19 cases. In a meta-analysis examining 4,243 patients, the pooled prevalence of gastrointestinal symptoms was 17.6% (2). Frequently observed gastrointestinal symptoms include anorexia, diarrhea, vomiting, and abdominal pain (3). With increasing COVID-19 severity, gastrointestinal symptoms become more apparent (4). The pathogenesis of COVID-19, including its gastrointestinal symptoms, remains elusive, despite tissue-specific immunofluorescence detection of SARS-CoV-2 binding to a specific receptor such as angiotensin-converting enzyme 2 (ACE2), which is predominantly expressed in the gastrointestinal tract (5,6). Numerous cohort studies have reported on patients with COVID-19 presenting gastrointestinal symptoms.

Clinical symptom-based association between viral infection and gastrointestinal adverse outcomes

First, we evaluated the clinical evidence using literature-based symptoms of gut distress in patients with COVID-19. The literature search for this association was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline. To address the clinical association between SARS-CoV-2 infection and gut distress, we performed a meta-analysis by collecting studies reporting gastrointestinal symptoms or clinician-observed features in patients confirmed by laboratory methods. To obtain an evidence-based minimum set of items according to the PRISMA guideline, gastrointestinal symptom-based case-control studies were selected from PubMed and LitCovid (n = 244), ScienceDirect (n = 759), and Google (n = 140). After de-duplication, all unique citations were independently screened by reviewers. In particular, articles that failed to meet the established inclusion criteria were excluded through screening of titles and abstracts, scrutiny, and consensus decision-making. We included studies with adequately available data on both control and case groups, but excluded case reports and studies of patients with symptoms other than gastrointestinal symptoms or with underlying diseases such as cancer, autoimmune disease, and metabolic diseases. Finally, eight articles were evaluated in the meta-analysis (Figure 1). The selected articles covered events in 14,188 patients, comprising 2,800 COVID-19-positive patients and 11,388 control patients from five countries: the USA, Portugal, China, Italy, and Australia. For efficient data extraction, we combined the symptoms of "abdominal pain" and "abdominal distension" into the more prevalent and widely reported symptom of "abdominal discomfort". Where studies reported one symptom "or" another (e.g., nausea or vomiting), we extracted the prevalence of both. We extracted grouped symptoms (e.g., any gastrointestinal symptoms) without further description or definition, rather than using the sum of all gastrointestinal symptom data, to prevent data overlapping between symptoms. The pooled prevalence of each symptom was estimated using the Metaprop package, and the variance was normalized using a random-effects model with the Freeman-Tukey arcsine transformation of the prevalence. Statistical heterogeneity was assessed by I², the proportion of total variation due to inter-study heterogeneity. Owing to the high levels of heterogeneity (96.0-98.1%) among studies, additional subgroup analysis, meta-regression, or sensitivity analysis could clarify the underlying causes behind the high heterogeneity between studies. The Newcastle-Ottawa Scale may afford an alternative tool for assessing the quality of case-control studies in meta-analyses (20). Taking all symptoms and prevalences together, the pooled ORs (95% CI: 1.04-2.24) indicated notable positive associations between COVID-19 and gut distress-associated symptoms despite the heterogeneity between studies.
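To make the pooling step concrete, the following is a minimal sketch, written in Python rather than the Stata/R Metaprop routine actually used, of random-effects pooling of prevalences with the Freeman-Tukey double arcsine transformation and a DerSimonian-Laird between-study variance; the per-study counts are hypothetical, and the simple sin^2(t/2) back-transformation is used in place of the full Miller inversion.

import numpy as np

def pooled_prevalence_ft(events, n):
    # Freeman-Tukey double arcsine transform stabilizes the variance of a proportion
    events, n = np.asarray(events, float), np.asarray(n, float)
    t = np.arcsin(np.sqrt(events / (n + 1.0))) + np.arcsin(np.sqrt((events + 1.0) / (n + 1.0)))
    v = 1.0 / (n + 0.5)                     # variance on the transformed scale
    w = 1.0 / v                             # inverse-variance weights
    t_fixed = np.sum(w * t) / np.sum(w)     # fixed-effect pooled estimate
    Q = np.sum(w * (t - t_fixed) ** 2)      # Cochran's Q
    df = len(t) - 1
    I2 = 100.0 * max(0.0, (Q - df) / Q) if Q > 0 else 0.0   # heterogeneity, %
    # DerSimonian-Laird between-study variance for the random-effects model
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)                 # random-effects weights
    t_re = np.sum(w_re * t) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    back = lambda x: np.sin(x / 2.0) ** 2   # simple back-transform to a proportion
    return back(t_re), (back(t_re - 1.96 * se), back(t_re + 1.96 * se)), I2

# hypothetical per-study counts of patients reporting a given symptom
prev, ci, I2 = pooled_prevalence_ft([30, 55, 12, 80], [210, 340, 95, 500])
print(f"pooled prevalence {prev:.3f}, 95% CI {ci[0]:.3f}-{ci[1]:.3f}, I2 {I2:.1f}%")

Applied symptom by symptom, such a routine would yield pooled prevalences analogous to those reported above; pooled odds ratios are obtained in the same way on the log-odds scale.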
Based on this literature-based assessment of the clinical outcomes, we further evaluated the pathological processes and mechanisms of lung-gut communication in patients with COVID-19.

Viral entry and translocation into the gut-lung axis

Airway entry and reverse translocation to the gut

Coronaviruses are enveloped single-stranded RNA viruses characterized by club-like spikes projecting from their surfaces, with a remarkably large RNA genome. The SARS-CoV-2 genome encodes four major structural proteins: spike (S), nucleocapsid (N), membrane (M), and envelope (E), each of which is essential for composing the viral particle (21). Phylogenetic analysis of the complete genome sequence of SARS-CoV-2 revealed that the new virus shares 89.1% nucleotide sequence identity with SARS-like coronaviruses detected in bats (22). ACE2, the functional receptor of SARS-CoV-1 and SARS-CoV-2, plays a crucial role in the pathogenesis of COVID-19, as it allows viral entry into human cells (23). Similar to SARS-CoV-1, the viral S protein of SARS-CoV-2 binds to ACE2 as a cellular receptor. Importantly, SARS-CoV-2 is more pathogenic, partly owing to its 10- to 20-fold increased binding affinity for ACE2 (24). This binding leads to viral entry into the host cell, in parallel with S protein priming by the host cell protease, transmembrane serine protease 2 (TMPRSS2).
The S glycoprotein contains two functional domains: an S1 receptor-binding domain (RBD) and a second S2 domain that mediates the fusion of the viral and host cell membranes (25). The SARS-CoV-2 S protein initially binds to the ACE2 receptor on the host cell through the S1 RBD. The S1 domain is then shed from the viral surface, allowing the S2 domain to fuse with the host cell membrane. This process depends on the activation of the S protein by cleavage at two sites (S1/S2 and S2') via the proteases furin and TMPRSS2. Furin-induced cleavage leads to conformational changes in the viral S protein, exposing the RBD and S2 domains. TMPRSS2-mediated cleavage of the SARS-CoV-2 S protein facilitates the fusion of the viral capsid with the host cell to permit viral entry (26). Exposure of the RBD in the S1 protein subunit results in an unstable subunit conformation; thus, during binding, this subunit undergoes conformational rearrangement between two states, known as the up- and down-conformations. The down-state transiently hides the RBD, whereas the up-state exposes the RBD but temporarily destabilizes the protein subunit (27-29). Within the trimeric S protein, only one of the three RBDs is present in a conformation accessible for binding to the ACE2 receptor. ACE2 is detected in nasal and bronchial epithelial cells. In addition to the upper respiratory tract, ACE2 is abundantly expressed on the surface of alveolar type II pneumocytes, which co-express several other genes involved in the regulation of viral reproduction and transmission, including TMPRSS2. Type II pneumocytes are well known to produce surfactant, maintain their self-renewal ability, and exert immunoregulatory functions. Importantly, these cells share the same basement membrane as the closely juxtaposed capillary endothelial cells, which also express high ACE2 levels. Therefore, type II pneumocytes, along with the neighboring capillary endothelium, could be primary sites for SARS-CoV-2 entry, resulting in damage to the alveolocapillary membrane with reactive hyperplasia of type II pneumocytes. As type II pneumocytes are known targets of viral entry and replication, this may lead to a vicious cycle of persistent alveolar wall destruction and repair, eventually culminating in progressive, severe diffuse alveolar damage. Upregulated ACE2 expression has been documented in the airways of patients with chronic respiratory disease who are smokers, which, together with disturbed ciliary movement and abnormal mucus viscosity, may increase disease vulnerability (30). However, clinical evidence indicates that smoking does not necessarily lead to increased vulnerability (31). Recently, a healthy human donor-based evaluation suggested that the virus could exploit goblet and ciliated cells in the nasal epithelia as entry portals, a plausible primary infection site (32). Considering variant-mediated adverse outcomes, Omicron is known to cause relatively mild symptoms compared with other variants of concern. The Omicron variant can enter epithelial cells through different binding proteins, such as cathepsins, and displays lower replication competence than other variants (33), potentially contributing to the attenuated severity of clinical outcomes. Airway particles, including viral particles, are entrapped in the airway mucosa and cleared via mucociliary transport.
However, this clearance system can be damaged following SARS-CoV-2 infection via dedifferentiation of multiciliated cells and subsequent attenuation of ciliary movement, as shown in a reconstructed human bronchial epithelium model (34). As guardians of the airway, alveolar macrophages can play crucial roles in removal via phagocytosis or via translocation from the peripheral lung to the larynx, with subsequent passage through the gut and fecal excretion (35). In addition to gastrointestinal translocation from the airway, the virus can enter water and food supply systems directly, ultimately reaching the gastrointestinal tract in humans (36,37). Viral particles that successfully reach the alveolar vasculature or translocate into the gut can systemically affect extra-airway tissues, including the gut, if they escape the immune system in circulation.

Vascular translocation and circulation of SARS-CoV-2

ACE2 receptors are also expressed in endothelial cells. It remains unknown whether vascular derangements in COVID-19 can be attributed to virus-mediated endothelial cell involvement. Intriguingly, SARS-CoV-2 can directly infect engineered human blood vessel organoids in vitro (38). In this in vitro experiment, to verify the possibility of COVID-19 transmission through endothelial tissue, the authors used human capillary organoids derived from induced pluripotent stem cells and infected with SARS-CoV-2 (39). Notably, human recombinant secretory ACE2 could inhibit infection in organoids mimicking human capillaries expressing CD31 and PDGFR. An initial study suggested that the SARS-CoV-2 S protein can bind to CD147 on the cell surface and subsequently enter blood cells, such as platelets and megakaryocytes. Megakaryocytes and platelets actively take up SARS-CoV-2 virions, possibly through an ACE2-independent mechanism. Based on in vitro antiviral tests, meplazumab, an anti-CD147 humanized antibody that blocks the interaction between the S protein and the CD147 cell surface receptor, could significantly inhibit viral entry into circulating cells. CD147 serves as a SARS-CoV-2 surface entry receptor leading to inflammation and thrombosis, a feature that differs from the common cold coronaviruses. Moreover, given that elevated blood sugar levels could upregulate CD147 expression, diabetes could be a potential risk factor for poor prognosis in patients with COVID-19 (40). Viral particles that survive translocation into the vasculature are then available for secondary tissue infection and subsequent inflammatory outcomes in the gut.

Gut entry via fecal-oral transmission

Owing to intestinal viral RNA shedding, there have been growing concerns that SARS-CoV-2 could be transmitted via the fecal-oral route, given that viral RNA has been detected in patient stool samples (41). It has been suggested that the presence of gastrointestinal symptoms is a likely indicator of viral RNA in the stool (2,42). In contrast, studies have failed to establish a statistically significant correlation between viral RNA and increased gastrointestinal symptom intensity (41,43). However, it has been suggested that stool samples may be positive for viral RNA even when the virus is undetectable in respiratory samples (2,44). It is well established that viruses can enter the gut, but most cannot survive in the digestive tract owing to the low pH of gastric fluid and the harsh intestinal environment comprising bile and digestive enzymes. Consistent with this, no infectious virus has been recovered from the fecal samples of patients with COVID-19.
Although stool is unlikely to contain infectious viruses (45), confirmatory assessments are warranted to comprehensively establish the risk of fecal-oral transmission during infection and its significance in the food system (46). Theoretically, SARS-CoV-2 directly invades the gastrointestinal epithelium through the ACE2 receptor. ACE2 is highly expressed in the upper and stratified epithelium of the esophagus, as well as in absorptive enterocytes derived from both the ileum and colon (5). In approximately 50% of COVID-19 cases, viral RNA was detected in fecal samples, even in the absence of gastrointestinal tract manifestations and after clearance of the respiratory infection, thereby suggesting asymptomatic SARS-CoV-2 infection of the gut and the possibility of fecal-oral transmission (47). However, considering the limited data available, a fecal-oral transmission route clarifying enteric symptoms in patients with COVID-19 is yet to be established. Moreover, it is also challenging to rationalize that SARS-CoV-2, as an enteric virus, passes through the stomach and reaches the intestine to infect intestinal cells. For successful infection via fecal-oral transmission, the virus must overcome biological barriers, such as stomach acid and intestinal bile salts, after ingestion. Coronaviruses can undergo complete inactivation at pH 2.26 and 4.38 at 37°C (48). Although the virus can survive under wet or dry conditions for up to 3 days, it was found to survive at pH 2.2 for up to 1 h only at high concentrations (49). Bile salts are one of the various mechanisms mediating host defense, exerting a detergent action against the lipid layer integrity of infectious agents (50). SARS-CoV-2 is an enveloped virus with an outer lipid-containing membrane (23). Bile acid is known to be effective against viruses with lipid envelopes, but envelope-deficient variants are resistant to its detergent action. Considering all the evidence, in addition to airway viral infection, the oral ingestion of surviving viral particles may contribute to gastrointestinal distress.

Impact of SARS-CoV-2 on mucosal defense

SARS-CoV-2-mediated gut barrier distress

The gut is divided into several anatomical barriers, each of which plays a vital role in defending against foreign materials, such as pathogens and other noxious stimuli. The mucus layer is the first line of defense, composed of mucus, antibodies, and other antimicrobial factors (51). It functions as a physical barrier protecting epithelial cells from microbes (bacteria, fungi, and viruses) and large molecules, such as food particles (52). The second layer, beneath the mucus layer, comprises the glycocalyx, highly glycosylated proteins lining the epithelial cell surface. These cell membrane-bound glycoproteins, like the mucus layer, act as a physical barrier that prevents pathogenic microorganisms from reaching the gut epithelial cellular monolayer and invading the submucosal tissues (53). The epithelial cell barrier is another defense mechanism against gut microbes and luminal antigens, modulating the epithelial junctional molecules or transmitting danger signals to the underlying mucosal immune system while facilitating the transport of nutrients and water (54). Epithelial cells have pattern recognition receptors (PRRs), such as Toll-like receptors (TLRs), which allow the recognition of microbial antigens.
Enterocytes (intestinal epithelial cells) are the most common cell type in the mucosal epithelial layer, accounting for 90% of cells (55). Enterocytes are well-known absorption sites and important components of the gut barrier. Gut epithelial cells can also interact with SARS-CoV-2 through their highly expressed ACE2 (24,56,57). SARS-CoV-2 has been shown to infect intestinal organoids (58). Furthermore, TMPRSS2, which is also highly expressed in enterocytes of the ileum and colon (57,59), reportedly participates in priming the SARS-CoV-2 S protein and facilitates viral entry into cells (24). Accordingly, ACE2 and TMPRSS2 are promising targets for intervention against SARS-CoV-2 (60), despite limited evidence on the efficacy of blockers targeting the two proteins (61). Intestinal viral infections may damage the epithelial barrier. For example, Middle East respiratory syndrome-related coronavirus was shown to disrupt the gut epithelial barrier in an animal model (62). Mechanistically, SARS-CoV infection can lead to the redistribution of PALS1, a tight junction protein, and subsequent disruption of epithelial integrity in the gut and lungs. Moreover, SARS-CoV-2 RNA and viral nucleocapsid protein persisted in mucosal tissues and cells, including the gut epithelium and CD8+ T cells, of patients with inflammatory bowel disease nearly 7 months after SARS-CoV-2 infection (63). Consistent with airway infection, the Omicron variants showed reduced levels of cytotoxicity- and damage-associated markers in infected gut organoids compared with the wild-type virus and delta variants (33). In contrast, the delta variant-infected mini-gut exhibited active clustering of infected gut cells and relatively high replication efficacy. Since active invasion by the Omicron variant was extremely scarce and lumen-restricted in the gut model, this variant is not assumed to affect the submucosal compartment. Therefore, different strains may have different relative tissue tropisms and invasiveness, potentially leading to strain-specific clearance rates and clinical symptoms in the gut. Fecal SARS-CoV-2 RNA has been detected in 50% of patients experiencing gastrointestinal symptoms, such as abdominal pain, nausea, and vomiting, within the first week after diagnosis (64). In particular, 12.7% (8.5-18.4%) of subjects displayed persistent fecal shedding of SARS-CoV-2 RNA even 4 months after diagnosis, without ongoing shedding of oropharyngeal SARS-CoV-2 RNA. Although the above-mentioned study failed to link mucosal viral antigens with the severity of acute COVID-19, it is necessary to address the roles of mucosa-persistent antigens in mucosal defense, recurrence, and disease progression as post-acute sequelae of COVID-19 (PASC). After acute COVID-19, most patients with inflammatory bowel disease presented persistent SARS-CoV-2 antigens in their gut mucosa, irrespective of inflammation levels, potentially contributing to PASC symptoms (63). Despite the lack of mechanistic evidence, it has been proposed that SARS-CoV-2 may increase intestinal permeability, potentially by damaging enterocytes and the epithelial layer (65), necessitating further molecular investigation.

Mucosal and systemic innate immunity to SARS-CoV-2

Coronaviruses are known to cause airway damage and lead to pneumonia with imbalanced and hyperactive immune responses (22).
Increased proinflammatory cytokines and lymphocytopenia have been associated with SARS-CoV-2 infection (66). An unbalanced immune response and excessive inflammatory cytokine secretion, known as a "cytokine storm", have been associated with disease severity and worse prognosis in patients with COVID-19, including multiorgan failure (67,68). Of 197 patients, approximately 34.5% presented neutrophilia (69), which is known to trigger acute respiratory distress syndrome (ARDS) and sepsis in patients with COVID-19. Secondary hemophagocytic lymphohistiocytosis (SHLH), an underrecognized hyperinflammatory syndrome, could also be a significant factor in the development of COVID-19, given that SHLH can cause hypercytokinemia-related fatal and fulminant multiorgan failure (70). SARS-CoV-2 can spread via respiratory droplets, contact, and the fecal-oral route. Viral replication commences in the nasopharynx and upper respiratory tract and continues through the lower respiratory tract and gastrointestinal mucosa (5). Monocytes, macrophages, and dendritic cells (DCs) can serve as primary hallmarks of SARS-CoV-2 infection, given that they link innate and adaptive immunity and play an important role in the antiviral response (71-73). Although the precise correlation between DCs and SARS-CoV-2 in the mucosa has been poorly explored, SARS-CoV-2 accelerates the activation of PRR-linked signaling, including the NLRP3 inflammasome, and occasionally leads to cytokine release syndrome (CRS) via robust production of proinflammatory mediators, such as interleukin (IL)-6, granulocyte-macrophage colony-stimulating factor, IL-1β, and tumor necrosis factor (TNF)-α (74). Therapeutic agents, such as anti-IL-6R, which can target macrophage-related activity, could be crucial interventions against the cytokine storm that occurs during severe SARS-CoV-2 infection (33). In addition to the phagocytic system, natural killer (NK) cells have been associated with a severely poor prognosis of SARS-CoV-2 infection in the presence of functional exhaustion. Among the various cytokines produced during early severe COVID-19, interferon (IFN)-α expression markedly correlated with the severity of COVID-19 (75,76). According to single-cell transcriptomic analysis based on two COVID-19 cohorts, IFN-α directly suppressed IFN-γ production by NK cells (76). Moreover, exhausted NK cells reportedly express CD94/NK group 2 member A (NKG2A), which functions as an inhibitory receptor that reduces the production of CD107a, IFN-γ, IL-2, granzyme B, and TNF-α. Therefore, improving NK cell-mediated defense might be a promising strategy during early severe cases of SARS-CoV-2 infection (77,78). Active NK cells recognize viral infection and transmit death signals to infected cells in the mucosa. Moreover, NK cells may facilitate mucosal phagocyte-induced viral clearance via the production of antiviral cytokines, including type I interferons. However, exhausted NK cells would fail to defend against SARS-CoV-2 in the mucosa. An increased neutrophil-to-lymphocyte ratio and elevated levels of IL-6 can indicate poor prognosis and disease severity. Increased serum levels of proinflammatory cytokines, such as IL-6, IL-7, IL-1β, IL-2, and IL-10, can induce a cytokine storm and cause serious damage, more destructive than the coronavirus itself. Elevated proinflammatory cytokine levels have been linked to viral sepsis, respiratory failure, shock, and even death if severe (84).
Therefore, addressing lymphopenia and the cytokine storm could prevent severe complications associated with the coronavirus. In addition to the direct infective actions of SARS-CoV-2, respiratory virus-responsive mucosal and systemic acquired immune responses would affect disease progression in extra-airway tissues. Cytotoxic CD8+ T cells directly neutralize infected cells, while CD4+ T cells initiate a humoral response by cooperating with B cells (79,80). During severe SARS-CoV-2 infection, lymphopenia is accompanied by a marked reduction in CD4+ and CD8+ T cells, along with elevated neutrophil counts (81-83). Following the appearance of COVID-19 symptoms, the antibody response increases after 4-8 days, and IgM becomes predominant (85), followed by 10-18 days of persistent IgA and IgG production. IgA is crucial in mucosal defense, neutralizing SARS-CoV-2 and weakening the inflammatory risk (86). The antigen can attach to intestinal epithelial cells or microfold (M) cells, followed by transport into lymph nodes and activation of IgA-secreting B cells in the lymphoid tissue (87,88). For SARS-CoV-2, the antigen amount and quality critically impact neutralization. Antibodies should be specific to the S protein and can be detected in the serum for 2-3 weeks post-infection (89,90). Human convalescent serum transfer has been proposed as a potential strategy to prevent and treat severe cases of COVID-19, with its therapeutic value documented in several clinical trials (84,91-94). An important challenge in overcoming COVID-19 is viral elimination from the mucosa through antibody-associated shedding. Given that infectious agents trigger mucosal immunity (95), mucosal vaccination could be a promising strategy to evoke IgA antibodies at both the mucosal surface and in the systemic immune system (96). Importantly, mucosal vaccination may facilitate IgA-virus complex formation in the mucosa of respiratory and intestinal tissues (97). As current modes of COVID-19 vaccination are predominantly based on systemic antigen exposure, efficient strategies are needed to develop promising mucosal vaccination against the continuously evolving SARS-CoV-2.

Involvement of the gut microbial community in SARS-CoV-2 pathogenesis

Following initial lung infection, SARS-CoV-2 invades the gut mucosal immune barrier, directly impacting intestinal physiology. Moreover, intestinal tissue damage may facilitate gut dysbiosis. It has been reported that commensal microbiota in the lung and gut can counterbalance viral infection by modulating immune responses in a homeostatic manner (98,99). For instance, viral infection-induced changes in pulmonary tissues and other microenvironments may alter the structure and function of the gut microbiota (98). In a mouse model, seasonal influenza infection of the respiratory tract increased the number of Enterobacteria in the gut microbiota and decreased the numbers of Lactobacillus and Lactococcus (99). Furthermore, intestinal dysbiosis has been associated with increased mortality following respiratory infections, probably due to deregulated airway immune responses. Inflammatory dysbiosis of the gut microbiota and epithelial damage reportedly enhance ACE2 levels, increasing the risk of SARS-CoV-2 infection in the gastrointestinal tract, as well as dissemination to other sites via the circulation (5,100).

Microbiota-linked prediction of adverse outcomes

Various studies have revealed how SARS-CoV-2 infection can alter the gut microbiota and how these alterations are associated with adverse outcomes in humans.
In particular, viral infection-altered gut communities were shown to be associated with inflammatory status in patients with COVID-19. Serum-based proinflammatory biomarkers positively correlated with increased levels of some consortia, including Ruminococcus gnavus, during viral infection, whereas Clostridia were negatively correlated (101). Moreover, disease severity could be correlated with the abundance of Coprobacillus, Clostridium ramosum, and Clostridium hathewayi (102). It has been reported that approximately 50% of patients with COVID-19 display stool positivity for SARS-CoV-2 even in the absence of gastrointestinal manifestations and after recovery from respiratory SARS-CoV-2 infection (47), indicating the presence of persistent gut infection. Based on viral infectivity prediction using metagenomic analysis of the fecal SARS-CoV-2 genome, patients with COVID-19 demonstrate an increased functional capacity for nucleotide and amino acid biosynthesis and carbohydrate metabolism (47). An in-depth assessment demonstrated an evident correlation between viral infection signatures and the enrichment of gut pathogens, including Collinsella aerofaciens, Collinsella tanakaei, Streptococcus infantis, and Morganella morganii, even in the absence of gastrointestinal manifestations (47). Although the Omicron variant is known to cause relatively mild symptoms with marginal invasiveness in humans and gut models, all SARS-CoV-2 variants of concern remarkably disrupted the mouse gut microbiota (103). Surprisingly, Omicron variant infection led to long-lasting instability in the gut microbiota and a notable depletion of Akkermansia muciniphila, even in the absence of severe lung pathology. In addition to host markers and disease severity, the fecal viral footprint was notably associated with dysbiosis-linked alterations in gut bacterial communities, paving the way for novel diagnostic tools for potential relapse or chronic adverse outcomes in post-COVID or long COVID conditions, potentially with differential responses to SARS-CoV-2 variants. In addition, SARS-CoV-2 infection can alter the gut virome community. Although patients with COVID-19 exhibit a reduced abundance and under-representation of RNA viruses and multiple bacteriophage lineages (DNA viruses), they show notable gut enrichment of environment-derived eukaryotic DNA viruses, along with crAss-like phages and the Myoviridae and Siphoviridae families, even 30 days after symptom resolution (104,105). Viral genes involved in bacteriophage integration, DNA repair, metabolism, and virulence are predicted to contribute to host stress and inflammation; however, some viral consortia are inversely associated with blood levels of proinflammatory proteins, white cells, neutrophils, and disease severity (104,105). These resident enteric viruses maintain a low level of immune stimulation and are responsible for protective and regulatory effects in the intestine (106). However, given the limited data on the effects of viral composition on microbiota composition and activity during SARS-CoV-2 infection, advanced inter-kingdom associations need to be addressed to improve integrated prognosis and intervention against adverse outcomes in patients with post-COVID or long COVID.

Microbiota-based probiotic counteraction against infection

In patients with COVID-19, reduced beneficial commensals were directly correlated with disease severity and complications (107).
It is speculated that a decline in the probiotic intestinal microbiota would fail to effectively control excessive proinflammatory immune reactions, leading to the subsequent progression of SARS-CoV-2 infection. Considering the immunomodulatory cytokine production in response to beneficial commensal bacteria, the abundance of Lactobacillus species decreased in correlation with anti-inflammatory IL-10 levels during SARS-CoV-2 infection (108). Therefore, serum IL-10 can be employed as a diagnostic indicator to assess disease progression and severity in high-risk patients with COVID-19 (108). Moreover, disease severity is inversely correlated with the abundance of Faecalibacterium prausnitzii, an anti-inflammatory bacterium (102), and subjects with low levels of viral infectivity features presented a relatively high abundance of short-chain fatty acid-producing beneficial bacterial communities, including Parabacteroides, Bacteroides, Alistipes, and Lachnospiraceae, even in the absence of gastrointestinal manifestations (47). Furthermore, several gut immune-modulating commensal bacteria, including Faecalibacterium prausnitzii, Eubacterium rectale, and bifidobacteria, were inversely associated with levels of proinflammatory mediators, tissue injury markers (lactate dehydrogenase, aspartate aminotransferase, and gamma-glutamyl transferase), and disease severity (109). Accordingly, these immune-modulating bacteria can potentially counteract proinflammatory and toxic insults during viral infection, providing novel insights into interventions against adverse outcomes during PASC conditions. Patients with PASC tended to display high levels of Ruminococcus gnavus and Bacteroides vulgatus and low levels of Bifidobacterium pseudocatenulatum and Faecalibacterium prausnitzii (110). Considering the inflammatory state due to reduced levels of the probiotic commensal community, patients with COVID-19 are speculated to be remarkably susceptible to infection by opportunistic bacteria, such as Klebsiella pneumoniae, Streptococcus, and Ruminococcus gnavus, particularly during the hospitalization period (102). Likewise, patients with PASC were found to be markedly susceptible to nosocomial gut pathogens, such as Clostridium innocuum and Actinomyces naeslundii (110). These opportunistic bacteria can potentially trigger the production of proinflammatory cytokines, such as IFN-γ and TNF-α (102). Overall, the reduced abundance of probiotic gut bacteria can be associated with severe inflammatory responses, via the excessive production of proinflammatory cytokines, and severe complications in high-risk patients with COVID-19. Therefore, remodeling or supplementation with beneficial microbial communities are promising interventions against gut mucosal distress in patients with COVID-19.

Effects of nutritional status on susceptibility to COVID-19

Association of nutritional deficiency with disease severity during viral infection

Considering the gastrointestinal involvement in SARS-CoV-2 infection, dietary components, including nutrients, bioactive natural products, and probiotics, were assumed to contribute to immune regulation in response to viral infections. In the French NutriNet-Santé cohort study assessing 7,766 adult patients with anti-SARS-CoV-2 antibodies, dietary intake of vitamin C, vitamin B9, vitamin K, fiber, and fruits and vegetables was associated with lower susceptibility to SARS-CoV-2 infection, whereas dietary intake of calcium and dairy products did not contribute to the infection risk (111).
The beneficial effects of vitamin C have been well documented in various in vitro and in vivo studies. Exposure to high doses of vitamin C can induce antiviral actions against various viruses (112). In clinical trials, treatment with a high dose of intravenous (IV) vitamin C decreased vasopressor requirements and improved mortality in patients with septic shock (113). In addition to intervention against non-communicable chronic diseases via regulation of inflammation and complications, various dietary components, including vitamin C treatment, can contribute to the supportive clinical management of infectious diseases, such as COVID-19 (114). In addition to vitamin C, multiple lines of evidence suggest a potential link between vitamin D and SARS-CoV-2 infection (115-118). Vitamin D is an essential lipid-soluble nutrient absorbed from dietary sources in the proximal small intestine, contributing to skeletal maintenance, intestinal calcium absorption, and immune regulation (119). Although vitamin D deficiency was associated with respiratory distress in patients hospitalized for pneumonia (120), the association between low vitamin D intake and disease severity in COVID-19 cases remains poorly explored (121). A retrospective cohort study revealed that vitamin D deficiency was positively associated with an increased COVID-19 risk (115). Another retrospective case-control study assessed the possible influence of vitamin D status on disease severity in hospitalized patients with COVID-19 (116). Serum 25-hydroxyvitamin D (25OHD) levels were lower in hospitalized patients with COVID-19 than in population-based controls, and these patients presented a higher prevalence of vitamin D deficiency (116). Severe vitamin D deficiency (based on a cut-off of ≤10 ng/dL) was noted in 24.0% of patients in the COVID-19 group compared with 7.3% in the control group (117). Another study, by the University of Florida, revealed that patients with vitamin D deficiency were five times more likely to be infected with COVID-19 than those without deficiency after adjusting for age groups (118). Taken together, nutritional status, such as vitamin D deficiency, may represent a risk factor for COVID-19 susceptibility and severity (Figure 2). Moreover, the association of the amount, duration, and interval of nutrient intake with disease severity and prevalence needs to be examined. In addition, the specific pathophysiological mechanisms of dietary factor-linked protection should be examined to clarify adverse outcomes in patients.

Nutritional intervention against gut defense deterioration during viral infection

Vitamin D may counteract gut distress by improving the mucosal and epithelial barriers. Vitamin D supplementation and activation of its nuclear receptor (vitamin D receptor [VDR]) can improve epithelial barrier integrity by enhancing the expression of VDR-associated intracellular junction proteins, including occludin, claudin, and zonula occludens, in the distressed gut (122,123). Conversely, vitamin D deficiency may compromise the mucosal barrier (124), leading to increased susceptibility to mucosal damage and infection risk in patients with COVID-19. Moreover, the synthesis and secretion of antimicrobial peptides are elevated via vitamin D metabolite-linked VDR activation or subsequent activation of TLR1/2 signaling in the mucosa (125,126), thereby allowing the epithelium or mucosal immune system to regulate excessive commensal bacteria and pathogens.
Moreover, vitamin D supplementation can activate non-canonical pathways involving the aryl hydrocarbon receptor (AhR), facilitating epithelial tight junctions and mediating anti-inflammatory and antioxidant actions in the injured gut barrier (127). Collectively, vitamin D and the activation of its nuclear receptors, including VDR or AhR, could improve the gut mucosal and epithelial barrier during SARS-CoV-2 infection.

Nutritional intervention against gut dysbiosis during viral infection

In addition to the direct effects of vitamin D on gut cell physiology, nutritional supplementation is speculated to act on the gut microbial community as another mucosal exposome during SARS-CoV-2 infection. In various experimental models and human studies, notable correlations have been documented between vitamin D and the gut microbiota (128,129). Vitamin D supplementation in healthy individuals significantly increases gut microbial diversity, with an increased ratio of the phylum Bacteroidetes to Firmicutes (128). Moreover, vitamin D supplementation could remarkably enhance the abundance of health-promoting probiotic taxa, including Akkermansia, Bifidobacterium, Ruminococcaceae, Faecalibacterium, and Coprococcus, while a significant decrease in Bacteroides acidifaciens was observed in non-responders. In particular, some probiotic species, such as Lactobacillus reuteri, can metabolize vitamin D to 7-dehydrocholesterol via bile salt hydrolase, subsequently contributing to the pool of circulating 25OHD (130). Moreover, supplementation with 25OHD reportedly attenuates inflammatory responses in experimental models of inflammatory bowel disease, accompanied by gut microbial regulation (131). Mechanistically, compared with vitamin D-deficient subjects, vitamin D-sufficient animals displayed enhanced levels of gut microbe-responsive RORγt/FoxP3+ regulatory T cells in the colon. Notably, the number of anti-inflammatory regulatory T cells positively correlated with the abundance of Bacteroides and Clostridium XIVa. Overall, vitamin D status is predicted to shape the gut microbial community, which can facilitate the bioactive metabolic conversion of vitamin D and regulatory responses against inflammation during SARS-CoV-2 infection (Figure 3).

Conclusions

Gastrointestinal symptoms are reportedly associated with poor outcomes in patients with acute and post-acute COVID-19. Moreover, viral antigens persisting in the gut mucosal tissue present a risk of recurrent COVID, chronic COVID, and post-acute COVID complications. Based on the findings of a meta-analysis, gastrointestinal symptoms, such as diarrhea, nausea, vomiting, and abdominal discomfort, were notably associated with SARS-CoV-2 infection. In addition to gastrointestinal translocation from the airway in the gut-lung axis, the virus can be transmitted to water and food supply systems directly, ultimately reaching the human gastrointestinal tract via fecal-oral transmission. Despite the lack of mechanistic evidence, SARS-CoV-2 could disrupt the mucosal and epithelial barrier and reach the circulation and systemic immune system. Moreover, the prolonged presence of viral antigens and disruption of mucosal immunity may increase gut microbial and inflammatory risks, leading to pathological outcomes and post-acute COVID-19 symptoms. In addition to host immune cell regulation, SARS-CoV-2 infection may alter the gut microbial community, potentially shaping the immunological profile during infection.
Generally, patients with COVID-19 exhibit lower bacterial diversity and a higher relative abundance of opportunistic pathogens, such as Klebsiella pneumoniae, Streptococcus, and Ruminococcus gnavus, in their gut microbiota than healthy controls. Given the dysbiotic changes during infection, enhancing specific bacterial communities, such as Lactobacillus and Faecalibacterium prausnitzii, may counteract adverse inflammatory outcomes in the gut and other organs. Moreover, nutritional status, such as vitamin D deficiency, has been associated with disease severity in patients with COVID-19 via regulation of the gut microbial community and mucosal immunity. Vitamin D is predicted to improve the gut mucosal and epithelial barrier by activating its nuclear receptors during SARS-CoV-2 infection. Moreover, vitamin D status is predicted to shape the gut microbial community, which can facilitate the bioactive metabolic conversion of vitamin D and immune regulatory responses against infection-induced inflammatory storms. Herein, the collated evidence provides systemic insights into nutritional and microbiological interventions against acute or post-acute COVID-19 in the gut-lung axis.

Author contributions

YM contributed to supervision, conceptualization, methodology, formal analysis, visualization, writing, review, and editing.
Plaque-Like Sclerodermiform Localized Mucinosis Rapidly Responsive to Topical Tacrolimus

We report the successful treatment of plaque-like sclerodermiform mucinosis using topical tacrolimus ointment. We present a 70-year-old male with large, chronic erythema and hardening of the nuchal skin and shoulder area. The subjective symptoms were moderate pruritus and rather disabling stiffness. A biopsy specimen revealed typical features of lichen myxedematosus. In the subsequent clinical examination, no associated illnesses such as hypothyroidism or gammopathy were found. Since no established therapy exists for this condition, and as there was a lack of response to potent topical glucocorticosteroids, tacrolimus 0.03% ointment was used off-label twice daily. Surprisingly, this resulted in a rapid, almost complete clearance of the skin within three weeks of treatment.

Case Report

The medical history was negative for other skin diseases. The subjective symptoms were moderate pruritus and rather disabling progressive stiffness of the nuchal area as well as the shoulder area. No topical therapy had yet been applied. Co-existing medical conditions included spinal and bulbar muscular atrophy (Kennedy disease), diabetes mellitus, and a resected carcinoma of the urinary bladder. A laboratory work-up, including complete blood cell count, comprehensive screening for viral hepatitis, Borrelia serology, erythrocyte sedimentation rate, C-reactive protein, hemostaseology, and urine analysis, was normal. Increased thyroid autoantibodies were present (thyroglobulin antibodies, 469 IU/ml; thyroperoxidase antibodies, 192 IU/ml); however, peripheral thyroid hormones were normal. Paraproteins were detected neither in the serum nor in the urine by electrophoresis and immunofixation, respectively. A 6-mm punch biopsy from the left upper back was obtained for histopathologic analysis. Hematoxylin-eosin staining showed a partial loss of rete ridges in an otherwise normal epidermis. Basal keratinocytes were strongly pigmented and stained positive with Fontana-Masson melanin stain. The dermis showed a subtle superficial perivascular inflammatory lymphocytic infiltrate and prominent perifollicular and interstitial deposition of mucin without a substantial increase of interstitial fibroblasts. The mucin was highlighted with colloidal iron staining (fig. 2). The patient received treatment with topical steroids (0.1% mometasone) for 8 weeks, which yielded neither a subjective nor an objective improvement of the skin lesions. Hence, topical treatment was changed to 0.03% tacrolimus (Protopic) twice daily, resulting in a nearly complete reduction of hardening and erythema of the previously affected skin within three weeks. Due to the good clinical response to this therapy, no vehicle control was performed.

Discussion

Cutaneous mucinoses comprise different entities with diffuse or focal deposition of mucin. Mucin consists of a mixture of glycosaminoglycans (hyaluronic acid/dermatan sulfate), either unbound (e.g. hyaluronic acid) or protein-bound (e.g. proteoglycan). Most glycosaminoglycans are generated by fibroblasts and/or keratinocytes. In 1969, Montgomery and Underwood [1] first made a distinction between lichen myxedematosus (LM), scleromyxedema, and generalized myxedema by means of their different clinical patterns.
More recently, according to different etiologies, pathogenesis, and skin distribution, a classification was set up by Rongioletti and Rebora differentiating three major categories: (a) generalized mucinosis (scleromyxedema), (b) localized forms, and (c) atypical forms [2]. The localized variants comprise five subtypes: discrete papular LM, acral persistent papular mucinosis, the juvenile and adult variants of self-healing papular mucinosis, papular mucinosis of infancy, and nodular LM [3,4,2,5]. As the morphology of the case reported herein does not fit well into the localized variants as described, we would assign our observation to the third group, the atypical forms of mucinoses. More typical localized variants present with skin-colored to reddish, firm papules or nodules on the upper extremities, trunk, thighs, and sometimes on acral skin. Clinical differentials of LM include granuloma annulare, all other kinds of lichen (ruber, amyloidosus, etc.), reticulate erythematous mucinosis (REM) syndrome and, if diffuse, scleredema of Buschke. Histopathologically, the hallmarks of localized mucinosis are deposition of mucin with a varying degree of fibroblast proliferation and a whorled arrangement of collagen bundles in late stages of the disease. Perivascular lymphocytic infiltrates can be observed. The histologic differentials in cases of discrete or marginal fibroblast proliferation include myxedema and scleredema. Therefore, the diagnosis requires an analysis of the clinical picture and associated illnesses. As a rule, localized papular mucinoses are only exceptionally associated with hepatitis C, HIV/AIDS, monoclonal gammopathy, or plasmocytoma. To date, no guidelines for the treatment of localized mucinoses exist. Usually, topical steroids are used as first-line therapy. Circumscribed surgery, dermabrasion, or laser therapy (carbon dioxide laser) can also be considered. In generalized forms, application of systemic steroids, chlorambucil, cyclophosphamide, aromatic retinoids, or chloroquine has been reported [6]. Topical tacrolimus or pimecrolimus have been reported as useful in rather discrete variants [6,7,8]. Transforming growth factor-beta and tumor necrosis factor-alpha are thought to play a major role in the pathogenesis of localized mucinosis [7]. Thus, the positive local effect of topical tacrolimus is probably due to its immunosuppressive actions (blocking the calcineurin signaling pathway, inhibiting tumor necrosis factor-alpha secretion in human keratinocytes, and inhibiting collagen synthesis) [7]. With the patient presented here, we report another case of rapid improvement of an atypical plaque-like sclerodermiform mucinosis that caused considerable impairment of the patient's quality of life due to skin hardening of the nuchal area, and we thereby expand the possible applications of topical tacrolimus in dermatology.

Conflict of Interest

The authors disclose no commercial or similar relationships to products or companies mentioned in or related to the subject matter of the article submitted.
Social Determinants of Antenatal Care Service Use in Ethiopia: Changes Over a 15-Year Span

Background: Improving maternal health in Ethiopia is a major public health challenge. International studies indicate that it is possible to improve maternal health outcomes through action on the Social Determinants of Health (SDH). This study aimed to explore the SDH that influence antenatal care (ANC) utilization in Ethiopia over time. Methods: The study used data from the nationwide surveys conducted by the Ethiopian Central Statistical Agency (CSA) and ORC Macro International, USA, in 2005, 2011, and 2016. A negative binomial model with random effects at the cluster level was used to model the number of ANC visits, whereas a multilevel binary logistic regression modeled binary responses relating to whether a woman had at least 4 ANC visits or not. The model estimates were obtained with the statistical software Stata SE 15 using the restricted maximum likelihood method. Results: Although the median number of ANC visits significantly increased between 2005 and 2016, the majority of women do not obtain the four ANC visits recommended during pregnancy. The odds of having at least four ANC visits were significantly lower among women below 20 years, those living in rural areas, those with higher birth order, and Muslim women. In contrast, higher educational attainment, higher socio-economic status, exposure to mass media, and self-reported decision empowerment were significantly associated with having at least four ANC visits. Conclusion: The use of ANC visits is driven mostly by the social determinants of health rather than individual health risk. The importance of the various SDHs needs to be recognized by Ministry of Health policy and program managers as a key driving force behind the country's challenges in reaching targets in the health agenda related to maternal health, particularly the recommended number of ANC visits.

postnatal care (3,4). "The World Health Organization (WHO) recommended focused ANC (FANC) model consisting of at least four visits for low-risk pregnant women, with targeted interventions at each visit in 2002" (4). Few developing countries, including Ethiopia, have fully embraced and implemented the FANC model. In many resource-limited settings, increasing the number of ANC visits beyond four for women with uncomplicated pregnancies is not associated with improved birth outcomes (5,6). According to the most recent Ethiopian Demographic and Health Survey (EDHS) (2), 62 percent of women who gave birth in the 5 years preceding the 2016 survey had at least one antenatal care visit; however, attendance of the recommended number of visits was suboptimal. Social determinants are a major underlying cause of inequities in health. International studies suggest that it is possible to improve maternal health outcomes through action on the Social Determinants of Health (SDH) (7-10). This has, however, not been a systematic theme in the health agendas of low- and middle-income countries (11,12). In Ethiopia, the SDH are not systematically addressed in the Health Sector Transformation Plan (HSTP), although it is stated that targeting the social determinants of reproductive health could improve access to quality services for mothers and children (13).
To date, a number of studies have explored individual risk factors for antenatal care utilization in Ethiopia and have found that utilization is driven by a complex set of factors that include demographic, cultural, and socio-economic factors, such as age of women, birth order, size of household, education, ethnicity, place of residence, religious background, marital status, employment, income level, and accessibility (14)(15)(16)(17)(18)(19). Nonetheless, none of these studies have systematically reviewed the social factors to show their overall pooled effect on the interconnection between social determinants and ANC visit use at the national level. Hence, demonstrating the effect of these social factors on ANC use was warranted. The current study explores the SDH at different levels and their associations with ANC utilization in Ethiopia over a period of 15 years.

MATERIALS AND METHODS

The study used data from the three latest EDHSs, conducted by the Ethiopian Central Statistical Agency (CSA) and ORC Macro International (2,14,15). In the current study, we included women sampled from enumeration areas (EAs; clusters) in 2005; 7908 women from 548 clusters in 2011; and 7585 women from 575 clusters in 2016. The eligibility criteria were: being in the reproductive ages 15-49 years, reporting at least one birth in the 5 years preceding the actual survey, and participating in one of the three surveys from any region in the country. The number of interviewed females was 14,070 in the 2005 EDHS, 16,515 in the 2011 EDHS, and 18,500 in 2016, making a total of 49,085 respondents (2,14,15). Among all female respondents, 22,799 (46.5%) met the eligibility criteria and had complete data on the variables of interest. Data on these eligible women were pooled from the survey datasets, allowing the analysis to span the period 2001 to 2016.

Outcome Variables

The analyses in the current study were based on two ANC-related outcomes: (1) the total number of ANC visits each participant had in the index pregnancy; (2) a binary outcome based on whether a woman had had four or more visits during the course of the pregnancy or not, according to the four visits recommended at that time in the WHO guidelines for FANC (4) and by the Ethiopian Ministry of Health during this period (13).

Explanatory Variables

Important individual and community level social determinants (SD) were considered in the analyses. Individual level SD included: marital status, religion, education level, employment status for both the participant and her partner, empowerment (relating to household decision making and whether the women were involved or not: in her own health care, large household purchases, and visits to family or relatives), household wealth index (low, middle, high), mass media (radio and TV) exposure (no exposure, exposed to either a radio or TV, and exposed to both), sex of the household head, maternal age at last birth, and birth order. The following community level SD were considered: place of residence (urban or rural), and whether the region was classified as agrarian, pastoral, or urban.

Modeling Number of ANC Visits

The data available contained a significant number of zero counts due to the high number of women not attending ANC at all (71.5% in 2005, and 57.1% in 2011). We addressed these distributional challenges by fitting a negative binomial random effects (NBRE) model to our count data.
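As an illustration only, the minimal sketch below fits a negative binomial count model with survey-year dummies on simulated data; cluster-robust standard errors stand in for the cluster-level random effects that Stata's REML estimator handles natively, and all variable names are invented rather than the actual DHS variables. Exponentiated coefficients play the role of the IRRs reported below.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy stand-in for the pooled EDHS records; column names are illustrative.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "anc_visits": rng.poisson(2, n),      # count outcome
    "year_2011": rng.integers(0, 2, n),   # survey-year dummies (2005 = reference)
    "year_2016": rng.integers(0, 2, n),
    "rural": rng.integers(0, 2, n),
    "cluster": rng.integers(0, 50, n),    # enumeration-area (cluster) id
})

X = sm.add_constant(df[["year_2011", "year_2016", "rural"]])
model = sm.GLM(df["anc_visits"], X, family=sm.families.NegativeBinomial())
# Cluster-robust errors approximate the paper's cluster-level random effects.
res = model.fit(cov_type="cluster", cov_kwds={"groups": df["cluster"]})

irr = np.exp(res.params)                  # incidence rate ratios vs. 2005
print(irr)
```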
It is important to note two key study assumptions that should be borne in mind when interpreting our findings. First, given the cross-sectional nature of DHS data, some of the information used in the analysis related to the time of the surveys rather than the time of birth and pregnancy. Second, we used 2005 as the reference survey year and estimated incidence rate ratios (IRR) for 2011 and 2016. Estimates of the IRR, which represent the change in the number of ANC visits in 2011 and in 2016 relative to the number of ANC visits in 2005, were obtained from the NBRE model.

Modeling Binary Responses

Due to data clustering at the survey level, binary data relating to whether a woman had at least four ANC visits in pregnancy or not were modeled using a binary logistic multilevel regression model after adjustment for several confounders. We identified the main confounding variables from the literature as: age at last birth, order of the last birth, place of residence, and husband's education. A multiple multilevel logistic regression model was used to control for the effects of potential confounders, and from the model, adjusted odds ratios (AOR) with 95% confidence intervals were obtained. In addition, we computed an estimate of the intra-cluster correlation coefficient (ICC), which describes the amount of variability in the response variable attributable to differences between the clusters. We then used the McKelvey & Zavoina pseudo R2 to assess the fit of the model (20,21).

Modeling Strategies

Both bivariate (data not given; see Tables S1a,b) and adjusted models were fitted to the count and binary response data. Individual and cluster level SD that were significantly (P ≤ 0.05) associated with having ANC visits were included in the multiple Poisson and logistic regression models while controlling for the effect of other variables contained in the model. The model parameter estimates were obtained in the statistical software StataSE 15 using the restricted maximum likelihood method (REML). The level of significance was set at α = 0.05.

Ethical Consideration

The study was conducted in conformity with national and international ethical guidelines for biomedical research involving human subjects (22), including the Helsinki declaration. This study was reviewed and approved by the Regional Committee for Medical and Health Research Ethics (REK) and the Norwegian Center for Research Data (NSD) at the University of Oslo. Our team also requested permission to access the data from the CSA and ICF International by registering online on the website www.dhsprogram.com and submitting the study protocol (see Additional File 2). We also highlighted the objectives of the study as part of the online registration process. ORC Macro Inc. removed all information that could be used to identify the respondents; hence, confidentiality of the data was maintained.

Participants' Characteristics

Out of the 22,799 eligible women, 32.0% were from the 2005, 34.7% from the 2011, and 33.3% from the 2016 survey, with a mean age of 29.1 (±7) years. As detailed in Table 1, over the survey years more and more women were involved in at least three major household decisions (P < 0.01), and more women had at least one ANC visit (P < 0.01). The analyses (Table 2) also revealed that women with jobs had 25% more ANC visits in 2011 and 11% more ANC visits in 2016 than those unemployed. Household wealth index was significantly associated with the number of ANC visits in all three survey years.
Women from households with middle wealth indices had 39% more visits in 2005, 23% more in 2011, and 27% more in 2016 than women from low wealth indexed households. Women from high indexed households had 84% more ANC visits in 2005, 51% more in 2011, and 16% more in 2016 than women from households with a low wealth index. In all three surveys, women exposed to mass media had more ANC visits than women who were not exposed to any mass media. For instance, being exposed to both radio and TV increased the number of ANC visits by 53% in 2005, 80% in 2011, and 14% in 2016. Similarly, women's empowerment was found to be an important determinant of ANC use. Women who were empowered, stating involvement in three or more major household decisions, had 2.1 times more ANC visits in 2011 and 49% more in 2016 than women who were not empowered at all, controlling for all other variables in the model. Figure 1 shows selected social determinants and the evolution of inequity in the mean distribution of ANC visits.

Trends and Changes in the Number of ANC Visits in Ethiopia

The changes in the number of ANC visits in each category of the predictors, with the 2005 survey as the reference, were observed (not shown; see Table S2).

Changes in Completing Four or More ANC Visits Over Time

Changes in having at least four ANC visits during any pregnancy for each key social determinant over time were observed after controlling for the potential confounding effects of age at last birth, order of the last birth, place of residence, and husband's education. Between 2011 and 2016, the adjusted odds of ANC use among pregnant women roughly doubled: from 1.13 (95% CI: 0.96-1.32, p = 0.13) to 2.14 (95% CI: 1.84-2.49, p < 0.01) relative to 2005 (Table 3). Furthermore, the results for the covariates included in the multilevel logistic regression model as controls (not shown; see Table S3) confirmed that mother's age, birth order of the child, religion, place of residence, women's education, wealth index, media exposure, sex of household head, and women's empowerment were significant determinants of completing four or more ANC visits. Overall, for these Ethiopian women, the odds of having at least four ANC visits were significantly higher in 2016 than in 2005 (P < 0.01). We obtained an intra-cluster correlation coefficient (ICC) of 0.11 from the adjusted multilevel logistic regression. This means that the differences between the clusters account for 11% of the variability in the distribution of women with adequate ANC visits. Based on the McKelvey & Zavoina pseudo R2, the models provided a good fit to the data.

DISCUSSION

The level of ANC use observed here (Table 1) is somewhat lower than in studies from other sub-Saharan African countries (41% to 87%) (23)(24)(25). The findings indicated that ANC use depended on the joint effect of individual and community level determinants (Table 1). We explored, on the basis of the available evidence, some of the factors which act as social determinants of ANC use. Globally, economically disadvantaged women suffer from maternal health inequity facilitated by several identifiable and modifiable social determinants, including household wealth (26)(27)(28). We demonstrate in this study that inequity is still present, although the increase in utilization included the most vulnerable women, with low economic status or no formal education. Fortunately, the magnitude of the disparity detected in our study was smaller than in earlier studies in Ethiopia and other developing countries (29)(30)(31)(32)(33)(34)(35).
Likewise, it is noteworthy that having a partner with a high educational level was one of the social determinants of ANC attendance in Ethiopia. The role of men in ANC use in a patriarchal society like Ethiopia, where women might seek a husband's permission or approval before taking decisions related to care, warrants further study. The importance of men's education on maternal health issues, as well as on the use of ANC, may play a critical role when shaping family priorities and health-seeking behaviors. The current study demonstrates that there are significant differences in the use of ANC services between women of different socio-demographic, cultural, and geographic backgrounds. Women from rural communities had fewer ANC visits, with significant variations in the number of ANC visits across administrative regions. This should guide regional and local initiatives aimed at increasing utilization of ANC and other preventive services. In particular, pregnant women from pastoralist regions might require special support, as scarce health resources might be compounded by a lower literacy rate in this population (36). Muslim women had fewer ANC visits than Christian women across all survey years. This warrants exploration and more in-depth qualitative studies. Our findings also indicate that women not empowered in household decision-making or not exposed to any form of mass media have lower ANC utilization. This finding is consistent with other studies (37)(38)(39)(40). For the last two decades, the political environment in Ethiopia has enabled more options for accessing public health information (41). Findings from our logistic regression model also suggest that the odds of having at least four ANC visits in pregnancy were significantly higher among women in higher age groups and those with higher education status. In general, Ethiopian health policy initiatives such as deploying Health Extension Workers (HEWs) and Women Developmental Armies (WDA), which identify pregnant women in rural communities earlier, might have contributed to increased awareness through health promotion, aimed at improved ANC utilization among poor women living in rural areas (42).

Strengths and Weaknesses of the Study

The findings from this study are based on three waves of survey data that were collected by two reputable institutions: the Ethiopian Central Statistical Agency and ORC Macro International, USA. The sample sizes of the three surveys were large, providing high statistical power. The use of count data modeling provides a methodological advantage by taking the discrete nature of the observations (counts) into account, which made the model estimates reliable. Although these data were collected at three different time points, the outcome measures and the predictors were taken from different women in each survey. This makes it impossible to relate the changes in the utilization of ANC services to the individual level, but offers good estimates at the community or village level. Also, the cross-sectional nature of the data does not allow causal inferences to be drawn, and DHS data are associated with recall bias, as data were collected retrospectively on events that took place up to 5 years before the surveys.

CONCLUSION

Maternal health status cannot be improved without fundamental changes in education, household wealth status, employment, media exposure, and empowerment.
The importance of the various SDH needs to be recognized by Ministry of Health policy and program managers as a key driving force behind the country's challenges with reaching targets in the health agenda related to maternal health, particularly the recommended number of ANC visits. To ensure adequate use of antenatal services in Ethiopia, upstream approaches that address social issues need to be considered. Such efforts could help improve health equity for maternal health outcomes in the country. More research should investigate whether the SDH identified in this study impact other maternal health indicators.

ETHICS STATEMENT

The EDHS data used in this study have ethical clearance from at least one of the following institutes: the Ethiopian Central Statistical Authority (CSA); the Federal Ministry of Health; the National Research Ethics Review Committee (NRERC); and the Institutional Review Board of ICF International through the DHS Program. Consent was obtained from each study participant before conducting an interview. We obtained the data by submitting the study protocol through registering online on the website www.dhsprogram.com.

AUTHOR CONTRIBUTIONS

SO, VT, and JM conceived the research. IM and SO designed the study, analyzed the data, and developed the first draft. JM, JS, IM, and VT critically reviewed and edited the manuscript for intellectual content. All authors revised the final document, and read and approved the final manuscript.

FUNDING

This publication was supported by NORAD (Norwegian Agency for Development Cooperation) under the NORHED Program, Agreement no. ETH-13/0024. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Infrared image fusion for quality enhancement

This paper presents an approach for infrared image enhancement through fusion. Firstly, the infrared image is enhanced through histogram matching to improve its dynamic range. A reference image with a good dynamic range, such as Cameraman, Lena, or Mandrill, is used in the histogram matching process. After that, the enhanced image is fused with the original image through curvelet fusion to inject more details into the infrared image. The proposed approach achieves high quality of infrared image enhancement compared with different techniques.

Introduction

According to the simplest definition, image processing is the act of utilizing a digital computer to remove noise and other irregularities from digital images. The noise or irregularity may creep into the image either during its formation or during transformation [1]. Image enhancement is the process of improving the visibility of an image's higher- and lower-frequency details [2]. The goal is to enhance the visual details of the image or to provide a better transform representation for use in image processing applications like analysis, detection, segmentation, and recognition. It also aids in the detection of background information, which is required to comprehend object behavior through human vision and perception. The low contrast of images prevents viewers from readily distinguishing objects against a dark background. If the colors of the objects and the background are the same, the majority of color-based image processing techniques will not work. The study of image enhancement methods divides them into two major categories: transform-domain methods and spatial-domain methods. Image fusion is the process of gathering two or more images of the same scene to get a single image with more details. This image should have good content and be easier to interpret. Obtaining images through the fusion of images captured with different instruments is an important process in different applications, such as infrared imaging and the diagnosis of diseases. Fusion algorithms enhance the details of the resulting output images and provide immunity to the environmental influences affecting the imaging process [3]. Image fusion can be implemented at three different levels: pixel, feature, and decision. Pixel-level fusion is a low level of fusion that is used to analyze and merge data from various sources prior to estimating and recognizing the original information. The feature level is a middle level of fusion that selects important image features such as shape, length, edges, segments, and direction. The decision level is a high level of fusion that indicates the actual target. In addition, fusion methods are divided into two categories: spatial-domain fusion and transform-domain fusion. Spatial-domain fusion methods include averaging, the Brovey method, and Principal Component Analysis (PCA). Unfortunately, spatial-domain methods may cause spatial distortion in the fused images. Hence, transform-domain methods can be used to solve this problem. For fusion, Discrete Wavelet Transform (DWT) image fusion can be used, as shown in Fig. 1. Moreover, other transforms such as the curvelet transform can also be used. The proposed approach for infrared image enhancement through histogram matching and image fusion is illustrated in Fig. 2.

Related work

Image fusion is the process that is used to collect multiple images to obtain a single image with more details.
The aim of image fusion in infrared imaging is to obtain new images that are more suitable for visual interpretation. Image fusion aims to improve image quality by decreasing redundancy, thereby increasing the applicability of infrared images. The importance of image fusion lies in the fact that each observation image contains supplementary information. When this supplementary information is merged with that of another observation, an image with more details is obtained. Different techniques can be used for image fusion, such as the DWT, the Dual-Tree Complex Wavelet Transform (DT-CWT), and fuzzy processing. The wavelet transform is a multi-resolution tool that can be used for image decomposition. It allows decomposition of the image into high-frequency and low-frequency components by different filtering operations at multiple scales [4][5][6]. The DWT is the transform that decomposes the signal into a mutually-orthogonal set of wavelet scales. One of the drawbacks of the wavelet decomposition of images is its limited ability to deal with curved shapes or lines. Curved lines need some sort of piecewise approximation, which is possible with the curvelet transform as shown in Fig. 3. The DT-CWT is a fusion tool that can be implemented in more than one way. One of them is decomposing the source images into coefficients of high frequency and low frequency. After that, the high-frequency coefficients are fused by the maximum-choice fusion rule, and the low-frequency coefficients, which include some important information, are fused using the weighted-average fusion rule. Complex basis functions are used in this implementation to allow efficient utilization of phase information. To obtain the fused image, the Inverse Dual-Tree Complex Wavelet Transform (IDT-CWT) is used [7]. The advantages of this tool are good shift invariance, selectivity, perfect reconstruction, and simple computation [8]. Fuzzy image fusion is based on the rules with which a human makes decisions. Fuzzy machines work in the same way as a human, on the condition that the decision and how to choose this decision are replaced by fuzzy sets, and the rules are replaced by fuzzy rules [9]. Aghamaleki et al. [10] proposed a technique for image fusion using DT-DWT and an optimization process. Nagaraja et al. [11] introduced a method for medical image fusion using a hybrid meta-heuristic approach. Firstly, the weighted fast discrete curvelet transform is applied to obtain the high-frequency and low-frequency sub-bands of the image. The high-frequency sub-bands of the two images are integrated by an optimized type-II fuzzy technique. An averaging approach is used to perform the fusion of the low-frequency sub-bands. Finally, the inverse transform is performed to produce the final fused image. Desale et al. [12] presented techniques for image fusion based on PCA, DCT, and DWT; DWT-based techniques achieve better image fusion results than the other ones. Sruthy et al. [13] proposed an image fusion method based on DT-CWT. This method was implemented on medical images for cancer diagnosis.

The proposed approach for image fusion and enhancement

The proposed approach consists of two stages: image enhancement and image fusion. Firstly, the infrared image is enhanced by histogram matching to a visual image with good characteristics, represented by a wide histogram, such as Cameraman, Lena, or Mandrill. After that, the enhanced image is fused with the original image. A code sketch of the matching step is given below, ahead of the detailed steps.
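As a minimal sketch (ours, not from the paper; the toy arrays stand in for real images), the mean/standard-deviation matching that underlies the enhancement stage can be written as follows. Note that the steps listed next match only the first two moments of the intensity distribution, which already widens the dynamic range in the way the enhancement stage requires.

```python
import numpy as np

def match_statistics(infrared, visual):
    """First/second-order statistics matching of an infrared image to a
    visual reference, following the mean/std steps of the enhancement stage."""
    m1, s1 = visual.mean(), visual.std()       # reference mean and std
    m2, s2 = infrared.mean(), infrared.std()   # infrared mean and std
    c_r = s1 / s2                              # correction factor
    enhanced = (infrared - m2) * c_r + m1      # mean correction + rescaling
    return np.clip(enhanced, 0, 255)

# Toy usage with random arrays standing in for a reference and an IR frame.
rng = np.random.default_rng(0)
ir = rng.uniform(60, 120, size=(256, 256))     # narrow dynamic range
ref = rng.uniform(0, 255, size=(256, 256))     # wide dynamic range
out = match_statistics(ir, ref)
print(out.min(), out.max())
```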
Different types of image fusion are considered, including curvelet, fuzzy, and DT-CWT fusion. Figure 2 shows the block diagram of the proposed approach.

Histogram matching

The proposed approach is based on histogram matching for image enhancement. It includes the following steps:

1. A visual image with good histogram characteristics is selected.
2. The mean value of the visual image $v(x,y)$ of size $M \times N$ is estimated as $m_1 = \frac{1}{MN}\sum_{x}\sum_{y} v(x,y)$.
3. The standard deviation of the visual image is estimated as $\sigma_1 = \sqrt{\frac{1}{MN}\sum_{x}\sum_{y} \left(v(x,y)-m_1\right)^2}$.
4. The mean $m_2$ of the infrared image $f(x,y)$ is estimated in the same way.
5. The standard deviation $\sigma_2$ of the infrared image is estimated in the same way.
6. A correction factor is estimated as $c_r = \sigma_1/\sigma_2$.
7. Mean correction is performed by subtracting $m_2$ from the infrared image.
8. The enhanced image is obtained with the formula $g(x,y) = \left(f(x,y) - m_2\right) c_r + m_1$.

Curvelet fusion

The curvelet transform is more suitable for the representation of curved objects. It is fast in implementation and more efficient in image representation. It allows piecewise representation of curved lines [14], as shown in Fig. 3. Image fusion using the curvelet transform proceeds as follows:

• Registration of the two input images is performed.
• All steps of the curvelet transform are applied on both images to get the different tiles of sub-bands.
• The maximum-frequency fusion rule is applied on the tiles to be fused from both images.
• To get the fused image, the inverse curvelet transform steps are applied.

Simulation results

The proposed approach is applied on infrared images. It consists of two stages: image enhancement and image fusion. The image enhancement is performed by histogram matching with (a) Cameraman, (b) Mandrill, and (c) Lena as reference images. Figure 4 illustrates the resulting enhanced images. Figure 5 shows the resulting fused images with DT-CWT. Figure 6 shows the resulting fused images with the fuzzy technique. Figure 7 shows the resulting fused images with curvelet fusion. Tables 1, 2, and 3 present the evaluation metrics of infrared image enhancement obtained with all fusion techniques applied in the paper. The results reveal that the curvelet fusion technique achieves the best performance. The quality of the resulting fused images with curvelet fusion is better than that with other techniques. From the visual perspective, the resulting images with curvelet fusion are the clearest.

Conclusion

A proposed approach for quality enhancement has been applied to infrared images. It consists of two stages: image enhancement and image fusion of the enhanced and original images. The image enhancement is performed by histogram matching. We compared the results of the proposed approach with those of different fusion techniques. The performance of the proposed approach is best with the curvelet fusion technique.

Funding

Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB).
The principle of least effort and Zipf distribution

"Each individual will adopt a course of action that will involve the expenditure of the probable least average of his work." This statement was named "the principle of least effort". The principle of least effort is often known as a "deterministic description of human behavior". In this paper, we present a brief introduction to this principle. Applications of the principle in different fields are also summarized. As the principle of least effort was proposed by Zipf, it is also called Zipf's law. We then discuss the correlation between three widely considered distributions: the Zipf distribution, the Pareto distribution, and the probability distribution. On the basis of empirical investigations, it is often stated that most social behaviors are governed by the pure Zipf's law, which corresponds to a Zipf distribution of exponent -1. We briefly present the discovery of Zipf's law in different social behaviors. Some empirical studies are also given as examples, verifying that, in most countries, the distribution of city size by population follows Zipf's law, and that the exponent of the Zipf distribution of individual income is about -0.5, as Zipf predicted in theory.

Introduction

The principle of least effort was first discovered in 1894 by a French philosopher, Guillaume Ferrero. He discussed this principle in his article entitled "L'inertie mentale et la loi du moindre effort" [1]. However, it was only in 1949 that the principle was formally proposed by George Kingsley Zipf, an American professor of philology at Harvard University, in his book "Human Behavior and the Principle of Least Effort" [2]. Zipf theorized that the distribution of word use was due to a tendency to communicate efficiently with least effort. Hence, the principle of least effort is also known as Zipf's law. According to the principle of least effort, it is human nature to want the greatest outcome for the least amount of work. Zipf showed that useful behaviors were performed frequently, and frequent behaviors became quicker and easier to perform over time. This suggests that people often choose their behavior along the direction of minimizing effort. Basically, Zipf's law describes people's social behavior in space. Zipf studied the least effort of individual behavior and of collective behavior separately. Regarding individual behavior, he statistically analyzed, for example, speech, words and their meaning, and the verbalizations of children. For collective behavior, he mainly focused on the economy of human social behavior, such as the economy of geography, the distribution of economic power and social status, and the distribution of prestige symbols and vogues. In both of these parts, Zipf began with the empirical aspect, presenting a large number of observations from a truly wide range of living phenomena. He then gave the underlying theoretical analysis, attempting to rationalize different kinds of empirical laws in terms of a single uniform principle. Therefore, the principle of least effort is a theory that can be well understood from both empirical and theoretical views. Actually, the topic of effort first aroused the interest of some experimentalists. In 1930, J. A. Gengerelli published the results of a series of experiments performed with blinded and normal white rats. These rats were used to help determine the nature of the path that animals would eventually select from an indefinite number of possible paths leading to food [3].
Experimental results suggested that in practically all cases the path finally chosen by the animals (both normal and blinded ones) was the path of "least effort," namely, the path of minimal distance. Two years later, L. S. Tsai stated that "Among several alternatives of behavior leading to equivalent satisfaction of some potent organic need, the animal, within the limits of its discriminative ability, tends finally to select that which involves the least expenditure of energy" [4]. And also, in 1937, R. H. Waters declared that "Thus, Theseus, after slaying the minotaur, found his way out of the labyrinth and to his loved one by following the string which he had carried with him into the labyrinth. Perhaps, this was not the most direct route in terms of distance, time, or effort, but it was the only sure way he had of escaping. Likewise our rats found that by sticking to the outside pathways they more readily achieved the goal" [5]. Since the principle of least effort was proposed by Zipf, many scholars have been attracted to study it and to apply it to different fields. In the next section, we discuss applications of the principle of least effort in different fields.

The field of information retrieval

The principle of least effort is exceptionally important in designing libraries and in research in the context of the modern library. A user's desire to find information quickly and easily is often the primary consideration in a library design. In the library literature, "least effort" is restated as Mooers' law, an empirical observation of behavior provided by the American computer scientist Calvin Mooers in 1959 [6]. More commonly, Mooers' law is considered to be a derivation of the principle of least effort. Mooers' law states that an information retrieval system will tend not to be used whenever it is more painful and troublesome for a user to have information than for him not to have it. In 1987, T. Mann classified the principle of least effort as one of several principles controlling information seeking behavior [7]. He emphasized seven different research methods helping individuals get further into a subject more quickly, and with less wasted effort. Furthermore, E. G. Bierbaum also declared that "least effort" was one scholar's suggestion for the uniform principle needed in research and practice in library and information science [8]. She stated: "No other principle underlies as much of library and information science. Least effort explains the one-look-up reader, staff resistance to automation, the reliance of the scientist in colleagues rather than collections, and the rapid acceptance of CD-ROM compared to microfilm". T. E. Chrzastowski, a professor of library administration, discussed in 1995 whether workstations, aided by the principle of least effort, had changed the nature of how research was performed in academic libraries [9]. This discussion was based on an investigation of library workstation popularity and the principle of least effort. With empirical data, she first analyzed the impact of IBIS databases on the UIUC Chemistry Library and then the journal use in the UIUC Chemistry Library. The empirical results supported the principle of least effort and also showed that the "least effort" method was often considered by many patrons a suitable model for academic library research. In 2004, the principle of least effort was further explored by Z. Liu and Z. Yang [10].
They administered a self-completed questionnaire to study the principal individual and environmental factors influencing a student's decision process when selecting and using information sources. The survey results show that the reasons given by respondents for selecting and using primary information sources reflect a strong preference for fast and easy information retrieval. This suggests that the principle of least effort also governs the respondents' selection and use of information sources.

The field of human behavior

Generally speaking, the interpretation of palaeoenvironmental changes is a key mission of researchers in archaeology, botany, geography, and many other relevant disciplines. The principle of least effort also has its specific scope of application in this field. Employing the principle, the scholars A. Scholtz and M. L. Tusenius separately provided functional interpretations of charcoal data sets in 1986 [11,12]. In 1992, C. M. Shackleton and F. Prins further discussed the applicability of the principle of least effort in explaining palaeoclimatic data [13]. They proposed a conceptual model to determine whether the principle of least effort was applicable to a given situation. In the conceptual model, four generalized areas used for the inhabitants' collection of fuelwood were described, occurring in chronological order. Area one concerns the situation of plenty of fuelwood, both dead and live, with a considerable diversity of species available. Area two considers the situation of a medium abundance of dry wood, declining selectivity, and maximum effort. Areas three and four take into account the conditions of low wood abundance, little selectivity, minimal effort, and demand for wood far exceeding supply. Experimental results show that the principle of least effort is only applicable to the situation of area three. This conceptual model can not only help identify when the principle of least effort is appropriate for interpreting a data set, but can also be a valuable aid for understanding past human behavior. In 2002, R. F. Cancho and R. V. Sole illustrated a hypothesis of Zipf on the principle of least effort [14]. They aimed to provide new theoretical insights into the absence of intermediate stages between animal communication and language. Beginning with this idea, Cancho and Sole established a simple form of language game employing a mathematical model that involves a set of signals and objects. With the game, they studied the problem of the compromise between speaker and hearer needs. The results strongly indicate that Zipf's law is a hallmark of symbolic reference and not a meaningless feature.

The field of animal behavior

In animal ethology, an animal often wants to achieve the most energy at the lowest cost while foraging, so as to maximize its fitness. The optimal foraging theory is a model that helps predict the best strategy with which an animal can achieve this goal. This theory is well known as the most essential theory for predicting how an animal behaves while collecting food. So, the optimal foraging theory can be considered a derivative of the principle of least effort. The optimal foraging theory assumes that the most economically advantageous foraging pattern, the one achieving an optimal allocation of time and energy expenditure, will be selected by a species through natural selection [15]. In 1984, A. Kacelnik found that starlings could maximize net energy gain per unit time [16]. J. R. Krebs and N. B.
Davies also stated in 1989 that, with maximal energy efficiency, bees are able to avoid expending too much energy per trip and to live long enough to maximize their lifetime productivity for their hive [17]. However, a long-standing assumption regarding livestock trails in grazing holds that livestock establish pathways of least resistance between frequented portions of their pastures [18,19]. This assumption was proposed by considering only one component of MacArthur and Pianka's optimal foraging theory, stating that animals are expected to minimize energy [20]. To test it, researchers mapped cattle trails in three 800+ ha pastures containing global positioning units. They then used GIS to quantify characteristics of both trails and the landscape, and also to plot least-effort pathways connecting water sources and distant points on selected trails in each pasture. In this way, the assumption that "cattle develop least-effort routes of travel in rugged terrain" was tested by comparing the characteristics of cattle trails and least-effort pathways.

The relationship between three kinds of distribution

In the field of empirical investigation, three widely considered distributions are the probability distribution, the Pareto distribution, and the Zipf distribution. Many man-made and naturally occurring phenomena, such as city sizes, incomes, word frequencies, and earthquake magnitudes, are distributed according to a power law [21]. This power-law behavior is often presented through the probability distribution, which describes the probability of occurrence of an event in the whole system. The Zipf distribution focuses on the correlation between the frequency of occurrence of an event and the rank of the underlying event. The Pareto distribution is a cumulative distribution, illustrating, for example, the probability that a person has an income no less than a given number. Though these three distributions are established in different ways, they refer to the same thing. In this section, we discuss the correlation between them. Specifically, we focus on the derivation of both the Pareto and probability distributions from a Zipf one. We take the distribution of individual income as an example. Zipf's law states that the size of the income of an individual is inversely proportional to the rank of this income in decreasing order. Mathematically, Zipf's law is formulated as

$$I_r = C_1 \, r^{-1},$$

where $I_r$ is the income of rank $r$ in decreasing order, and $C_1$ the size of the income with rank 1. For convenience, we label the exponent in the Zipf distribution by a general variable $\alpha$, that is,

$$I_r = C_1 \, r^{-\alpha}.$$

The $r$th income has size $C_1 r^{-\alpha}$, meaning that there are $r$ incomes of size no less than $C_1 r^{-\alpha}$. With this description, the probability that an income is no less than $C_1 r^{-\alpha}$ is proportional to $r$:

$$P(I \ge C_1 r^{-\alpha}) \propto r.$$

Setting $x = C_1 r^{-\alpha}$, we directly get the Pareto cumulative distribution,

$$P(I \ge x) \propto x^{-1/\alpha}.$$

Taking the first derivative of the above equation with respect to $x$, we then have the probability distribution:

$$p(x) \propto x^{-(1/\alpha + 1)}.$$

According to the above discussion, a Zipf frequency-rank distribution of exponent $\alpha$ corresponds to a Pareto cumulative distribution of exponent $1/\alpha$ and a power-law probability distribution of exponent $(1/\alpha + 1)$. X. Gabaix and Y. M. Ioannides illustrated in 2004, by Monte Carlo simulations, that an application of Zipf's law is considered successful when the exponent of the Pareto distribution is between 0.8 and 1.2 [22].
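The exponent correspondence just derived is easy to check numerically. The following sketch (ours, not from Ref. [22]) draws samples from a Pareto distribution with tail exponent $1/\alpha$ and recovers both the Zipf rank-size slope and the Pareto CCDF slope by log-log regression:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.5                      # Zipf rank-size exponent (e.g., individual income)
pareto_exp = 1.0 / alpha         # implied Pareto (CCDF) exponent

# Draw incomes from a Pareto law with CCDF P(X >= x) = x**(-pareto_exp),
# via inverse-transform sampling: x = u**(-1/pareto_exp) for u ~ Uniform(0, 1).
u = rng.uniform(size=100_000)
incomes = u ** (-1.0 / pareto_exp)

# Rank-size (Zipf) plot: slope of log(size) vs log(rank) should be about -alpha.
sizes = np.sort(incomes)[::-1][:1000]          # top 1000 to stay in the tail
ranks = np.arange(1, sizes.size + 1)
zipf_slope = np.polyfit(np.log(ranks), np.log(sizes), 1)[0]

# CCDF (Pareto) plot: slope of log P(X >= x) vs log(x) should be about -1/alpha.
ccdf = ranks / incomes.size                    # empirical CCDF at the sorted sizes
pareto_slope = np.polyfit(np.log(sizes), np.log(ccdf), 1)[0]

print(f"Zipf slope   ~ {zipf_slope:.3f} (expected {-alpha})")
print(f"Pareto slope ~ {pareto_slope:.3f} (expected {-pareto_exp})")
```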
Figure 1 presents a simple empirical investigation of both the Zipf and Pareto distributions of American city size by population in 2013. It shows that the exponent of the Pareto distribution is about -1.366, quite close to the reciprocal of the exponent of the Zipf distribution, -0.823. This statistical observation provides direct empirical evidence for the correlation between the exponents of the Zipf and Pareto distributions.

Alternative expressions of Zipf's law

Zipf's law can be generalized by an approximate relationship between rank and frequency,

$$f = C \, r^{-\alpha}, \qquad (9)$$

where $r$ is the rank of a word-type in decreasing order of frequency, $f$ the frequency of occurrence of the corresponding word in a given text, and $C$ a constant. $C$, depending upon the underlying text, is often about one-tenth of the size of the text (the total number of running words). When $\alpha = 1$, Eq. (9) is well known as the pure form of Zipf's law:

$$f = C/r.$$

Pure Zipf's law states that the size of the $r$th largest occurrence of an event is inversely proportional to its rank. In 1954, B. Mandelbrot proposed a further refinement of Zipf's law,

$$f = C \, (r + \rho)^{-B},$$

in which $r$ is the rank of a word, $f$ the frequency, and $C$, $\rho$ and $B$ constants dependent upon the underlying text [23]. This expression forms the basis of statistical LNRE (large number of rare events) models and provides a better fit for low-rank but high-frequency words. H. P. Edmundson developed a new 3-parameter expression in 1972 for the relationship between frequency and rank, with constants a, b and c [24]. Besides the distribution of words by frequency in a given text, the behavior of Zipf's law is also exhibited in many other settings: the distribution of city size by population, the distribution of individual income, the distribution of scientists by the number of published papers, etc.

Zipf's law in different aspects

3.3.1. The distribution of city size

The distribution of city size by population has been a longstanding topic of interest since the last century. In 1913, F. Auerbach, a German physicist, launched the initial interest in the distribution of city size [25]. With empirical data, he found a universal relationship connecting a city's population and its rank in the United States and five European countries. This relationship can be denoted by

$$P_i \cdot r_i = A,$$

where $A$ is a constant, $P_i$ the average population of cities in size-class $i$, and $r_i$ the rank of class $i$ in order of decreasing size. A. J. Lotka, a US scholar, found empirically in 1925 a better fitting function for the first 100 largest cities in the United States in 1920 [26]. The function is expressed as $P_i \cdot r_i^{0.93} = 5{,}000{,}000$. In 1940, Zipf analyzed the rank-frequency distribution of the first 100 largest Metropolitan Districts in the United States, according to the Sixteenth Census. He discovered that the slope of the underlying distribution was -0.98 (see Fig. 9-2 in Ref. [2]). Zipf's law in the distribution of city size by population suggests that the city with the largest population in any country is generally twice as large as the next-biggest, three times as large as the third-biggest, and so on. Remarkably, Zipf's law in the city size distribution has held true for most countries in the world. Furthermore, K. T. Rosen and M. Resnick classically studied in 1980 the rank-size distribution of cities in 44 countries encompassing both developing and developed nations, using 1970 census data [27].
With their results, the first 50 largest cities in most countries can be described by a Pareto distribution of the form

$$R = A \, S^{-a},$$

where $R$ is the number of cities with population $S$ or more, $A$ a constant, $S$ the population of a city, and $a$ the Pareto exponent. For the 44 countries under their investigation, the exponent $a$ ranges from 0.809 to 1.963 with mean value 1.136, and for 32 of them, the Pareto exponent is greater than one. This discovery suggests that the population in most countries is more evenly distributed than would be predicted by the pure Zipf's law, which corresponds to a Pareto distribution of exponent 1. Similar results were provided by K. T. Soo in 2005, who assessed the empirical validity of Zipf's law in the distribution of city size using new data for 73 countries and two estimation methods [28]. In 2007, he also performed a test of Zipf's law in the rank-size distribution of cities based on five Malaysian population censuses (1957, 1970, 1980, 1991 and 2000) [29]. Meanwhile, the factors that possibly influence the growth of a city in Malaysia were also explored. Soo found that Zipf's law held for the size-rank distribution of cities in Malaysia in 1957, but since then, cities have been more unequal in size than would be predicted by Zipf's law. Furthermore, he also stated that city growth was negatively related to city size. This statement contradicts Gibrat's law, a rule defined by R. Gibrat, stating that the growth rate of a city is independent of city size [30,31]. In 2012, M. Cristelli et al. found that Zipf's law held approximately for the distribution of city size in each European country (France, Italy, Germany, etc.), while for the aggregated data of the European Union, Zipf's law failed completely. They declared: "In fact, historically, the geographic level for Europe, at which an integrated evolution is observed, is the national state, while in the US, the whole confederation, not each independent state, has collectively and organically evolved towards a distribution of cities that follows Zipf's law. From this perspective, the US is an organic, integrated economic federation, while the EU has not yet become so, and shows little convergence to such an economic unit" [32]. This would seem to support the idea that Zipf's law is a response to economic conditions, since it works only when one compares cities that are economically connected within a country. With theoretical analysis and empirical studies, Y. Chen found that Zipf's law was closely related to the hierarchical scaling law [33]. Beginning with a general form of Zipf's law of exponent q, the author first defined a self-similar hierarchy of cities based on a q-sequence. He then deduced theoretically the hierarchical scaling law of cities, and observed that the exponent of the rank-size scaling law was the reciprocal of that of the size-number scaling law. Empirical data from both America and China showed the existence of Zipf's law in the rank-size distribution of cities with exponents -0.738 and -0.889, respectively. The exponent of the number-size distribution was -1.364 for America and -1.193 for China. In 2013, S. Li and D. Sui studied the rank-size distribution of China's urban system based on empirical data spanning 24 years, from 1984 to 2008 [34]. They showed that the upper tails of the rank-size distributions of Chinese cities followed a power-law function of slope slightly less than -1.
This characteristic suggested that Chinese cities are more evenly distributed than predicted by Zipf's law. A similar trend was also found in many other countries [35]. J. Luckstead and S. Devadoss provided in 2014 a comparison and examination of the size distributions of Chinese and Indian cities from 1950 to 2010, using log-normal, Pareto, and general Pareto distributions. Their results show that the large numbers of cities in China and India display similar trends: the rank-size distribution of cities is log-normal in the early periods but power-law in 2010. Furthermore, in both 2000 and 2010, the distribution of city size in India follows the pure Zipf's law, unlike the situation in China [36]. An empirical investigation regarding the distribution of city size is also presented here; see Fig. 2(a-e). We mainly focus on the first 295 largest cities in the United States in both 2010 and 2013, major cities in Germany with more than 100,000 inhabitants in 2010, major cities in France with more than 75,000 inhabitants in 1999, major cities in China with more than 155,540 inhabitants in 2010, and major cities in Japan with more than 202,283 inhabitants in 2010. The results reveal that the Zipf distributions of the size (measured by population) of the cities under investigation all display clear power-law behavior; further results from other countries can be found in Ref. [2].

The distribution of firm size

As early as 1949, Zipf illustrated the rank-frequency (the number of wage earners) distribution of manufacturers in the United States. With data collected in 1939 and manufacturers ranked in order of their decreasing number of wage earners, he showed that the slope of the underlying distribution was -2/3 (Fig. 9-8 in Ref. [2]). Zipf also studied the rank-frequency distribution of corporation assets in the United States, based on data from 1931 to 1936. The results exhibit that corporation assets act as a power-law function of a corporation's rank with exponent close to -1 [2]. It has often been observed that the upper tail of the size distribution of firms resembles a Pareto distribution, but it long remained unclear how to explain this kind of distribution by economic theory. In 1958, H. A. Simon and C. P. Bonini developed a stochastic model of the growth process of firms on the basis of two assumptions [37]. Simulation results provided by the model were quite consistent with empirical data. Under their assumptions, the Yule distribution is the steady-state distribution of this process. Regarding the distribution of firm size, the Yule distribution in the upper tail can be approximated by the Pareto distribution $f(s) \propto s^{-(\rho+1)}$ as $s \to \infty$. Simon and Bonini showed that the parameter $\rho$ here is given by

$$\rho = \frac{G}{G - g},$$

in which $G$ is the net growth, during some specified period, of the assets of all firms in an industry, and $g$ the part of net growth attributable to new firms (firms that have reached the minimum size during the period). In 1995, with data on American firms from Compustat, M. H. R. Stanley et al. demonstrated using a Zipf plot that the upper tail of the distribution of firm size was too thin relative to the log-normal, rather than too fat [38]. They reported that the distribution of American firm size could be approximated by a log-normal distribution. This discovery, however, could not establish a general property of firms, as the empirical data they used were unrepresentative of the overall population of American firms. Regarding this limitation, R. L.
Axtell studied in 2001 the distribution of American firm size with the entire population considered, based on data combining Census/SBA and Compustat [39]. With "non-employee" firms (of size 0) neglected, he found that the distribution of firm size by the number of employees followed Zipf's law with slope -1.008; the slope would be -0.963 if firms of size 0 were included. Based on a large number of empirical results, researchers have also stated that "the Zipf distribution of firm size is robust across varying definition of 'size', and so too it is quantitatively invariant over time" [40]. In view of the fact that most research focuses only on the economies of developed countries, J. Zhang et al. analyzed in 2009 the data of the top 500 Chinese firms from 2002 to 2007 [41]. China is commonly regarded as the biggest developing country in the world, and its rapid economic growth attracts the interest of researchers from different fields. With an empirical investigation, Zhang et al. revealed that the revenues and ranks of Chinese firms obeyed the pure Zipf's law exactly, with the exponent being -1 for each year under consideration. Furthermore, they offered an explanation of this characteristic using a simple economic model, namely the AK model. The model is usually applied to describe how the revenue of a firm depends upon its capital and technology level, neglecting the impact of human resources [42]. The AK model, however, can satisfactorily represent the formation process of firm size only in China. This is because human resources are more abundant and much cheaper for Chinese firms, compared to other aspects of consumption [43].

The distribution of individual income

The distribution of individual income has always been a central concern in economic theory and policy. We discussed above the rank-frequency distribution of firm size, which is analogous to the distribution of group income. Zipf indicated theoretically that the slope of the rank-frequency distribution of firm size was -1, in good agreement with empirical results. However, for the rank-frequency distribution of individual income, the slope should equal -0.5, except in the case of the settlement of liability claims against automobile insurance companies, whose slope would often rise to 1. This value was predicted theoretically by Zipf in 1949 [44]. Based on samples of individual income data from seven different countries, including the United Kingdom, France, and Finland, he also illustrated some empirical results on the rank-frequency distribution of individual income. All these results exhibit a very good verification of the theoretical prediction of slope -0.5, independent of the country and the year that the data samples come from. Differently, however, H. T. Davis claimed as early as 1941 that the slope of the Pareto distribution of individual income was -1.5, according to a very large amount of empirical data [45]. We discussed before the reciprocal relationship between the exponent of the Pareto cumulative distribution and that of the Zipf rank-frequency distribution. Based on Davis's results, the slope of the rank-frequency distribution of individual income in double logarithmic coordinates would be -2/3. Davis also argued that the slope tends to fall in good times and to increase in bad times. A simple empirical investigation is also given as an example here, regarding the Pareto distribution of individual income in the United States for 2010. The underlying result is presented in Fig.
2(f), indicating that the exponent of the Pareto distribution is close to -2. This suggests that the slope of the Zipf rank-frequency distribution in double logarithmic coordinates is about -0.5, the same as Zipf predicted in theory.

Conclusions

We first present a brief introduction to the principle of least effort. Applications of this principle in the fields of information retrieval, human behavior, and animal behavior are also summarized. Furthermore, we discuss the correlation between three widely used distributions: the Zipf distribution, the Pareto distribution, and the probability distribution. The results show that both the Pareto and probability distributions can be derived from a Zipf distribution. A Zipf distribution of exponent $\alpha$ corresponds to a Pareto distribution of exponent $1/\alpha$, and a probability distribution of exponent $(1 + 1/\alpha)$. The Zipf distribution considers the correlation between the frequency of occurrence of an event and the rank of the underlying event. On the basis of empirical investigations, it is widely stated that the Zipf distribution acts as a power-law decay function in most social behaviors, and the corresponding exponent is always close to -1, namely the pure Zipf's law. Regarding individual income, however, the exponent of the Zipf distribution is close to -0.5, as predicted in theory and also verified empirically by Zipf. We briefly discuss the discovery of Zipf's law in different aspects. Some empirical investigations are also provided as examples in this paper. The results verify again that, in different countries, the distribution of city size by population is governed by Zipf's law with slope close to -1, and the slope of the Zipf distribution of individual income is about -0.5, the same as Zipf predicted in theory.
Incretins in patients with rheumatoid arthritis Background The precise mechanism linking systemic inflammation with insulin resistance (IR) in rheumatoid arthritis (RA) remains elusive. In the present study, we determined whether the incretin-insulin axis and incretin effect are disrupted in patients with RA and if they are related to the IR found in these patients. Methods We conducted a cross-sectional study that encompassed 361 subjects without diabetes, 151 patients with RA, and 210 sex-matched control subjects. Insulin, C-peptide, glucagon-like peptide-1 (GLP-1), gastric inhibitory polypeptide (GIP), dipeptidyl peptidase 4 (DPP-4) soluble form, and IR indexes by homeostatic model assessment (HOMA2) were assessed. A multivariable analysis adjusted for IR-related factors was performed. Additionally, ten patients and ten control subjects underwent a 566-kcal meal test so that we could further study the postprandial differences of these molecules between patients and control subjects. Results Insulin, C-peptide, and HOMA2-IR indexes were higher in patients than in control subjects. This was also the case for GLP-1 (0.49 ± 1.28 vs. 0.71 ± 0.22 ng/ml, p = 0.000) and GIP (0.37 ± 0.40 vs. 1.78 ± 0.51 ng/ml, p = 0.000). These differences remained significant after multivariable adjustment including glucocorticoid intake. Disease Activity Score in 28 joints with erythrocyte sedimentation rate (β coefficient 46, 95% CI 6–87, p = 0.026) and Clinical Disease Activity Index (β coefficient 7.74, 95% CI 1.29–14.20, p = 0.019) were associated with DPP-4 serum levels. GLP-1 positively correlated with β-cell function (HOMA2 of β-cell production calculated with C-peptide) in patients but not in control subjects (interaction p = 0.003). The meal test in patients with RA revealed a higher total and late response AUC for glucose response, a later maximal response of C-peptide, and a flatter curve in GIP response. Conclusions The incretin-insulin axis, both during fasting and postprandial, is impaired in patients with RA. Electronic supplementary material The online version of this article (doi:10.1186/s13075-017-1431-9) contains supplementary material, which is available to authorized users. Background The concept that oral nutrient (glucose) administration promotes a much greater degree of insulin secretion than a parenteral isoglycemic glucose infusion underlies the incretin effect, namely the existence of gut-derived factors that enhance glucose-stimulated insulin secretion from the islet β cell. This phenomenon is estimated to account for approximately 50-70% of the total insulin secreted following oral glucose administration. To date, gastric inhibitory polypeptide (GIP) [1] and glucagonlike peptide-1 (GLP-1) [2] fulfill the definition of an incretin hormone in humans. Furthermore, several studies have shown that these two peptides potentiate glucose-stimulated insulin secretion in an additive manner, likely contributing equally to the incretin effect and together fully accounting for most of the incretin effect in humans. GIP and GLP-1 are degraded by dipeptidyl peptidase 4 (DPP-4) [3], which is a membrane-associated peptidase widely distributed throughout numerous tissues. DPP-4 also exists as a soluble circulating form in plasma, and significant DPP-4-like activity is detectable in plasma from humans. Several studies have confirmed that DPP-4-mediated inactivation of these peptides is a critical control mechanism for regulating the biological activity of both GIP and GLP-1 in vivo in humans [4]. 
This arc of discovery has led to newly approved antidiabetic therapies during the last decade: GLP1 analogues (exenatide, liraglutide) and DPP-4 inhibitors (saxagliptin, sitagliptin, vildagliptin). Additionally, there has been considerable interest in determining whether insulin resistance (IR) and diabetes are associated with one or more defects in this incretin axis, as well as whether these defects contribute to the development of type 2 diabetes or arise as a consequence of hyperglycemia or other metabolic manifestations of diabetes itself. Several studies have shown an increased prevalence of IR in patients with rheumatoid arthritis (RA) [5][6][7], a finding potentially associated with the degree of RA disease activity [8]. It is thought that low-grade inflammation may contribute to its development [9]. This is supported by the fact that IR in patients with RA has been found to directly correlate with levels of interleukin 6, tumor necrosis factor (TNF)-α, and C-reactive protein (CRP) [10]. In addition, anti-TNF-α therapy has been shown to improve insulin sensitivity and reduce IR in RA [11]. However, the precise mechanism linking systemic inflammation with IR in RA remains elusive. In the present study, we sought to determine whether the incretin-insulin axis and incretin effect are impaired in patients with RA, as well as if they are related to the IR found in these patients. Study participants We conducted a cross-sectional study that included 361 nondiabetic individuals. Of these, 151 were nondiabetic patients with RA and 210 were sex-matched control subjects. All patients with RA were aged 18 years or older and fulfilled the 2010 American College of Rheumatology/European League Against Rheumatism classification criteria for RA [12]. They had been diagnosed by rheumatologists and were periodically followed at rheumatology outpatient clinics. For the purpose of inclusion in the present study, RA disease duration was required to be ≥ 1 year. Although anti-TNF-α treatment has been associated with changes in IR [5,[13][14][15], patients with RA undergoing TNF-α antagonist therapy were not excluded in the present study. The control group consisted of patients recruited from the Spanish Camargo Cohort [16,17]. This cohort was set up between February 2006 and February 2011, and individuals included in this cohort have been followed ever since. The aim of using this cohort was to evaluate the prevalence and incidence of metabolic bone diseases and mineral metabolism disorders. Control subjects included in the present study were subjects without diabetes. Patients and control subjects with diabetes mellitus were not included in the study. Therefore, none of the patients or control subjects were receiving glucoselowering drugs or insulin therapy. All patients and control subjects had a glycemia < 7 mmol/L. Patients and control subjects were excluded if they had a history of cardiovascular events that included myocardial infarction, angina, stroke, or peripheral arteriopathy; a glomerular filtration rate < 60 ml/minute/1.73 m 2 ; history of cancer; or any other chronic disease or evidence of infection. None of the control subjects were receiving glucocorticoid treatment. However, because prednisone is often used in the management of RA, patients taking this drug or an equivalent dose ≤ 10 mg/day were not excluded. 
The study protocol was approved by the institutional review committee at Hospital Universitario de Canarias and Hospital Universitario Marqués de Valdecilla (both in Spain), and all subjects provided written informed consent. Data collection Surveys of patients with RA and control subjects were performed in the same way. Subjects completed a cardiovascular risk factor and medication use questionnaire and underwent a physical examination to determine their anthropometrics and blood pressure. Medical records were reviewed to ascertain specific diagnoses and medications. Waist circumference was measured at the smallest circumference point between the rib cage and the iliac crest while the subject was in a standing position. The hip circumference was measured at the widest circumference point between the waist and thighs. The waist-to-hip ratio also was estimated. Hypertension was defined as a systolic or diastolic blood pressure higher than 140 or 90 mmHg, respectively. Dyslipidemia was defined as one of the following metrics being present: total cholesterol > 200 mg/dl, triglyceride > 150 mg/dl, high-density lipoprotein (HDL) cholesterol < 40 mg/dl in men or < 50 mg/dl in women, or low-density lipoprotein (LDL) cholesterol > 130 mg/dl. In patients with RA, disease activity was measured using the Disease Activity Score in 28 joints (DAS28) [18], and disease disability was determined using the Health Assessment Questionnaire [19]. Clinical Disease Activity Index (CDAI) [20] and Simplified Disease Activity Index (SDAI) [21] scores for RA disease activity were obtained as previously described. Assessments The homeostatic model assessment (HOMA) method was performed to determine IR; specifically, in this study, we used HOMA2: the updated computer HOMA model [22,23]. Briefly, this method consists of a structural computer model of the glucose-insulin feedback system in a homeostatic (overnight-fasted) state. The model is composed of a number of nonlinear empirical equations (and precludes an exact algebraic solution) that describe the functions of organs and tissues involved in glucose regulation. This model can be used to determine insulin sensitivity (%S) and β-cell function (%B) from paired fasting plasma glucose and specific insulin or from C-peptide concentrations across a range of 1-2200 pmol/L for insulin and 1-25 mmol/L for glucose. In our study, we used C-peptide to calculate β-cell function because the former is a marker of secretion. In addition, we used insulin data to calculate %S (because HOMA-%S is derived from glucose disposal as a function of insulin concentration). This computer model provides an insulin sensitivity value expressed as HOMA2-%S (where 100% is normal). HOMA2-IR (IR index) is simply the reciprocal of %S. Insulin (ARCHITECT i2000; Abbott Diagnostics, Abbott Park, IL, USA) and C-peptide (IMMULITE 2000; Siemens Healthcare, Erlangen, Germany) were determined using chemiluminescent immunometric assays. GLP-1 and GIP were assessed using an enzyme-linked immunosorbent assay (ELISA) (Phoenix Pharmaceuticals, Burlingame, CA, USA). The assay sensitivity (minimum detectable concentration) was 0.11 ng/ml for GLP-1 and 0.47 ng/ml for GIP. These two ELISAs do not cross-react with human insulin, and the presence of insulin in serum does not interfere with the assay results. The kits also have no cross-reactivity with the major species of proinsulin metabolites. The GLP-1 assay does not have cross-reactivity with human GLP-2 or human glucagon. 
Similarly, the GIP kit does not cross-react with human amylin. Precision was estimated for GLP-1 as 3.79-3.85% interassay and 3.81% intra-assay, and for GIP as 3.7-5.06% interassay and 4.40% intra-assay. Serum levels of soluble CD26/DPP-4 were measured through ELISA (R&D Systems, Inc., Minneapolis, MN, USA). The intra-assay and interassay coefficients of variation were 4.2% and 8.1%, respectively. Standard techniques were used to measure plasma glucose, CRP, the Westergren erythrocyte sedimentation rate (ESR), and serum lipids. Blood collected from all the participants by means of venipuncture was stored at 4°C for < 4 h and then centrifuged, and subsequently serum/plasma was removed and stored at −80°C. Meal test Ten nondiabetic patients with RA (mean age 45 ± 8 years, body mass index [BMI] 22.6 ± 4.1 kg/m²) and ten control subjects (mean age 46 ± 10 years, BMI 26.6 ± 5.2 kg/m²) were tested for postprandial levels of glucose, insulin, C-peptide, GIP, and GLP-1 after a meal test. For the purpose of this study, both patients and control subjects were required to have a BMI < 30 kg/m². In addition, patients with RA were selected if disease activity was not considered to be in remission (DAS28 ≥ 2.6). In order to avoid the confounding effect glucocorticoids could have, none of the patients with RA were on glucocorticoid therapy. Additional file 1: Table S1 describes the demographic and disease-related characteristics of these 20 subjects in whom the meal test was performed. The test meal consisted of 50 g of white bread, 50 g of black bread, 10 g of butter, 40 g of cheese, 20 g of jam, and 200 ml of milk (34% fat, 47% carbohydrate, and 19% protein), comprising a total of 566 kcal (2370 kJ), and the meal was consumed within 15 minutes. Venous blood was drawn 10 minutes before and 30, 60, 90, 120, 150, 180, 210, and 240 minutes after ingestion of the meal. Blood samples were placed in tubes that were immediately cooled on ice and centrifuged within 20 minutes at 4°C, and plasma was stored at −20°C until prompt analysis. Statistical analysis Demographic and clinical characteristics were compared between patients with RA and control subjects using the chi-square test for categorical variables or Student's t test for continuous variables (with data described as mean ± SD). For noncontinuous variables, either the Mann-Whitney U test was performed or logarithmic transformation was done, and data were expressed as median and IQR. Binary variables included in Additional file 1: Table S1 were compared using Fisher's exact test. Differences in glucose homeostasis metabolism molecules and HOMA indexes were studied using three different linear multivariable regression models: a univariate unadjusted model; a second model adjusted for those variables with a p value < 0.20 in the differences between patients and control subjects (age, sex, waist circumference, dyslipidemia, statins, antihypertensive treatment, and CRP and cholesterol levels); and a third model with the same variables, though with the addition of glucocorticoid intake as a binary variable. The association of incretins and DPP-4 with HOMA2 indexes was assessed with multivariable regression analysis performed with the predictive data of the adjusted model (for age, sex, waist circumference, dyslipidemia, statins, antihypertensive treatment, CRP and cholesterol levels, and glucocorticoid intake).
Differences between control subjects and patients in the β coefficients of the relationship between incretins (independent variable) and HOMA2 IR indexes (dependent variable) were assessed by adding incretins × RA as an interaction factor into the linear regression models. For the meal test, early response from baseline to minute 60, late response from minute 60 to minute 240, maximum response (expressed in the molecule units as median and IQR) and minutes to maximum response were defined. Differences between AUC in the meal test were calculated using the DeLong method [24]. For all analyses, we used a 5% two-sided significance level, and all analyses were performed using IBM SPSS Statistics version 21 software (IBM, Armonk, NY, USA) and Stata version 13/SE software (StataCorp, College Station, TX, USA). A p value < 0.05 was considered statistically significant. Results Demographic, analytical, and disease-related data A total of 361 nondiabetic participants comprising 151 patients with RA and 210 control subjects, with mean ± SD ages of 53 ± 11 years and 58 ± 9 years (p = 0.00), respectively, were included in this study. The demographic and disease-related characteristics of the participants are shown in Table 1. There were no differences between patients and control subjects regarding BMI, although waist circumference was found to be higher in patients than in control subjects (92 ± 14 vs. 96 ± 13 cm, p = 0.015). The frequency of hypertension was not different between patients and control subjects. This was not the case for the lipid profile. In this regard, patients with RA had lower levels of total cholesterol (206 ± 36 vs. 219 ± 37 mg/dl, p = 0.000), LDL cholesterol (121 ± 31 vs. 135 ± 34 mg/dl, p = 0.000), HDL cholesterol (56 ± 15 vs. 63 ± 18 mg/dl, p = 0.000), and apolipoprotein A1 (170 ± 28 vs. 191 ± 35 mg/dl, p = 0.000). In contrast, triglycerides, lipoprotein A, and apolipoprotein B were found to be higher in patients with RA. As expected, the assessment of ESR and CRP values revealed significantly higher levels in patients with RA. Patients from our series had moderately active disease as shown by DAS28 (3.7 ± 1.2), and 50 (38%) of them were on prednisone (median dose 5 [IQR 5-6] mg/day). Disease duration was 6.6 (IQR 3.3-13.9) years, and 59% and 72% were positive for anticitrullinated protein antibodies and rheumatoid factor, respectively. In addition, whereas 85% of the patients were on disease-modifying antirheumatic drugs, 13% were on anti-TNF-α treatment and 23% were receiving biologic therapy. Differences in carbohydrate metabolism molecules, incretins, and insulin resistance indexes between patients with RA and control subjects HOMA2-IR indexes, whether calculated with insulin or C-peptide, were different between patients and control subjects (Table 2). In this sense, HOMA2-%S was lower in patients with RA than in control subjects after adjusting for traditional IR-related factors and prednisone intake (105 ± 53 vs. 108 ± 75, p = 0.006). Similarly, HOMA2-IR was found to be higher in patients with RA than in control subjects after multivariable analysis (1.27 ± 0.82 vs. 1.65 ± 1.69, p = 0.054). In contrast, HOMA2-%B was higher in patients with RA in the univariate analysis. However, the difference was lost after adjustment for covariables (p = 0.14). When HOMA2 indexes were constructed with C-peptide, the differences between patients and control subjects were found to be stronger.
In this regard, all comparisons disclosed higher HOMA2-IR and HOMA2-%B indexes and lower HOMA2-%S in patients with RA even after multivariable analysis (Table 2). Whereas glucose serum levels were not different between control subjects and patients, insulin (9.8 ± 6.5 vs. 13.0 ± 13.4 U/ml, p = 0.007) and C-peptide serum levels (1.53 ± 0.77 vs. 3.37 ± 2.94 ng/ml, p = 0.000) were found to be upregulated in patients with RA. These differences were maintained after multivariable adjustment including glucocorticoid intake. Similarly, GLP-1 (0.49 ± 1.28 vs. 0.71 ± 0.22 ng/ml, p = 0.06 in univariate analysis) and GIP (0.37 ± 0.40 vs. 1.78 ± 0.51 ng/ml, p = 0.000) were higher in patients with RA than in control subjects. These differences were also present after adjusting for factors related to IR and prednisone intake; in the case of GLP-1, the difference reached statistical significance (p = 0.000) after multivariable analysis. In contrast, DPP-4 soluble-form serum levels were found to be significantly lower in patients with RA than in control subjects (811 ± 459 vs. 696 ± 301 ng/ml) in univariate analysis. These differences were out of the range of significance after adjustment (p = 0.15) (Table 2). With regard to RA treatments, neither methotrexate nor anti-TNF-α inhibitors were related to insulin, incretins, DPP-4 serum levels, or IR indexes. In contrast, glucocorticoids were significantly associated with higher levels of insulin, C-peptide, GLP-1, GIP, and HOMA-IR indexes, as well as with lower levels of DPP-4 (Table 3). [Table 1 footnote: abbreviations ACPA (anticitrullinated peptide/protein antibody), Apo (apolipoprotein), BMI, CDAI, CRP, DAS28, DMARD, ESR, HAQ, HDL, LDL, NSAID, SDAI, TNF-α; current prednisone doses pertain to prednisone users only; data are mean ± SD, median (IQR), or count (percent); p < 0.05 depicted in bold.] Relation of incretins and DPP-4 to IR indexes DPP-4 serum levels showed a correlation with IR (HOMA2-IR) and β-cell secretion (HOMA2-%B-C-peptide) in both patients and control subjects after multivariate regression analysis. In both cases, they had a negative and statistically significant correlation with both indexes. A β-coefficient comparison revealed no differences, showing that the relationship of DPP-4 to these indexes, whether in patients or in control subjects, did not differ (Table 4). GIP had a statistically significant association with HOMA2-%B-C-peptide in both patients and control subjects, which did not differ between the two populations (interaction p = 0.29). Otherwise, GIP was found to be related to HOMA2-IR in patients with RA but not in control subjects, although the interaction factor in this case was not significant. Regarding GLP-1, its relationship to HOMA-IR (interaction p = 0.068) and HOMA2-%B-C-peptide (interaction p = 0.003) was different between patients and control subjects.
In fact, although the relationship of GLP-1 to HOMA-IR was found to be negative in control subjects (β coefficient −1.34 [95% CI −2.46 to −0.23], p = 0.018), its correlation with HOMA2-%B-C-peptide was found to be positive and statistically significant in patients with RA (β coefficient 155 [95% CI 105-205], p = 0.000) (Table 4). Meal test Additional file 1: Table S1 shows the characteristics of patients and control subjects who underwent the meal test. Traditional cardiovascular risk factors, use of medications, and laboratory data did not differ between patients and control subjects. Only waist circumference was found to be higher in patients with RA (75.3 ± 11.8 vs. 95.3 ± 10.0 cm, p = 0.028). Fasting GIP serum levels were also higher in patients with RA than in control subjects (0.93 ± 0.14 vs. 1.14 ± 0.18 ng/ml, p = 0.029). The AUC of glucose response after the meal test was found to be higher in patients with RA compared with control subjects (691 ± 78 vs. 843 ± 114, p = 0.006). Late-response AUC in glucose response was also higher in patients with RA, although statistical significance was not reached (p = 0.054). Similarly, the time until maximal C-peptide response was longer in patients with RA compared with control subjects (30 vs. 75 [60-105] minutes, p = 0.029). Although an insulin AUC comparison between patients and control subjects showed no differences, on visual inspection the AUC was higher and the response slower in patients with RA than in control subjects. Moreover, the GIP response had a flatter curve in patients with RA, although statistical significance was not reached in this case (Table 5 and Fig. 1). Discussion The present study shows, for the first time to our knowledge, the expression of incretins in patients with RA. According to our findings, incretins and DPP-4 differ between patients with RA and control subjects. This is related to disease activity and glucocorticoid intake. We also observed that insulin, C-peptide levels, and HOMA-IR indexes are independently elevated in patients with RA compared with control subjects. We have previously demonstrated that β-cell function is impaired in patients with RA because of the elevation in serum levels of split and intact forms of proinsulin [7]. In keeping with former studies [8,25,26], our data confirmed the association between IR and RA. In this context, Chung et al. studied IR in 104 patients with RA and 124 patients with systemic lupus erythematosus (SLE). The former had a higher IR index than those with SLE, and IR showed a positive correlation with the levels of the proinflammatory cytokines interleukin-6, TNF-α, and CRP [27]. Severe IR has also been found to be present in patients with early untreated RA [28]. Type 2 diabetes mellitus was thought to be characterized by a severely impaired or absent GIP insulinotropic effect that most likely resulted in worsening insulin secretion. However, current analyses have revealed that type 2 diabetes seems unlikely to result from deficient incretin secretion [29]. On the basis of results obtained during the course of both oral glucose tolerance testing and meal testing, GIP secretion and fasting levels actually seem to increase in both the impaired and diabetic state [30]. In our study, fasting incretin serum levels were higher in patients with RA than in control subjects. This increase in incretin levels was in keeping with the upregulation of insulin and C-peptide. In contrast, DPP-4 was found to be downregulated in patients with RA.
We believe that DPP-4 downregulation is consistent with the increase in incretins, given the accepted inverse relationship between incretins and DPP-4. Interestingly, DPP-4 was also found to be positively related to disease activity through DAS28-ESR and CDAI scores. Previous reports have shown decreased enzymatic activity and low DPP-4 serum levels in patients with RA compared with those of healthy control subjects [31,32]. However, an increase in the number of peripheral T lymphocytes expressing DPP-4 has been reported in patients with active RA [33]. This apparently contradictory result may explain the positive correlation of DPP-4 with disease activity observed in our patients with RA. Of note, a recent study involving 50 patients with RA revealed a decrease in DPP-4 serum activity but not in DPP-4/CD26 expression [34]. In another study involving 27 patients with RA, there was also an elevation in blood plasma DPP-4 but a decrease of DPP-4/CD26 in peripheral blood mononuclear cells after clinical improvement following treatment [35]. Taking these observations into account, we feel that the number of peripheral T lymphocytes expressing DPP-4/CD26 is higher in the blood of patients with active RA. In contrast, however, the enzymatic activity and serum levels of DPP-4/CD26 may be lower in the sera of patients with RA than in those of healthy control subjects. Interestingly, prednisone intake was associated with higher levels of incretins and lower levels of DPP-4. To the best of our knowledge, there are no studies focused on the effects of glucocorticoids on incretins or DPP-4 in chronic diseases. We believe that the mechanism by which glucocorticoids impair incretins may be similar to that underlying the increases in insulin and C-peptide: the glucose homeostasis disruption and IR state that they induce probably lead to a secondary and compensatory elevation in incretins, as occurs with insulin and C-peptide. We also assessed whether the relationships of incretins and DPP-4 with insulin resistance (HOMA2-IR) and β-cell function (HOMA2-%B-C-peptide) in patients with RA differed from those in control subjects. Interestingly, we observed a different relationship of HOMA indexes with GLP-1 but not with GIP or DPP-4. It is known that GIP does not modulate glucose-dependent insulin secretion in type 2 diabetes, even at supraphysiological (pharmacological) plasma levels. Therefore, GIP incompetence is detrimental to β-cell function, especially after eating. GLP-1 remains insulinotropic in type 2 diabetes, and this fact has led to the development of compounds that activate the GLP-1 receptor with a view to improving insulin secretion [30]. In our study, we found that GLP-1 was negatively related to HOMA-IR in control subjects but not in patients with RA. In contrast, GLP-1 was positively related to β-cell function (HOMA2-%B-C-peptide) in patients with RA but not in control subjects. We do not have an explanation for this finding. We believe that although the association of GLP-1 with IR can be lost in RA, it could still remain an insulinotropic agent in terms of enhancing β-cell secretion in these patients. We also focused on studying incretins in a nonfasting (postprandial) state. To this end, we performed a meal test in nonobese control subjects and patients who were not taking glucocorticoids and who had moderate or high disease activity.
Reports regarding meal tests in other populations, such as subjects with diabetes, used similar numbers of individuals to those of our study [36], because the meal test is technically complex and requires a well-trained team. We feel that our results may indicate that the meal test is different in patients with RA when compared with control subjects and that the expression of incretins after this meal test is altered in patients with RA. In this regard, in patients with RA, glucose, insulin, and GLP-1 curves were abnormal compared with those of control subjects. With respect to the trends observed in the meal test, we think that the assessment of a larger series of patients and control subjects could have led to stronger results in terms of statistical significance. (Fig. 1: meal test curves of glucose, insulin, C-peptide, gastric inhibitory polypeptide, and glucagon-like peptide-1 concentrations in patients with rheumatoid arthritis and control subjects.) Nevertheless, to the best of our knowledge, such findings regarding the meal test in patients with RA have not previously been reported in the literature. Conclusions Our study, which, to our knowledge, constitutes the first assessment of the incretin-insulin axis and the incretin effect in RA, shows that these metabolic hormones are impaired in patients with RA. The presence of this impairment reinforces the concept that the disease itself, probably by means of the effect of inflammation, leads to an IR state. Our results demonstrate the existence of a mechanism linking inflammation with IR that warrants further studies.
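As an illustrative aside on the two quantitative tools used above: the HOMA2 computer model employed in the study has no closed-form solution, but the classic HOMA1 formulas (Matthews et al.) convey what such indexes capture, and the early/late meal-test responses reduce to trapezoidal areas over the stated sampling grid. The sketch below is ours, with hypothetical glucose readings for illustration only; the study itself used HOMA2 and the DeLong method for AUC comparisons.

```python
import numpy as np

def homa1_ir(glucose_mmol_l, insulin_uu_ml):
    """Classic HOMA1 insulin-resistance index; the HOMA2 computer model
    used in the study has no such closed form."""
    return glucose_mmol_l * insulin_uu_ml / 22.5

def homa1_b(glucose_mmol_l, insulin_uu_ml):
    """Classic HOMA1 beta-cell function estimate (%B)."""
    return 20.0 * insulin_uu_ml / (glucose_mmol_l - 3.5)

def auc(t, y):
    """Trapezoidal area under a response curve sampled at times t."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

# Sampling grid of the meal test (minutes; -10 min is the baseline draw).
t = np.array([-10, 30, 60, 90, 120, 150, 180, 210, 240], dtype=float)
# Hypothetical postprandial glucose readings (mmol/L), illustration only.
glucose = np.array([5.1, 7.9, 8.4, 7.6, 6.9, 6.3, 5.9, 5.6, 5.4])

early = t <= 60          # "early response": baseline to minute 60
late = t >= 60           # "late response": minute 60 to minute 240
print(f"HOMA1-IR at baseline: {homa1_ir(5.1, 9.8):.2f}")
print(f"AUC total/early/late: {auc(t, glucose):.0f} / "
      f"{auc(t[early], glucose[early]):.0f} / {auc(t[late], glucose[late]):.0f}")
print(f"Minutes to maximal response: {t[np.argmax(glucose)]:.0f}")
```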
How to Suppress Dark States in Quantum Networks and Bio-Engineered Structures Transport across quantum networks underlies many problems, from state transfer on a spin network to energy transport in photosynthetic complexes. However, networks can contain dark subspaces that block transport, and various methods used to enhance transfer on quantum networks can be viewed as equivalently avoiding, modifying, or destroying the dark subspace. Here, we exploit graph theoretical tools to identify the dark subspaces and show that asymptotically almost surely they do not exist for large networks, while for small ones they can be suppressed by properly perturbing the coupling rates between the network nodes. More specifically, we apply these results to describe the recently experimentally observed and robust transport behaviour of the electronic excitation travelling on a genetically-engineered light-harvesting cylinder (M13 virus) structure. We believe that these mainly topological tools may allow us to better infer which network structures and dynamics are more favourable for enhancing the transfer of energy and information towards novel quantum technologies. I. INTRODUCTION Understanding the mechanisms of optimal transport of various quantities, such as energy or information, across some underlying topology is fundamental to many problems in physics and beyond (see, for instance, Refs. [1][2][3] and references therein). Networks can be used to model quantum channels: for example, states can be transferred along spin chains [4,5]. In these studies, the aim is perfect state transfer, and there tends to be a fixed Hamiltonian that drives the transfer. Controllability of networks asks what kind of possibly time-dependent interactions (which then affect the connectivity structure of the network) will enable any state to be transferred [6]. More recently, quantum network theory has also been applied to model how energy is transferred through biological photosynthetic complexes [7][8][9][10][11][12][13][14] and over more abstract complex networks [15][16][17]. There are numerous factors that need to be considered in order to achieve optimal transport: the dynamics of the network and the approximations used, the initial preparation and its coherence, the location of the target node, site energies, static disorder, noise, dissipation, etc. In this context, optimality refers to several transport features, such as absence of losses, short transfer time, and robustness (regardless of sudden changes of working conditions). One hindrance to optimal transport is represented by the presence of dark or invariant subspaces/states [7]. Inspired by the similar use of the term "dark states" in quantum optics [18] and condensed matter physics [19,20], Ref. [7] defines them as Hamiltonian eigenstates that have no overlap with the "target" node on the network. They hence act as a trap on the network, blocking transport. Transport efficiency can then be increased by avoiding the dark subspace, by applying certain techniques to nudge states out of the dark subspace, or by destroying the subspace [21][22][23][24]. Here, we will discuss these different methods to enhance quantum transport by means of graph theoretical tools, and apply them to describe the energy transport behaviour that has been recently experimentally observed for a bio-engineered light-harvesting complex realized on a cylinder (M13 virus) structure [25]. This paper is structured as follows. In Sec.
II, we formally introduce the network, its dynamics and the corresponding dark subspace. Sec. III reviews methods that are used to enhance (energy) transfer on quantum networks through the lens of dark states: initialisation outside of the dark subspace, using control fields, and coupling with the environment, thus introducing noise and disorder. In Sec. IV, we employ graph theoretical results in order to find two results on the dark subspaces on graphs: that there exist dynamics having no associated dark subspace, and that very large graphs asymptotically almost surely have no dark subspace. In Sec. V we describe some applications of these studies to light-harvesting complexes. Finally, in Sec. VI we illustrate the results numerically by changing the underlying topology of a particular system inspired by a recent experiment with genetically-engineered light-harvesting structures [25]. We also highlight the importance of dephasing noise to enhance the transmission efficiency. Some conclusions are drawn in Sec. VII. II. QUANTUM NETWORK A quantum network consists of an underlying graph, on which the dynamics is described via quantum mechanics, as opposed to the usual transition matrices or hopping dynamics of classical networks [26]. A graph, G = (V, E), consists of a set of vertices or nodes V(G) and a set of edges E(G). Let N = |V(G)| be the number of nodes on the graph. The graph can be described by its adjacency matrix A(G), defined as

[A(G)]_{ij} = α_{ij} if {i, j} ∈ E(G), and 0 otherwise,

where i, j ∈ V are nodes of the network, and α_{ij} are the weights of the edges. We consider the edges to be undirected and without loops, unless specified otherwise. The coherent dynamics is described by the Hamiltonian

H_0 = Σ_i ω_i σ_i^+ σ_i^- + Σ_{i≠j} α_{ij} σ_i^+ σ_j^-,

where σ_i^+ and σ_i^- are the raising and lowering operators at node i respectively, ω_i is the local site energy, and [A(G)]_{ij} = α_{ij} determines the hopping rate (interaction) between joined nodes i and j. In the following we will consider the single-excitation approximation, as often used for light-harvesting complexes and for quantum state and information transfer [7,27,28]. Hence, the state |i⟩ denotes the presence of one excitation in node i, i.e. σ_i^+ = |i⟩⟨0|, etc. The exit or target node can be thought of as the location from which a decay process irreversibly transfers the excitation to a sink, labelled as N + 1. If the target node is node N, then this decay can be formally described by the addition of the Lindblad superoperator

L_sink(ρ) = Γ_{N+1} [2 σ_{N+1}^+ σ_N^- ρ σ_N^+ σ_{N+1}^- − {σ_N^+ σ_{N+1}^- σ_{N+1}^+ σ_N^-, ρ}],

where ρ describes the state of the network, Γ_{N+1} is the decay rate to the sink, and {A, B} = AB + BA is the anti-commutator. The transmission efficiency is given by the probability of population transfer to the final node:

p_sink(t) = 2 Γ_{N+1} ∫_0^t ρ_{NN}(t′) dt′.

Formally, the transfer efficiency represents the probability for the electronic excitation to be transferred to the sink, while 1 − p_sink(t) corresponds to the energy trapped in the network. Now, we consider the following definition of dark subspace [7]: Definition 1 Consider a graph G with Hamiltonian dynamics H_0 and target node N, corresponding to the state |N⟩ = (0, 0, . . . , 0, 1) in the site basis. The dark subspace is the vector space spanned by the eigenvectors of H_0 that are orthogonal to |N⟩. In order to determine the dark states, it is necessary to know the spectrum of the Hamiltonian and the position of the exit node. The term "dark state" in this context was first used by Ref. [7], who called the dark subspace the "invariant subspace", since it is invariant under the dynamics described above.
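Definition 1 translates directly into a few lines of linear algebra. The sketch below is our illustration (not code from Ref. [7]): for each eigenspace of a site-basis Hamiltonian, the part orthogonal to the target node has dimension k or k − 1, where k is the degeneracy, depending on whether the target's projection onto that eigenspace vanishes.

```python
import numpy as np

def dark_dim(H, target, tol=1e-8):
    """Dimension of the dark subspace of the site-basis Hamiltonian H,
    relative to the given target node."""
    vals, vecs = np.linalg.eigh(H)
    dim = 0
    for v in np.unique(np.round(vals, 8)):      # group degenerate eigenvalues
        P = vecs[:, np.abs(vals - v) < tol]     # orthonormal eigenspace basis
        k = P.shape[1]
        overlap = np.linalg.norm(P[target, :])  # |projection of the target|
        dim += k - 1 if overlap > tol else k
    return dim

# Fully connected network of N = 6 nodes, uniform couplings, target = node N.
N = 6
A = np.ones((N, N)) - np.eye(N)
print(dark_dim(A, target=N - 1))   # -> N - 2 = 4, as stated in Sec. II A below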
We can also define the corresponding light subspace as being spanned by all the eigenvectors of H_0 that are not orthogonal to the target node |N⟩. In this last set of eigenvectors it is possible to identify a particular subset made of vectors whose scalar product with the target node is bounded by a very small positive quantity ε. We can define these vectors as quasi-dark states because they are quasi-orthogonal to |N⟩; hence they are not able to trap the excitation as dark states do, but they can cause transport to be very slow. Here, we introduce a new quantity, named darkness strength, (ε), which enables us to quantify the capacity of an eigenvector to trap the excitation inside its eigenspace: it is zero for dark states and very close to zero for quasi-dark states. In the case of noisy quantum dynamics, that is, when the network is coupled to some environment, there can also be dissipative and dephasing processes. They can be described by the following Lindblad superoperators,

L_diss(ρ) = Σ_j Γ_j [2 σ_j^- ρ σ_j^+ − {σ_j^+ σ_j^-, ρ}],
L_deph(ρ) = Σ_j γ_j [2 σ_j^+ σ_j^- ρ σ_j^+ σ_j^- − {σ_j^+ σ_j^-, ρ}],

where Γ_j and γ_j are dissipation and dephasing rates for node j, respectively. The total evolution of the state of the network is then

ρ̇ = L(ρ) = −i[H_0, ρ] + L_sink(ρ) + L_diss(ρ) + L_deph(ρ),

where L is the Lindblad superoperator that describes the coherent and incoherent parts of the system evolution. A. Examples of dark subspaces In the homogeneous case of equal local energies and uniform coupling rates, the Hamiltonian H_0 in the first excitation subspace is the adjacency matrix of the underlying network. Thus, the dark subspaces of the Hamiltonian are the eigenspaces of the network that are orthogonal to (0, . . . , 0, 1). Non-degenerate eigenvalues with eigenvectors of the form (. . . , 0) lead to one-dimensional dark subspaces, while eigenvalues with degeneracy k are related to dark subspaces of dimension at least k − 1, depending on whether or not the eigenspace is entirely orthogonal to the target node; see the Appendix of Ref. [7] for how to find them. We can consider the more general question of whether a network has any potential dark subspaces, i.e., whether it has any eigenvectors with zero entries in the site basis. Clearly, networks with degenerate eigenvalues will automatically have dark subspaces relative to any node of the network. In terms of substructures, it has been found that 0 and −1 eigenvalues are related to stars and cliques on the network [29][30][31], suggesting that graphs with many stars or cliques will have degenerate 0 or −1 eigenvalues, respectively. Now, we look at some examples of dark subspaces on paths, lattice graphs and complete graphs [7,15]. However, by exploiting the knowledge of the eigenspectrum of numerous other classes of graphs [32], our statements about the corresponding dark subspaces can be generalized to other complex networks. • Path and Lattice Graphs: State transfer on spin chains and spin networks has been studied in the literature (e.g., [1,4]), and these are among the fundamental models in physics. Underlying spin chains with nearest-neighbour coupling are path graphs. The eigenvalues of path graphs are all non-degenerate, λ_k = 2 cos(πk/(N + 1)) for k = 1, . . . , N. The corresponding unnormalised eigenvectors x_k have components (x_k)_m = sin(πmk/(N + 1)), and zeros emerge at "symmetry points" that split the path graph into equal parts [32]. Thus, if our target node is at any one of these zeros, then there is a dark subspace.
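The vanishing pattern on path graphs can be checked directly. The snippet below is an illustration under the uniform-coupling assumption: for a 5-node chain it prints the component of each eigenvector on the middle node, which is the symmetry point of the chain.

```python
import numpy as np

# Path graph P_5 with uniform couplings: tridiagonal adjacency matrix.
N = 5
A = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
vals, vecs = np.linalg.eigh(A)

middle = N // 2   # node 3, the symmetry point of the chain
for k in range(N):
    print(f"eigenvalue {vals[k]:+.3f}, component on middle node: "
          f"{vecs[middle, k]:+.3f}")
# Two eigenvectors vanish on node 3 (those with sin(pi*m*k/(N+1)) = 0 there),
# so a target placed at the middle node sees a two-dimensional dark subspace,
# while a target at an end node sees no dark states at all.
```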
However, in typical state transfer on spin chains, the target node is the end node, where there is never a zero: hence perfect state transfer is clearly possible because there is no relevant dark subspace. Larger lattice graphs also have dark nodes at symmetry points of the network [15,33]. • Complete Graphs: A fully connected network (FCN), or complete graph, of N nodes is defined as a network where there is a link between any pair of nodes. There is one eigenstate |φ⟩ = (1/√N) Σ_{j=1}^N |j⟩ with eigenvalue λ_1 = N − 1, and a degenerate eigenspace of dimension N − 1 with eigenvalue λ_2 = . . . = λ_N = −1 [32], whose basis can be chosen as |ψ_j⟩ = |1⟩ − |j⟩ for j = 2, . . . , N. The dark subspace is spanned by {|ψ_j⟩ : j = 2, . . . , N − 1}, which has dimension N − 2. If the initial state is localised on a single node, then it is unavoidable that a component of it will lie in the dark subspace [7]. III. HOW TO ENHANCE TRANSFER In this section, we review several tools that can be exploited in order to increase the network transfer efficiency. One could either choose specific initial states, as in Subsec. III A, or use control fields to time-dependently change the effective Hamiltonian dynamics, as in Subsec. III B. Subsec. III C considers the case where disorder and dephasing are applied to the system dynamics. A. Smart Initialisation The evolution of the eigenstates in the dark subspace is coherent and stationary (up to a phase), hence it will never lead to a state with a non-vanishing component on site N, i.e. the excitation never reaches the exit node N. Indeed, the evolution of the dark subspace as a whole is also invariant. If the initial state on the network has any non-zero component in the dark subspace, that component remains within the dark subspace and is thus forever trapped on the network. Only the components in the corresponding light subspace will transfer to the exit. By initialising completely outside the dark subspace, i.e. with an initial state that is orthogonal to the dark subspace, full transfer of the energy can occur in the limit of time t → ∞. This line of attack is pursued by [7,34], who consider small networks with three nodes known as trimers, shown in Fig. 1. Trimers have one dark state that causes excitations to get trapped [7,20,[35][36][37][38]. In fact, one can consider the following Hamiltonian (in the first excitation subspace),

H = J (σ_1^+ σ_3^- + σ_2^+ σ_3^- + h.c.),

with the target node being |3⟩ = (0, 0, 1). Hence, the dark state is |D⟩ = (|1⟩ − |2⟩)/√2, and the other two eigenstates are (1/2)(|1⟩ + |2⟩) ± (1/√2)|3⟩. If the network state is initialised as |1⟩ or |2⟩, or in an incoherent combination of the two, then the state is inevitably partly trapped in the dark state. Conversely, if the initial state is the coherent superposition (|1⟩ + |2⟩)/√2, then perfect transfer occurs. For the in-between initialisation (|1⟩ + e^{iφ}|2⟩)/√2, there is imperfect transfer, with zero transfer when e^{iφ} = −1 (i.e. initialisation in the dark state). There, dephasing in conjunction with smart initialisation (cf. Subsec. III C) is required to suppress the dark state. This holds for more general networks: if the initial state is completely within the light subspace then the asymptotic transport efficiency is unity. However, since eigenstates tend to be delocalized and a generic initial superposition will necessarily have a nonzero component in the dark subspace, other techniques will be exploited later to enhance transport.
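Because every light-subspace component eventually decays into the sink while dark components never do, the asymptotic efficiency for the trimer is simply 1 minus the squared overlap of the initial state with the dark state. A minimal numerical check (our sketch, using the trimer Hamiltonian above with J = 1):

```python
import numpy as np

J = 1.0
H = np.array([[0, 0, J],
              [0, 0, J],
              [J, J, 0]], dtype=float)   # trimer with target node |3>

# Dark state |D> = (|1> - |2>)/sqrt(2); p_sink(inf) = 1 - |<D|psi0>|^2.
D = np.array([1, -1, 0]) / np.sqrt(2)

for label, psi0 in [("|1>",          np.array([1.0, 0.0, 0.0])),
                    ("(|1>+|2>)/√2", np.array([1.0, 1.0, 0.0]) / np.sqrt(2)),
                    ("(|1>-|2>)/√2", D)]:
    p_inf = 1.0 - abs(D @ psi0) ** 2
    print(f"initial state {label:13s} -> p_sink(inf) = {p_inf:.2f}")
# -> 0.50 for |1>, 1.00 for the symmetric superposition, 0.00 for the dark state.
```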
B. Control Fields Applying various control fields on the network during the transfer process can alter the direction of the evolution of the state of the network, and increase transport efficiency by modifying the nature of the dark subspace. Given a controlled system, a state ρ is reachable from state ρ_0 if there is a sequence of control fields (along with any underlying Hamiltonian evolution) that will evolve ρ_0 into ρ in some finite time. A system is controllable (or fully controllable [39]) if any state in the state space is reachable from any other state [6,40]. Formally, if H_0 is the system/network Hamiltonian (that is time-independent), H_m are a set of Hamiltonians that can be applied onto the network, and f_m(t) are the time-varying controls, then the total Hamiltonian under which the system evolves is

H(t) = H_0 + Σ_m f_m(t) H_m.

A system is fully controllable if the Lie algebra rank condition holds: if the Lie algebra generated by iH_0 and the iH_m is isomorphic to the unitary algebra u(N) [39], generating all possible unitaries. Pemberton-Ross et al. [6] find that the more symmetric a network is, the larger the dark subspace tends to be; by adding controls, modifying the Hamiltonian, etc., these symmetries can be broken and some dark states can be accessed. In Ref. [41], symmetry breaking is used to make a controlled quantum thermal switch. When the switch is "off", the central qubits are all in the dark subspace and no energy can be transferred from one side to the other. Ref. [42] breaks time-reversal symmetry to increase transport efficiency. More generally, Refs. [43,44] study how symmetries of the Hamiltonian relate to lack of full controllability, and Ref. [45] finds that the absence of certain symmetries of the Hamiltonians is necessary for full controllability. Control fields could also take the network into a higher excitation subspace. By doing so, Pemberton-Ross et al. [6] define two grades of dark states: weaker dark states that become non-dark through the introduction of extra excitations or energy-preserving control fields; and truly dark states that require permutation symmetry-breaking [46] to be destroyed. As such, the weaker dark states could be used as storage, since they are more protected from decay (from the sink) than the non-dark states, and are more accessible than the truly dark states [6]. The application of control fields is often not desirable, however. A static network that has high transfer efficiency is generally simpler to implement. Since the breaking of symmetry can lead to enhanced transfer, one can instead add randomness or dissipative dynamics to break symmetry and assist transport [7]. C. Disorder And Dephasing For a FCN of size N, Caruso et al. [7] find that the probability of transfer from an initially localised excitation is

p_sink(∞) = 1/(N − 1),

i.e., for large networks the transfer is very small. In fact, such perfectly coherent networks are even worse than classical networks with incoherent hopping, which have complete transfer in the limit t → ∞. The poor transfer can be seen as being due to the large size of the dark subspace, given the network symmetries intrinsic in the complete graph with identical nodes; in fact, it has the largest possible dark subspace of dimension N − 2 for a network of N nodes. By introducing static disorder to D local node energies, the dark subspace reduces in size and the probability increases to

p_sink(∞) = 1/(N − D − 1)

(for an initial excitation localised on one of the unperturbed nodes). Hence, for a FCN with D = N − 2 different (disordered) node energies, p_sink(∞) = 1. Any initial state has no component in any remaining invariant subspace [7].
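The FCN scaling above can be verified numerically by treating the asymptotic efficiency as the light-subspace weight of the initial state, as in Subsec. III A. The sketch below is our illustration: it perturbs the energies of D intermediate nodes (keeping the initial and target nodes unperturbed) and compares the result with 1/(N − D − 1).

```python
import numpy as np

rng = np.random.default_rng(0)

def p_sink_infinity(H, psi0, target, tol=1e-8):
    """Asymptotic transfer efficiency: squared norm of the projection of
    the initial state onto the light subspace relative to the target."""
    vals, vecs = np.linalg.eigh(H)
    p = 0.0
    for v in np.unique(np.round(vals, 8)):
        P = vecs[:, np.abs(vals - v) < tol]      # orthonormal eigenspace basis
        t = P.T @ np.eye(len(psi0))[target]      # target coords in eigenspace
        if np.linalg.norm(t) > tol:              # light direction exists here
            p += abs((t / np.linalg.norm(t)) @ (P.T @ psi0)) ** 2
    return p

N = 8
A = np.ones((N, N)) - np.eye(N)                  # FCN, uniform couplings
psi0 = np.eye(N)[0]                              # excitation starts on node 1
for D in range(N - 1):
    disorder = np.zeros(N)
    disorder[1:D + 1] = rng.uniform(0.5, 1.5, D) # perturb D intermediate nodes
    print(f"D = {D}: p_sink(inf) = {p_sink_infinity(A + np.diag(disorder), "
          f"psi0, N - 1):.4f}, 1/(N-D-1) = {1 / (N - D - 1):.4f}")
```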
Static disorder can also make transfer more robust against dissipation/noise in the weak-dissipation regime [47]. Local dephasing on the network nodes has a very similar effect. If there is local dephasing on all nodes, then the dark subspace can vanish and p_sink(∞) → 1. In the special case of the FCN, the best method to obtain unit transfer efficiency in short times is to apply strong dephasing, which leads to a complete lack of coherence and so to classical dynamics; this is due to the large size of the dark subspace, as discussed before. Instead, other networks need an interplay between quantum coherence and dephasing to destroy the invariant subspace and to obtain the same performance as the FCN in the classical regime [7,15]. These two different behaviours are shown in Fig. 2, for a FCN and for a cylinder, both with N = 32 nodes. Dephasing also leads to line broadening, i.e. another way to view the enhanced transport is through the stronger overlap between excitation lines of the interacting nodes [7]. With the combination of dephasing and static disorder, static disorder is only advantageous when dephasing is weak. When noise (dissipation or dephasing) is too strong, quantum Zeno phenomena occur and the dynamics is frozen [7,15,24,47]: this may be exploited for storage. IV. GRAPH THEOREMS For uniform site energies and coupling rates, i.e. H = A, we can apply two theorems from graph theory which ultimately give the existence of network dynamics for which there are no dark subspaces. The first result is based on the following theorem. Given a real symmetric matrix A = [a_{ij}] of size N, one can always associate a weighted graph G with N nodes and with edges {i, j} that have weights a_{ij} for i ≠ j. Theorem 1 (Monfared and Shader [48]) For a given connected graph G of N vertices, and given a set of distinct values λ_1, λ_2, . . . , λ_N, there exists a real symmetric matrix A whose graph has the same topology as G and whose eigenvalues are λ_1, λ_2, . . . , λ_N, such that none of the eigenvectors of A have a zero entry. By the above theorem, if we have some given underlying connected topology given by graph G, then we can find a set of weightings for the edges (the interactions between the different nodes) such that the corresponding adjacency matrix A(G) of the graph has distinct eigenvalues, and all the corresponding eigenvectors have no zero entry. With such dynamics, there is no dark subspace on the network relative to any target node. Corollary 1 For any given underlying connected graph G, there exists Hamiltonian dynamics on the graph for which there is no dark subspace. Real networks tend to have eigenvalues with higher multiplicities (degeneracy) than comparable randomly generated networks [49]. However, if we are able to change the interactions between the nodes that are joined, using, for example, a combination of a different underlying Hamiltonian, control fields, and disorder and noise, we can eliminate the dark subspace altogether and achieve perfect energy transfer. In addition, our next result ultimately states that we do not even need to consider weighting the edges if the graph in question is sufficiently large. Erdős-Rényi graphs G(N, p) have N nodes, in which any edge between any two nodes has some probability p of being present [50]. These graphs are very likely [51] to be disconnected if p < ln(N)/N, i.e. if the probability of edges is sufficiently low [52].
Note that for p ≠ 0, 1, the set of all G(N, p) graphs is equivalent to the set of all graphs, since any graph will be an instance of an Erdős-Rényi graph. Given this fact, we can use the following theorem to subsequently make a statement about all asymptotically large graphs: Theorem 2 (O'Rourke and Touri [53]) A graph G(N, 1/2) is controllable with probability at least 1 − C N^{−α}, for any α > 0 and some constant C > 0. This theorem was conjectured by Godsil [54] (see also [55]), and proven by O'Rourke and Touri [53]. The notion of controllability is the same as that introduced in Subsection III B, i.e., the graph is controllable if the dynamics (determined by the adjacency matrix, which is equivalent to the Hamiltonian) can evolve any state into any other state on the graph. Stated in another way, Theorem 2 implies that the fraction of graphs that are controllable tends to one as N → ∞. By picking a very large graph at random, it is almost surely controllable, and thus almost surely has no dark states. Corollary 2 A connected graph G of size N, with Hamiltonian dynamics given by the adjacency matrix, asymptotically almost surely has no dark subspace as N → ∞. Hence, almost surely, energy transfer on large graphs will happen perfectly if we allow for time t → ∞, without requiring the addition of further controls or different interaction strengths between the nodes. V. APPLICATION TO LIGHT-HARVESTING Real quantum networks are always subjected to noise. However, environmental interaction can enhance transport through a dissipative network. This is true even in classical mechanics, but via physically different mechanisms (e.g., stochastic resonance [56]). Besides, quantum mechanically, noise can maintain and even generate quantum coherence and entanglement [57][58][59][60][61]. The transport of excitations in light-harvesting complexes has attracted much interest in the last decade. Light-harvesting complexes, or antenna systems, are networks composed of chromophores absorbing photons and transporting the created electronic excitations to the reaction centre (the target node). In particular, in the simplest light-harvesting complex, known as the Fenna-Matthews-Olson (FMO) complex, found in green sulphur bacteria, experimental evidence strongly suggests that quantum coherence features play a crucial role during the energy transport process [8][9][10]. Theoretical studies show that the additional presence of dephasing noise is needed to describe the observed transport efficiency of almost 100% [7, 12-14, 62, 63]. A more recent example of experimental evidence where it is possible to obtain optimal transport by combining quantum coherence and noise is described in Ref. [25]. In particular, a light-harvesting antenna system has been realized with a biological material, the M13 virus, and a chromophore network has been created on its filaments. Two versions of this system have been genetically engineered: one with a network made of weakly coupled chromophores, and the other with reduced interchromophoric distance, producing clusters of strongly coupled chromophores. In this second version, involving coherent and incoherent features, a remarkable improvement of both transport speed and diffusion length of the electronic excitation has been observed. The average chromophoric distance was exploited to study and control the optimal mixing rate between coherence and noise.
Here, the environment assists the transport by suppressing the dark subspaces or by inducing interaction between them and other states, causing ultimate leakage into the sink [7,47]. In this paper, our particular choice of the cylinder graph for quantum transport simulations is indeed inspired by the topology of this virus structure. VI. TOPOLOGY ROBUSTNESS Inspired by Theorem 1, we have implemented some numerical simulations that randomly remove a specific number of links in the network. This approach enables us both to study the effect on dark subspaces and to mimic a real condition that could happen in the presence of perturbations (e.g., material defects). Removing links is beneficial to the FCN because it reduces the dark subspace dimension, hence also reducing the amount of trapped energy. In contrast, the cylinder graph benefits from link deletion only up to a small percentage of removed links (about 5% of the total); when this percentage grows, another dark subspace appears and the transport gets worse. As we can see in Fig. 3, the trapped energy grows linearly with the number of dark states for both the FCN and cylinder networks. As the number of removed links grows, the energy trapped on the FCN monotonically decreases, whilst the energy trapped on the cylinder network decreases initially and then increases again. This last behaviour is probably due to the appearance of new symmetries, hence new dark states appear. However, although the deletion of links is a good method to reduce the number of dark states, it is not sufficient to reduce the presence of quasi-dark states, since the latter are more persistent. In Fig. 4 the plotted quantity is the number of dark states and quasi-dark states as a function of the darkness strength and of the number of deleted links. Note that it turns out to be more difficult to destroy quasi-dark states by means of removing links. Moreover, in agreement with Fig. 3, after removing too many links the appearance of new dark states can occur, as shown in the right panel of Fig. 4. Dephasing noise, by contrast, opens additional pathways from the initial node to the final one and therefore suppresses both dark states and quasi-dark states. The presence of noise is more effective than link deletion for transport improvement. Indeed, in Fig. 5 we have plotted the time evolution of the transfer efficiency of a cylindrical topology, comparing the case of no removed links with the one of an optimal number of removed links (corresponding to the minimum of the trapped energy; see Fig. 3). As already discussed above, without link deletion we have a dark subspace obstructing the electronic excitation from reaching the sink. Then, removing 5 links allows us to obtain p_sink(∞) = 1. If the aim is instead the achievement of optimal fast transport, dephasing noise plays a crucial role: in fact, p_sink reaches unity on a much shorter time scale (dotted line in Fig. 5). Indeed, noise-assisted transport is characterized not only by a reduced time scale for the transmission, but also by robustness against possible changes of the underlying topology, as discussed in [15]. By varying the geometry and adding the right amount of noise, a very good transport performance is guaranteed. This does not occur in the fully coherent and incoherent cases, where the transfer efficiency quickly decreases, as can be seen in the inset of Fig. 6.
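The link-removal experiment is straightforward to reproduce in outline. The sketch below is our simplified stand-in for the simulations of this section: a small cylinder graph (circular ladder, our construction with arbitrary parameters, loosely inspired by the M13 geometry of Ref. [25]) from which random links are deleted while the number of dark states relative to a fixed target is counted, as in Sec. II.

```python
import numpy as np

rng = np.random.default_rng(7)

def cylinder_adjacency(rings=4, per_ring=8):
    """Adjacency matrix of a cylinder: 'rings' circles of 'per_ring' nodes;
    each node couples to its ring neighbours and to the node on the next ring."""
    N = rings * per_ring
    A = np.zeros((N, N))
    for r in range(rings):
        for i in range(per_ring):
            a = r * per_ring + i
            A[a, r * per_ring + (i + 1) % per_ring] = 1   # ring neighbour
            if r + 1 < rings:
                A[a, (r + 1) * per_ring + i] = 1          # next ring
    return A + A.T

def remove_links(A, n_remove):
    A = A.copy()
    edges = np.argwhere(np.triu(A) > 0)
    for i, j in edges[rng.choice(len(edges), n_remove, replace=False)]:
        A[i, j] = A[j, i] = 0
    return A

def n_dark_states(A, target, tol=1e-8):
    """Dark-subspace dimension relative to the target node (cf. Sec. II)."""
    vals, vecs = np.linalg.eigh(A)
    dim = 0
    for v in np.unique(np.round(vals, 8)):
        P = vecs[:, np.abs(vals - v) < tol]
        dim += P.shape[1] - (1 if np.linalg.norm(P[target]) > tol else 0)
    return dim

A = cylinder_adjacency()           # N = 32, as for the cylinder in Fig. 2
target = A.shape[0] - 1
for n in (0, 3, 6, 12, 24):
    print(f"{n:2d} links removed -> "
          f"{n_dark_states(remove_links(A, n), target)} dark states")
```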
This remarkable robustness is present in the regime of noise-assisted transport, as shown by the smaller dispersion around the optimal efficiency with respect to the fully coherent and incoherent regimes. Finally, let us point out that the minimum of the relative standard deviation and the maximum of the average transfer efficiency in the coherent case (corresponding to 5% of removed links) are a further sign of dark subspace suppression.

VII. CONCLUSIONS

The dark side of quantum networks is an antagonist to optimal energy transfer. Different tools can be employed to deal with dark subspaces: we can avoid them using smart initialisation, or suppress and destroy them by breaking the network symmetries through the use of control fields, noise, or disorder. Indeed, dark subspaces have a deep connection with topological symmetries, and grow in size on more symmetric networks (associated with more degenerate adjacency matrices). The FCN, for example, has the largest possible number of symmetries on a network and hence the largest dark subspace. At the same time, the FCN also responded most favourably to dark-space suppression tools, as opposed to the less symmetric cylinder graph. Whilst the dark subspace has been defined in relation to the eigenstates of the Hamiltonian describing the dynamics on the network, the framework could also be generalised to include other features, such as impurities that trap and cause decay of energy on the network [38], and to Lindbladian eigenstates more generally. The best method to obtain optimal transport depends on the function of the device we want to design: if the goal is unit p_sink without time limits, then designing a proper weighted network could be the solution (assuming that it is within our engineering ability); if short times and robust performance are crucial (as is usually the case), then the introduction of noise into the dynamics is required. Given that noise is unavoidable in most realistic systems, this implies that we generally do not need to eradicate all noise to achieve optimal transport; we just need to be able to control it to some degree. Besides, we found that a network has no truly dark states if the interactions can be tuned to achieve full controllability, although this may not be quite feasible experimentally. If the interactions can be engineered, then this is advantageous in two ways: first, no excitation is truly trapped on the network, hence we can always be sure that full transfer will eventually occur; second, there will be "temporary" dark states that could be used for energy storage. Furthermore, sufficiently large graphs almost surely have no dark states, implying that as our quantum networks grow in size (i.e., as the particular quantum technology grows in size), we are very likely not to require extensive interaction engineering to ensure full transport. These results allow one to move further in understanding and enhancing state transfer on quantum networks [4,27,64]. They can also be employed to understand other quantum processes such as electron transfer, and to design solar energy devices (e.g., inspired by the energy transfer networks in photosynthetic complexes) and potential quantum thermal devices.

VIII. ACKNOWLEDGEMENTS

We would like to thank Joshua Lockhart, Yasser Omar, Danial Dervovic, Bryan Shader, Gabriel Coutinho and Stefano Gherardini for useful discussions.
This work was supported by the EPSRC Centre for Doctoral Training in Delivering Quantum Technologies [EP/L015242/1]. F. C. was also financially supported by the Fondazione CR Firenze through the project Q-BIOSCAN; S. S. was financially supported by the Royal Society, EPSRC, Innovate UK, BHF and NSCF.
Modeling Human Inference Process for Textual Entailment Recognition

To prepare an evaluation dataset for textual entailment (TE) recognition, human annotators label rich linguistic phenomena on text and hypothesis expressions. These phenomena illustrate the implicit human inference process used to determine the relations of given text-hypothesis pairs. This paper aims at understanding what humans think during the TE recognition process and at modeling their thinking process to deal with this problem. At first, we analyze a labelled RTE-5 test set which has been annotated with 39 linguistic phenomena of 5 aspects by Sammons et al., and find that the negative entailment phenomena are very effective features for TE recognition. Then, a rule-based method and a machine learning method are proposed to extract this kind of phenomena from text-hypothesis pairs automatically. Though the systems with machine-extracted knowledge are not comparable to the systems with human-labelled knowledge, they provide a new direction for thinking about TE problems. We further annotate the negative entailment phenomena on Chinese text-hypothesis pairs in the NTCIR-9 RITE-1 task, and reach the same findings as on the English RTE-5 datasets.

Introduction

Textual entailment (TE) is a directional relationship between pairs of text expressions, text (T) and hypothesis (H). Given a text pair T and H, if a human would consider that the meaning of H is right by using the information of T, then we can infer H from T and say that T entails H (Dagan, Glickman, & Magnini, 2006). (S1) shows an example where T entails H.

(S1) T: Norway's most famous painting, "The Scream" by Edvard Munch, was recovered Saturday, almost three months after it was stolen from an Oslo museum.

Because such an inference is important in many applications (Androutsopoulos & Malakasiotis, 2010), research on textual entailment has attracted much attention in recent years. Recognizing Textual Entailment (RTE) (Bentivogli et al., 2011), a series of evaluations on the development of English TE recognition technologies, had been held seven times up to 2011. In the meanwhile, TE recognition technologies in other languages are also underway. The 9th NTCIR Workshop Meeting first introduced a TE task in Chinese and in Japanese, called Recognizing Inference in Text (RITE-1), into the IR evaluation series (Shima et al., 2011).

The overall accuracy is used as the only evaluation metric in most TE recognition tasks (Androutsopoulos & Malakasiotis, 2010). However, it is hard to examine the characteristics of a system when only considering its performance by accuracy. Sammons et al. (2010) proposed an evaluation metric to examine the characteristics of a TE recognition system. They annotated text-hypothesis pairs selected from the RTE-5 test set with a series of linguistic phenomena required in the human inference process. When annotators assume that some linguistic phenomena appear in their inference process to determine whether T entails H, they label the T-H pair with these phenomena. The RTE systems are then evaluated by new indicators, such as how many T-H pairs annotated with a particular phenomenon can be correctly recognized. The indicators can tell developers which systems deal better with T-H pairs exhibiting a given phenomenon. On the other hand, that gives developers a direction in which to enhance RTE systems.
For example, (S2) is an instance that matches the linguistic phenomenon Exclusive Relation, and this phenomenon suggests that T does not entail H. More than one argument of H, i.e., Venus Williams, Marion Bartoli, 2007, and Wimbledon Championships, appear in T, but the relation defeated in H contradicts the relation triumphed in T.

(S2) T: Venus Williams triumphed over Marion Bartoli of France 6-4, 6-1 yesterday to win the Women's Singles event at the 2007 Wimbledon Championships. For the first time, an American and a Frenchwoman were matched up to compete for the British women's singles title. A Wimbledon champion in 2000, 2001 and 2005, Williams was not the favorite to win the title again this year. Currently ranked 23rd in the world, she entered the tournament in the shadow of her sister, Serena Williams.
H: Venus Williams was defeated by Marion Bartoli at the 2007 Wimbledon Championships.

Such linguistic phenomena are thought of as crucial in the human inference process by annotators. In RITE-2 at the 10th NTCIR Workshop Meeting, some linguistic phenomena for TE in Japanese were reported in the unit task subtask (Watanabe et al., 2013). In a similar manner, types of some linguistic phenomena in Chinese are consulted in the RITE-VAL task of the 11th NTCIR Workshop Meeting. In this paper, we use this valuable resource from a different aspect. Instead of using the labelled linguistic phenomena in the evaluation of TE recognition, we aim at knowing the ultimate performance of TE recognition systems which embody human knowledge in the inference process. The experiments show that five negative entailment phenomena may be strong features for TE recognition, and this finding confirms the previous study of Vanderwende et al. (2006). Moreover, we propose a method to acquire the linguistic phenomena automatically and use them in TE recognition. Our method is evaluated on both the English RTE-5 dataset and the Chinese NTCIR-9 RITE-1 dataset. Experimental results show that our method achieves decent performances near the average performances of RTE-5 and NTCIR-9 RITE-1. Compared to other methods incorporating a lot of features, only a tiny number of binary features are required by our method. This paper is organized as follows. In Section 2 we introduce the linguistic phenomena used by annotators in the inference process, perform a series of analyses on the human-annotated dataset released by Sammons et al., and point out five significant negative entailment phenomena. Section 3 specifies the five negative entailment phenomena in detail, proposes a rule-based method and a machine learning method to extract them from T-H pairs automatically, and discusses their effects on TE recognition. In Section 4, we extend the methodology to the BC (binary class subtask) dataset distributed by the NTCIR-9 RITE-1 task (Shima et al., 2011), annotate the dataset following the schema of Sammons et al. (2010), discuss whether the negative entailment phenomena also appear in Chinese T-H pairs, and show their effects on TE in Chinese. Section 5 concludes the remarks.

Analyses of Human Inference Process in Textual Entailment

We regard the human-annotated phenomena as features in recognizing the binary entailment relation between the given T-H pairs, i.e., ENTAILMENT and NO ENTAILMENT. In total, 210 T-H pairs were chosen from the RTE-5 test set by Sammons et al. (2010), and 39 linguistic phenomena, divided into the following 5 aspects, i.e., knowledge domains, hypothesis structures, inference phenomena, negative entailment phenomena, and knowledge resources, were annotated on the selected dataset.

(a) Knowledge Domains (Hypothesis Types): Each phenomenon in this aspect denotes whether the information in H belongs to the corresponding knowledge domain. (b) Hypothesis Structures: Each phenomenon in this aspect denotes whether H contains elements of the corresponding type. (c) Inference Phenomena: Each phenomenon in this aspect indicates the corresponding linguistic phenomenon which is used to infer H from T. (d) Negative Entailment Phenomena: Each phenomenon in this aspect is a pattern which may appear in negative entailment instances. (e) Knowledge Resources: Each phenomenon in this aspect is a kind of knowledge or common sense which is required in the inference process in textual entailment. Table 1 summarizes the phenomena in the five aspects.

Five Aspects as Features

We train SVM classifiers to evaluate the performance of the five aspects of phenomena as features for TE recognition. The implementation LIBSVM with the RBF kernel (Chang & Lin, 2011) is adopted to develop classifiers, with the parameters tuned by grid search. The experiments are done with 10-fold cross validation.
For the dataset of Sammons et al. (2010), two annotators were involved in labeling the above 39 linguistic phenomena on the T-H pairs. They may agree or disagree in the annotation. In the experiments, we consider the effects of their agreement. Table 2 shows the results. Five aspects are first regarded as individual features, and then merged together. The two schemes, Annotator 1 and Annotator 2, mean that the phenomena labelled by annotator 1 and annotator 2 are used as features, respectively. The scheme "1 AND 2", a strict criterion, denotes that a phenomenon exists in a T-H pair only if both annotators agree with its appearance. In contrast, the scheme "1 OR 2", a looser criterion, denotes that a phenomenon exists in a T-H pair if at least one annotator marks its appearance.

We can see that the aspect of negative entailment phenomena provides the most significant features of the five aspects. With only 9 phenomena in this aspect, the SVM classifier achieves accuracy above 90% no matter which labeling scheme is adopted. Comparatively, the best accuracy in the RTE-5 task is 73.5% (Iftene & Moruz, 2009). In the negative entailment phenomena aspect, the "1 OR 2" scheme achieves the best accuracy, whereas the performances of Annotator 1 and "1 OR 2" are the same in the setting with all five aspects as features. In the following experiments, we adopt this labeling scheme.

Table 2. The accuracy of recognizing the binary TE relation with the five aspects as features.

Negative Entailment Phenomena

There is a large gap between the negative entailment phenomena aspect and the second most effective aspect (i.e., inference phenomena). Moreover, using the negative entailment phenomena aspect as the only features is even better than using all 39 linguistic phenomena as features. We further analyze which negative entailment phenomena are more significant.
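To make the evaluation protocol concrete, the following minimal sketch reproduces the classification setup used throughout this section: an RBF-kernel SVM with grid-searched parameters under 10-fold cross validation. It uses scikit-learn's SVC, which wraps LIBSVM; the binary feature matrix is a random placeholder standing in for the annotated phenomena, so the names and values are illustrative only.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import GridSearchCV, cross_val_score

    # One row per T-H pair, one binary column per annotated phenomenon.
    # Random placeholders stand in for the human annotations.
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(210, 9)).astype(float)
    y = rng.integers(0, 2, size=210)   # 1 = ENTAILMENT, 0 = NO ENTAILMENT

    # RBF-kernel SVM with (C, gamma) tuned by grid search.
    grid = GridSearchCV(
        SVC(kernel="rbf"),
        param_grid={"C": 2.0 ** np.arange(-5, 16, 2),
                    "gamma": 2.0 ** np.arange(-15, 4, 2)},
        cv=10,
    )
    grid.fit(X, y)
    acc = cross_val_score(grid.best_estimator_, X, y, cv=10).mean()
    print(f"10-fold cross-validation accuracy: {acc:.3f}")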
There are nine linguistic phenomena in the aspect of negative entailment phenomena. We take each phenomenon as a single feature to perform the task of two-way textual entailment recognition. Table 3 shows the experimental results (Table 3: accuracy of recognizing the TE relation with individual negative entailment phenomena). The first column is the phenomenon ID, the second column is the phenomenon, and the third column is the accuracy of using the phenomenon in the binary classification. Compared with the best accuracy of 97.62% shown in Table 2, the highest accuracy in Table 3 is 69.52%, when missing argument is adopted. Each phenomenon may be suitable for some T-H pairs, and consequently all negative entailment phenomena together achieve the best performance. The model using all nine phenomena achieves the best accuracy of 97.62%.

We further form all 2^9 − 1 = 511 feature settings and use each feature setting to perform two-way entailment relation recognition with SVM classifiers. The notation C(m, n) denotes a set of m!/((m−n)!n!) feature settings, each with n features. For the sake of space, we only list the best 4 results in each combination set, shown in Table 4. Each feature setting is denoted by a set of phenomenon IDs enclosed in parentheses. Examining the combination sets, we find that phenomenon IDs 3, 4, 5, 7 and 8 appear quite often in the top 4 feature settings of each combination set. In fact, the setting (3, 4, 5, 7, 8) achieves an accuracy of 95.24%, which is the best performance in the C(9, 5) combination set. On the one hand, adding more phenomena to the (3, 4, 5, 7, 8) setting does not make much performance difference. On the other hand, removing some phenomena from the (3, 4, 5, 7, 8) setting, or adopting features other than these phenomena, decreases the performance. The best performance of the feature setting (−(0, 6)), i.e., the 7 phenomena remaining after excluding IDs 0 and 6, is the same as that of using all 9 phenomena shown in Table 2.

The correlations between these five phenomena are shown in Table 5. Each row presents the T-H pairs which are labelled with the corresponding negative entailment phenomenon by the scheme "1 OR 2". Each column in each row denotes the percentage of the T-H pairs which are also labelled with another negative entailment phenomenon. For example, the number of T-H pairs labelled with "Disconnected Relation" is 14, and 2 of the 14 are also labelled with "Missing Argument"; therefore, the column "Missing Argument" in the row "Disconnected Relation" shows 2/14 = 14.29%. Table 5 shows the low correlations between the most significant negative entailment phenomena. In other words, these phenomena are complementary.

In the above experiments, we did all the analyses on the corpus annotated with linguistic phenomena by humans. In some sense, we aim at knowing the ultimate performance of TE recognition systems embodying human knowledge in the inference. Of course, human knowledge in the inference cannot be fully correctly captured by TE recognition systems. In the later experiments, we explore the five critical features, (3, 4, 5, 7, 8), and examine what performance is achieved when they are extracted automatically.
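The exhaustive search over the 511 feature settings described above can be sketched as follows, reusing the same kind of placeholder data as in the previous sketch; the function name and the top-4 reporting convention are illustrative. The loop simply scores every non-empty subset of the nine phenomenon features with the same cross-validated SVM.

    from itertools import combinations
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def rank_feature_settings(X, y, n_top=4):
        # Evaluate every non-empty subset of the 9 phenomenon features
        # (2**9 - 1 = 511 settings) by 10-fold cross-validated accuracy.
        n_feat = X.shape[1]
        scored = []
        for r in range(1, n_feat + 1):
            for subset in combinations(range(n_feat), r):
                acc = cross_val_score(SVC(kernel="rbf"),
                                      X[:, subset], y, cv=10).mean()
                scored.append((acc, subset))
        scored.sort(reverse=True)
        return scored[:n_top]

    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(210, 9)).astype(float)   # placeholder annotations
    y = rng.integers(0, 2, size=210)
    for acc, subset in rank_feature_settings(X, y):
        print(subset, f"{acc:.3f}")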
Negative Entailment Phenomena Extraction

The experimental results in Section 2.2 show that disconnected relation, exclusive argument, exclusive relation, missing argument, and missing relation are significant. Our experiments show that the combination of these five phenomena is even more powerful. Vanderwende et al. (2006) suggested some phenomena that are clues to false entailments. To model the annotator's inference process, we must first determine the arguments and the relations existing in T and H, and then align the arguments and relations in H to the related ones in T. It is easy for a human to find the important parts of a text description in the inference process, but it is challenging for a machine to determine which words are important and which are not, and to detect the boundaries of arguments and relations. Moreover, two arguments (relations) of strong semantic relatedness are not always literally identical. The five negative entailment phenomena are characterized as follows.

(a) Disconnected Relation: The arguments and the relations in H are all matched by counterparts in T, but none of the arguments in T is connected to the matching relation. (b) Exclusive Argument: There is a relation common to both H and T, but one argument is matched in a way that makes H contradict T. (c) Exclusive Relation: There are two or more arguments in H that are also related in T, but by a relation that means H contradicts T. (d) Missing Argument: Entailment fails because an argument in H is not present in T, either explicitly or implicitly. (e) Missing Relation: Entailment fails because a relation in H is not present in T, either explicitly or implicitly.

In the following, two methods are proposed to extract the phenomena from T-H pairs automatically in Section 3.2 and Section 3.3. The pre-processing of the pairs is described in Section 3.1.

Preprocessing

Before extraction, the English T-H pairs are pre-processed according to the following considerations. (a) Numerical Character Transformation: All numerical values are normalized to a single format; fractional numbers and percentages are converted to real numbers. (b) Stemming: Stemming is performed on each word in the T-H pair with NLTK (Bird, 2002). (c) Part-of-Speech Tagging: The Stanford Parser is used to tag each word in the T-H pair (Levy & Manning, 2003). (d) Dependency Parsing: The Stanford Parser also generates the dependency pairs from T and H (de Marneffe et al., 2006). The results of dependency parsing contain crucial information for capturing negative entailment phenomena.

A Rule-Based Method

Noun phrases are the fundamental elements for comparing the existence of entailment. Instead of measuring the relatedness of T-H pairs by comparing T and H on the predicate-argument structure (Wang & Zhang, 2009), our method tries to find the five negative entailment phenomena based on a similar representation. Each of the five negative entailment phenomena is extracted as follows according to its definition. To reduce the error propagation which may arise from parsing errors, we directly match the nouns and named entities appearing in H to the text in T. Furthermore, we introduce WordNet to align synonyms in H and T.

(a) Disconnected Relation: If (1) for each a ∈ {noun in H} ∪ {nnp in H} ∪ {cnn in H}, we can find a ∈ T too, and (2) for each r1 = h(a1, a2) ∈ {relation in H}, we can find a relation r2 = h(a3, a4) ∈ {relation in T} with the same header h, but with different arguments, i.e., a3 ≠ a1 and a4 ≠ a2, then we say the T-H pair has the "Disconnected Relation" phenomenon.
(b) Exclusive Argument: If there exist a relation r1 = h(a1, a2) ∈ {relation in H} and a relation r2 = h(a3, a4) ∈ {relation in T} where both relations have the same header h, but either the pair (a1, a3) or the pair (a2, a4) is an antonym by looking up WordNet, then we say the T-H pair has the "Exclusive Argument" phenomenon.
(c) Exclusive Relation: If there exist a relation r1 = h1(a1, a2) ∈ {relation in T} and a relation r2 = h2(a1, a2) ∈ {relation in H} where both relations have the same arguments, but h1 and h2 have opposite meanings by consulting WordNet, then we say that the T-H pair has the "Exclusive Relation" phenomenon.
(d) Missing Argument: For each argument a1 ∈ {noun in H} ∪ {nnp in H} ∪ {cnn in H}, if there does not exist an argument a2 ∈ T such that a1 = a2, then we say that the T-H pair has the "Missing Argument" phenomenon.
(e) Missing Relation: For each relation r1 = h1(a1, a2) ∈ {relation in H}, if there does not exist a relation r2 = h2(a3, a4) ∈ {relation in T} such that h1 = h2, then we say that the T-H pair has the "Missing Relation" phenomenon.

A Machine Learning Method

We aim at finding meta-features that describe the characteristics of negative entailment phenomena, and we use them for classification. We analyse the dependencies in a T-H pair with the Stanford dependency parser (de Marneffe et al., 2006) and derive two dependency sets D_T and D_H for T and H, respectively, where a dependency gr(g, d) is in terms of a binary grammatical relation gr between a governor g and a dependent d. We further define the following three multisets to capture the relationships between T and H:

(a) {H only} = {gr | gr(g, d) ∈ D_H − (D_T ∩ D_H)}
(b) {Partially identical in governor} = {gr | gr(g, d1) ∈ D_T, gr(g, d2) ∈ D_H, d1 ≠ d2}
(c) {Partially identical in dependent} = {gr | gr(g1, d) ∈ D_T, gr(g2, d) ∈ D_H, g1 ≠ g2}

A T-H pair is represented as a feature vector (V(a), V(b), V(c)), where the dimensions of the three vectors V(a), V(b), and V(c) are the number of grammatical relations in the dependency parser. The weight of each grammatical relation gr in V(a), V(b), and V(c) is the number of times gr appears in the multisets {H only}, {Partially identical in governor} and {Partially identical in dependent}, respectively. The SVM classifier with the RBF kernel is adopted to develop classifiers, with the parameters (cost and gamma) tuned by grid search and evaluated with 10-fold cross validation.

Experiments and Discussion

The following two datasets are used in the English TE recognition experiments. (a) 210 pairs from part of the RTE-5 test set: the 210 T-H pairs annotated with the linguistic phenomena by human annotators in the work of Sammons et al. (2010); they are selected from the 600 pairs in the RTE-5 test set and include 51% ENTAILMENT and 49% NO ENTAILMENT. (b) 600 pairs of the RTE-5 test set: the original RTE-5 test set, including 50% ENTAILMENT and 50% NO ENTAILMENT.

Table 6 shows the performance of negative entailment phenomena detection by the rule-based and machine learning methods. The performance of the rule-based model is especially poor. The major challenge is to identify the arguments in T-H pairs. (S3) shows an instance. The correct arguments of H in (S3) are "Fifth Amendment right" and "driving license", but the arguments captured by our method are "Fifth Amendment" and "license". This issue can be improved with a better dependency parser.

(S3) T: "There is a rational basis to distinguish between people driving cars and semi trucks," Jambois said. "All I would say is I think he has an uphill battle." The lawsuit says the truckers' Fifth and Fourteenth amendment rights are being violated because there is no way for them to apply for an occupational license. Mutschler said the state is taking away the truckers' right to drive a truck for a living. He said he will argue that while driving is a privilege, once a person has a license for work, it becomes a right.
H: Fifth Amendment right is about driving license.
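For illustration, the two "missing" checks of the rule-based method can be sketched in a few lines. This is a minimal sketch assuming the arguments and relation triples have already been extracted from the dependency parses; the helper names and the (header, arg1, arg2) relation format are illustrative, and WordNet (via NLTK) supplies the synonym alignment used in the method above.

    from nltk.corpus import wordnet as wn   # requires nltk.download('wordnet')

    def synonyms(term):
        # The term itself plus its WordNet lemma names (lowercased).
        names = {l.name().lower().replace("_", " ")
                 for s in wn.synsets(term.replace(" ", "_"))
                 for l in s.lemmas()}
        return names | {term.lower()}

    def missing_argument(h_arguments, t_text):
        # "Missing Argument": some argument of H (noun, named entity or
        # compound noun) has no literal or synonymous occurrence in T.
        t = t_text.lower()
        return any(all(s not in t for s in synonyms(a)) for a in h_arguments)

    def missing_relation(h_relations, t_relations):
        # "Missing Relation": some relation header in H matches no relation
        # header in T, up to WordNet synonymy.
        t_heads = {h.lower() for h, _, _ in t_relations}
        return any(synonyms(h).isdisjoint(t_heads) for h, _, _ in h_relations)

    # Toy example with hypothetical, pre-extracted structures:
    t_text = "The truckers cannot apply for an occupational license."
    print(missing_argument(["Fifth Amendment right", "driving license"], t_text))  # True
    t_rel = [("triumph", "Venus Williams", "Marion Bartoli")]
    h_rel = [("defeat", "Venus Williams", "Marion Bartoli")]
    print(missing_relation(h_rel, t_rel))  # True unless WordNet links the heads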
Although the rule-based method performs poorly, and the machine learning method is not strong in precision and F-score, the resulting models for TE recognition achieve decent performance. These interesting results are depicted in Table 7. The "Human-annotated" column shows the performance achieved by using the phenomena annotated by humans; using the "Human-annotated" phenomena can be seen as the upper bound of the experiments. On data set (a), the performance of using all 5 phenomena as features extracted by the machine learning method (M2) is better than that of the rule-based method (M1). However, the results are reversed on data set (b). This may be because data set (b) contains cases that cannot be recognized by a model trained from the T-H pairs annotated by humans. On the other hand, the rule-based method is implemented directly from the definitions, which makes it more robust.

Though the performance of using the phenomena extracted automatically by machine is not comparable to that of using the human-annotated ones, the accuracy achieved by using only 5 features (59.17%) is just a little lower than the average accuracy of all runs in the RTE-5 formal runs (60.36%) (Bentivogli et al., 2009). It shows that the significant phenomena are really effective in dealing with entailment recognition, even though the phenomena detector is extremely simple. If we can improve the performance of the automatic phenomena detection algorithm, it may bring great progress on textual entailment.

So far the experiments have used two-stage classification. In the first stage, we run the rule-based or the learning-based model to extract the five negative entailment phenomena. Then, the presence of the five phenomena is used as binary features to recognize the TE relation in the second stage. In this perspective, the features used for phenomena extraction in Section 3.3 are the meta-features of M2. In order to understand the impact of error propagation, we train a one-stage TE recognizer, M3, by using the meta-features of M2 as features directly. Table 8 compares M1, M2, and M3. The models M2 and M3 do the TE recognition according to the same information, but the two-stage classifier M2 slightly outperforms M3. This result suggests that explicitly extracting the phenomena in a first stage is beneficial and that error propagation is not severe.

We further annotate the negative entailment phenomena on the Chinese T-H pairs of the NTCIR-9 RITE-1 BC dataset and repeat the analyses. The significant negative entailment phenomena, i.e., (3, 4, 5, 7, 8), are the same as those on the English data. Besides, we can use only six phenomena to achieve the same performance as using all nine phenomena as features. Furthermore, we also classify the entailment relation by the phenomena extracted automatically with the rule-based method. The process is similar to that for English text described in Section 3.1 and Section 3.2, while additional processing is required for Chinese text. We segmented Chinese words with the Stanford word segmenter (Chang et al., 2008) and performed Chinese dependency parsing using the Stanford parser and the CNP parser (Chen et al., 2009). We extract two sets of negative entailment phenomena according to the parsing results of the Stanford parser and the CNP parser separately; both sets are used as independent features to achieve better performance. The rule-based method obtains a similar result for TE recognition in Chinese: the accuracy achieved by using the five automatically extracted phenomena as features is 57.11%, while the average accuracy of all runs in the NTCIR-9 RITE task is 59.36% (Shima et al., 2011). Compared to other methods using a lot of features, only 12 binary features are used in our method.
Conclusion

In this paper we conclude that the negative entailment phenomena have a great effect in dealing with TE recognition. The systems with human-annotated knowledge achieve very good performance. Experimental results show that this approach not only applies to the English TE problem, but also has a similar effect on Chinese TE recognition. To automatically capture the negative entailment phenomena in the text, we propose phenomenon extraction algorithms with rule-based and learning-based approaches. Though the automatic extraction of the negative entailment phenomena still needs a lot of effort, it gives us a new direction in which to deal with the TE problem. The fundamental issues, such as determining the boundaries of the arguments and the relations, finding the implicit arguments and relations, verifying the antonyms of arguments and relations, and determining their alignments, need to be further examined to extract correct negative entailment phenomena. Besides, multi-class TE recognition will be explored in the future.
Effect of Currently Available Nanoparticle Synthesis Routes on Their Biocompatibility with Fibroblast Cell Lines

Nanotechnology has acquired significance in dental applications, but its safety regarding human health is still questionable due to the chemicals utilized during various synthesis procedures. Titanium nanoparticles were produced by three novel routes, including Bacillus subtilis, Cassia fistula and hydrothermal heating, and then characterized for shape, phase state, size, surface roughness, elemental composition, texture and morphology by SEM, TEM, XRD, AFM, DRS, DLS and FTIR. These novel titanium nanoparticles were tested for cytotoxicity through the MTT assay. L929 mouse fibroblast cells were used to test the cytotoxicity of the prepared titanium nanoparticles. A cell suspension of 10% DMEM with 1 × 10^4 cells was seeded in a 96-well plate and incubated. Titanium nanoparticles were used at a concentration of 1 mg/mL. Control (water) and titanium nanoparticle stock solutions were prepared; 28 microliters of MTT dye was added to each well, followed by incubation at 37 °C for 2 h. Readings were recorded on day 1, day 15, day 31, day 41 and day 51. The results showed that the titanium nanoparticles produced by Bacillus subtilis remained non-cytotoxic, because cell viability was >90%. Titanium nanoparticles produced by Cassia fistula revealed mild cytotoxicity on day 1, day 15 and day 31, because cell viability was 60-90%, while moderate cytotoxicity was found at day 41 and day 51, as cell viability was 30-60%. Titanium nanoparticles produced by hydrothermal heating depicted mild cytotoxicity on day 1 and day 15, moderate cytotoxicity on day 31, and severe cytotoxicity on day 41 and day 51, because cell viability was less than 30% (p < 0.001). The current study concluded that the novel titanium nanoparticles prepared by Bacillus subtilis were the safest, most sustainable and most biocompatible for future restorative nano-dentistry purposes.

Introduction

Nanotechnology has quickly gained importance in medical and dental applications due to its quality production and prompt response to host tissue interaction by crossing tissue barriers [1,2]. Several metal nanoparticles have recently attracted interest as a consequence of their unique optical, mechanical, biological and physical properties [3]. Titanium is the most preferred material among them because it has many additional compelling features and characteristics that make it superior, e.g., high electrical conductivity, high thermal diffusivity, malleability, low thermal conductivity and wear and corrosion (scratching) resistance [4]. Moreover, titanium has also become the material of choice due to its cost-effectiveness [5], non-allergic nature, low toxicity, fatigue resistance and biocompatibility [6,7]. Titanium has achieved great success in dentistry as a result of its favourable biological reaction with human tissues [8]. There are multiple applications of commercially available titanium nanoparticles in medicine and dentistry [9], such as cell imaging, biosensors for biological assays, drug delivery systems, photodynamic therapy for cancer and genetic engineering in medicine [10]. The vast utilization of titanium nanoparticles in clinical dentistry includes composite adhesives and bonding agents [11], glass ionomer cement restorations [11-13], dental implants [14], bleaching and whitening agents [15], irrigants in root canal treatment, mouthwashes, toothpastes and polishing pastes [16].
Previously, titanium nanoparticles have enhanced the antimicrobial properties and bond strength of composites in orthodontics [11]. In glass ionomer cement applications, these nanoparticles significantly increased the flexural strength, compressive strength, micro-hardness and shear bond strength to both enamel and dentin to a large extent [12,13]. In addition, their usage in dental implants has improved osteoblast proliferation, phosphatase activity, bone matrix deposition and adhesion [14]. The bleaching and whitening products that employ titanium nanoparticles in dentistry have imparted the utmost aesthetics to the teeth [15]. The increased efficacy of titanium nanoparticles as irrigants, toothpastes and mouthwashes in dentistry has been due to their increased antibacterial activity as compared to the chlorhexidine used previously. Titanium nanoparticles are also used in manufacturing orthodontic wires, crowns, maxillary obturators, bridges and files [17,18]. The biocompatibility of titanium nanoparticles is the main feature that makes them unique and extensively utilized in the field of dentistry [16]. The factors responsible for the biocompatibility of titanium nanoparticles are the synthesis routes, surface topography, and properties such as phase form, particle size, band gap energy, elemental composition and functional groups [19]. The most significant factor responsible for making these nanoparticles cytotoxic and non-biocompatible is the route involved in their synthesis. Nanoparticles are synthesized by either conventional methods (physical and chemical) or biological methods (microorganisms and plants) [20]. For years, several popular conventional methods used for the synthesis of metal oxide nanoparticles have employed different chemicals as reducing and capping agents. These chemicals form toxic by-products during the production of titanium nanoparticles, resulting in the cytotoxicity of the newly formed nanoparticles [21-23]. Many completely natural resources have been exploited for biological synthesis, such as algae, plants, bacteria, viruses and fungi. These organisms utilize their natural biomolecules as reducing and capping agents. These natural biomolecules do not produce any toxic by-product, resulting in the non-cytotoxic behavior of the nanoparticles [24]. Thus, the stability and sustainability of these nanoparticles are enhanced, which leads to their superior clinical performance [25,26]. Conventional methods, including both physical and chemical processes, for the production of nanoparticles in dentistry are very common because of their purity, uniformity and quick production. The major drawbacks associated with these nanoparticles are their low yield; high temperature, pressure and energy consumption; the use of potent chemical accelerators; and the release of toxic by-products. All these factors adversely affect living beings, as well as our environment on a larger scale [25,27,28]. Although titanium nanoparticles synthesized by physical and chemical processes, microorganisms and plants have been widely used in various dental applications, insufficient data are available on the cytotoxicity of these nanoparticles with regard to the synthesis protocols. Still, hazardous side effects of nanometer-sized titanium have already been reported in the literature [29].
The reason behind this could be that chemically synthesized titanium nanoparticles have previously been used to enhance the mechanical properties of dental materials without focusing on the most important health aspect, namely biocompatibility and biosafety [12,13]. There is an urgent need to evaluate human health and environmental safety regarding the use of nanoparticles. The biocompatibility and biosafety of a few commercial titanium nanoparticles have been investigated, showing different levels of cytotoxicity against various cell lines [19]. The current study was performed to find out the cytotoxic nature of novel titanium nanoparticles produced by Bacillus subtilis, Cassia fistula and hydrothermal heating of titanium tetrachloride, in order to ascertain the most biocompatible titanium nanoparticles that could be utilized in future restorative nano-dentistry without any fear of failure.

Materials

Titanium chloride-IV (Sigma-Aldrich, Darmstadt, Germany) was purchased from the Pakistan Institute of Engineering and Applied Sciences (PIEAS). The strains of Bacillus subtilis with accession no. ATCC 6633 were acquired from the National Institute of Health (NIH), Islamabad, Pakistan. The leaves of the Cassia fistula plant were taken from the Public Park of the I/8 sector, Islamabad, and were dried. The titanium chloride-IV, strains of Bacillus subtilis and leaves of Cassia fistula were used for the synthesis of titanium nanoparticles. The L929 mouse fibroblast cell line (ATCC, Manassas, VA, USA) was used to test the cytotoxicity of the Ti nanoparticles using the MTT assay [30].

Preparation of Titanium Nanoparticles by Three Routes

The methodology for the synthesis of biogenic titanium nanoparticles incorporating Bacillus subtilis followed Kirthi et al. (2011) [31]. A fresh culture of Bacillus subtilis was incubated at 28 °C and centrifuged at 150 rpm in 100 mL of nutrient broth to form the bacterial culture solution. After 24 h, 20 milliliters of 0.025 M Ti(OH)2 solution (American Elements, 10884 Weyburn Ave, Los Angeles, CA, USA) was added to the bacterial culture solution at 60 °C for 10 min to obtain the newly formed titanium nanoparticles, which were annealed at 80 °C and calcined at 450 °C to obtain a fine powder.

The method for the formation of green titanium nanoparticles using Cassia fistula leaves was taken from a previous study [32]. One milligram of dried Cassia fistula leaves was mixed with 100 mL of water, which was heated at 100 °C for 5 min to form a plant extract. Then, 1 mL of Ti(OH)2 (American Elements, 10884 Weyburn Ave, Los Angeles, CA, USA) was poured into 80 mL of water to obtain a Ti(OH)2 stock solution. Afterwards, 20 mL of the plant extract solution and 80 mL of the Ti(OH)2 stock solution were kept at 28 °C and centrifuged at 150 rpm for 24 h to obtain titanium nanoparticles. The nanoparticles were dried at 80 °C and then calcined at 450 °C to form a fine powder.

The procedure used to synthesize titanium nanoparticles through hydrothermal heating of titanium tetrachloride salt (TiCl4) followed a previously reported study [33]. Firstly, 1 mL of TiCl4 salt (Sigma-Aldrich, Merck KGaA, Darmstadt, Germany) was poured into 100 mL of deionized water to obtain a 1 M salt solution. After this, the salt solution was heated at 80 °C under continuous stirring in order to obtain titanium nanoparticles. These nanoparticles were then annealed at 110 °C and calcined at 450 °C into fine powder form.
Characterization Techniques

The characterization techniques used for confirming the shape, phase state, size, surface roughness, elemental composition, texture and morphology of the novel titanium nanoparticles formed by Bacillus subtilis, Cassia fistula and hydrothermal heating were XRD, SEM, TEM, AFM, EDS, DRS, DLS and FTIR.

Cytotoxicity Testing (MTT Assay)

The L929 cells were maintained in Dulbecco's modified Eagle medium (DMEM) (Invitrogen Life Technologies, Carlsbad, CA, USA) supplemented with 10% fetal bovine serum (Thermo Fisher Scientific, New York, NY, USA) and 1% penicillin-streptomycin antibiotics (Life Technologies, Auckland, NZ, USA). L929 cells from passages P4-P8 at 70-80% confluence were seeded directly onto a 96-well plate (1 × 10^4 cells), which was incubated for 24-48 h to obtain a confluent culture [34,35]. All types of titanium nanoparticles prepared by the three routes were used at a concentration of 1 mg/mL; from these, stock solutions of 100 µg/mL were prepared. Deionized water was used as the control. When the cells attained 70-80% confluence, they were exposed to titanium nanoparticles (50 µL/well) for the first 24 h. Then, twenty-eight microliters of MTT dye (Sigma-Aldrich, Merck CT01-5, KGaA, Darmstadt, Germany) (2 mg/mL) was added to each well, followed by incubation at 37 °C for 2 h. The culture medium was changed every two weeks throughout the experiment to prevent contamination by bacteria and fungi. The mouse fibroblast cells were checked every 2 h and incubated at 37 °C in 5% CO2 in a humidified atmosphere at regular intervals for growth. After the multiplication of the cells in the flask, they were split and detached from the base to float easily. After the increase in the number of cells, they were seeded in a 96-well plate (1 × 10^4 cells) and cultured every two days for the whole duration of the experiment. Before cytotoxicity testing, 50 µL/well of titanium nanoparticles was added to the freshly prepared cell culture containing the maximum number of cells prepared for each analysis on the different days. Later on, the fluorescence reader 'BIORAD' (Thermo-Fisher, New York, NY, USA) was used to measure fluorescence at a 490 nm wavelength on day 1. Similarly, readings were obtained in triplicate on the remaining days, i.e., day 15, day 31, day 41 and day 51 [36]. The cytotoxicity (cell viability) was measured as [37]:

Cell viability (%) = (mean absorbance of treated cells / mean absorbance of control cells) × 100

The cell viability refers to the cytotoxicity status of the nanoparticles by showing the percentage of alive or dead fibroblast cells exposed to them. The cytotoxicity status of the titanium nanoparticles is declared as "non-cytotoxic" in case of cell viability > 90%, "mildly cytotoxic" in case of cell viability 60-90%, "moderately cytotoxic" in case of cell viability 30-60%, and "severely cytotoxic" in case of cell viability 30% or less [38].
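As a minimal sketch of how the viability readings map to the cytotoxicity grades above (the absorbance values and function names are illustrative, not measured data from this study):

    def cell_viability(od_treated, od_control):
        # Cell viability (%) from MTT absorbance readings: mean OD of
        # treated wells over mean OD of control wells, times 100.
        mean_t = sum(od_treated) / len(od_treated)
        mean_c = sum(od_control) / len(od_control)
        return 100.0 * mean_t / mean_c

    def cytotoxicity_grade(viability):
        # Grading used in this study: >90% non-cytotoxic, 60-90% mild,
        # 30-60% moderate, 30% or less severe.
        if viability > 90:
            return "non-cytotoxic"
        if viability > 60:
            return "mildly cytotoxic"
        if viability > 30:
            return "moderately cytotoxic"
        return "severely cytotoxic"

    # Triplicate 490 nm readings (illustrative values only).
    control = [0.82, 0.80, 0.84]
    treated = [0.48, 0.51, 0.47]
    v = cell_viability(treated, control)
    print(f"{v:.1f}% viability -> {cytotoxicity_grade(v)}")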
Cell Morphology Assessment

An inverted fluorescence microscope (OPTO-EDU, A-16.0910, Beijing, China) was used to investigate the extent of the cytotoxicity through the morphology of the fibroblasts exposed to the titanium nanoparticles produced by the different routes in this study. Abnormal changes in the cell morphology of these fibroblasts were demonstrated via images taken with this microscope and were assessed with respect to the fibroblasts' size, shape, structure and organelles after exposure to the titanium nanoparticles [34].

Statistical Analysis

All statistical analyses were conducted using the statistical analysis software PRISM (GraphPad Prism 6, San Diego, CA, USA). Data for the experiments are expressed as mean ± standard deviation (SD). A one-way ANOVA test was used to determine statistically significant differences. Once differences were obtained, a post hoc Tukey test was conducted for multiple comparisons at a confidence interval of 95% (p < 0.05).

Preparation of Titanium Nanoparticles by Three Routes

The fabrication of titanium nanoparticles from the Bacillus subtilis culture, the Cassia fistula plant and titanium tetrachloride salt was confirmed by a change in the color of the solutions used during the preparation process. The initial color of the Bacillus subtilis culture solution was yellowish, whereas that of the Cassia fistula extract was green and that of the titanium tetrachloride salt solution was purplish black (Figure 1). The color of these solutions turned white initially, followed by the formation of precipitates at the bottom of each flask containing the titanium nanoparticles.

XRD

XRD was used to investigate the phase form and particle size of the titanium nanoparticles. The titanium nanoparticles generated by Bacillus subtilis were of mixed anatase and rutile phases, whereas those formed by Cassia fistula and hydrothermal heating were of pure anatase phase. The particle sizes of the titanium nanoparticles were calculated by the Debye-Scherrer formula and were found to be 63.13 nm for the nanoparticles prepared by Bacillus subtilis, while those formed by Cassia fistula and hydrothermal heating were 15.79 nm and 11.29 nm, respectively (Figure 2, Table 1).
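For reference, a crystallite size estimate via the Debye-Scherrer formula D = Kλ/(β cos θ) can be computed as in the sketch below; the Cu Kα wavelength, the shape factor K = 0.9 and the example peak values are typical assumptions for anatase titania, not values reported in this study.

    import numpy as np

    def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
        # Crystallite size D = K * lambda / (beta * cos(theta)), where beta
        # is the peak FWHM in radians and theta is half the diffraction angle.
        theta = np.radians(two_theta_deg / 2.0)
        beta = np.radians(fwhm_deg)
        return k * wavelength_nm / (beta * np.cos(theta))

    # Example: an anatase (101) reflection near 2-theta = 25.3 degrees with
    # an illustrative FWHM of 0.74 degrees gives roughly 11 nm.
    print(f"{scherrer_size(25.3, 0.74):.1f} nm")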
SEM

The SEM images confirmed the shape and particle size of the titanium nanoparticles. The nanoparticles formed by Bacillus subtilis were spherical and 63.13 nm in diameter. The titanium nanoparticles prepared by Cassia fistula were 15.79 nm in diameter and comprised a mixture of spherical and irregularly shaped particles, while those prepared by hydrothermal heating were predominantly irregular in shape and 11.29 nm in diameter. The results of SEM and XRD were in agreement with each other (Figure 3, Table 1).

AFM

AFM determined the surface roughness of the titanium nanoparticles. The AFM image of the nanoparticles prepared by Bacillus subtilis showed the minimum surface roughness of about 4.11 RMS, while the nanoparticles formed by Cassia fistula revealed a moderate surface roughness of about 7.96 RMS. The nanoparticles formulated by hydrothermal heating depicted a severe surface roughness of about 11.31 RMS (Figure 4).

EDS

The EDS spectra were used to check the titanium and oxygen peaks of the titanium nanoparticles. The EDS spectrum of the nanoparticles formed by Bacillus subtilis showed large quantities of titanium and less oxygen, whereas the spectra of the nanoparticles prepared by Cassia fistula and hydrothermal heating revealed comparatively smaller quantities of titanium and greater quantities of oxygen. The amount of titanium in the hydrothermally prepared nanoparticles was much less than in those formed by Cassia fistula (Figure 5).
DRS

DRS scans were taken to confirm the size of the titanium nanoparticles through the band-gap absorbance energy, using a standard value of 3.2 eV: a greater value indicates a smaller particle size, while a smaller value indicates a larger particle size. The DRS scan of the nanoparticles formed by Bacillus subtilis indicated a larger particle size, with a lower calculated band-gap absorbance energy of 2.7 eV. The nanoparticles fabricated by Cassia fistula and hydrothermal heating confirmed the smaller particle sizes, with greater calculated band-gap absorbance energies of 3.6 eV and 3.9 eV, respectively (Figure 7).

TEM

The particle diameters measured from the TEM images (D_TEM) were found to be 63 nm for the titanium nanoparticles prepared by Bacillus subtilis, whereas the D_TEM values for the nanoparticles fabricated by Cassia fistula and hydrothermal heating were calculated to be 15 nm and 11 nm, respectively (Figure 8). A sharp, elongated and narrow peak confirming the large particle size was depicted by the nanoparticles fabricated by Bacillus subtilis (Figure 8d). On the other hand, shallow and broad peaks confirming a small particle size were revealed by the nanoparticles prepared by Cassia fistula, whereas the shallowest and broadest peak, confirming the smallest particle size, was revealed by the nanoparticles produced by hydrothermal heating (Figure 8e,f, Table 1).

DLS

The hydrodynamically calculated sizes (D_H) of the titanium nanoparticles were consistent with the D_TEM results; hydrodynamic sizes (D_H) are, as expected, greater than those measured by XRD, SEM and TEM.
The particle sizes measured by DLS (D_H) were 200 nm for the titanium nanoparticles prepared by Bacillus subtilis, whereas the D_H values for the nanoparticles fabricated by Cassia fistula and hydrothermal heating were calculated to be 37 nm and 29 nm, respectively (Figure 9). The longest sharp peak confirmed the hydrodynamically calculated large size of the nanoparticles prepared by Bacillus subtilis, which was greater than D_TEM (Figure 9a). On the other hand, broad peaks confirmed the hydrodynamically calculated small sizes of the nanoparticles formed by Cassia fistula and hydrothermal heating, which were again greater than D_TEM (Figure 9b-d, Table 1).

Cytotoxicity (Cell Viability %) of the Prepared Titanium Nanoparticles by Three Routes

The titanium nanoparticles synthesized by Bacillus subtilis, Cassia fistula and titanium tetrachloride were compared with each other at day 1, day 15, day 31, day 41 and day 51. The control group (water) depicted 100% non-cytotoxic behavior on all the days investigated. The titanium nanoparticles prepared by Bacillus subtilis were in close agreement with the control group, revealing non-cytotoxic behavior, in comparison to the nanoparticles fabricated by Cassia fistula, revealing moderate cytotoxicity, and those from titanium tetrachloride, revealing severe cytotoxicity, which was significant at day 51 of the cytotoxicity analysis (Figures 10-14).
They fell in the mildly cytotoxic category, as the cell viability was between 60 and 90%. A linear decrease in cell viability % was observed for the titanium nanoparticles formed by Bacillus subtilis, Cassia fistula and titanium tetrachloride as compared to the control group, which was significant. The mean differences between the titanium nanoparticles formed by Bacillus subtilis, Cassia fistula and titanium tetrachloride were also significant on the first day (p < 0.001) (Figure 10).

Cytotoxicity Analysis (Cell Viability %) at 15th Day

The titanium nanoparticles formed by Bacillus subtilis displayed a further reduction in fibroblast cell line viability % on the 15th day as compared to the control group, but these nanoparticles were still non-cytotoxic as cell viability remained >90%. The titanium nanoparticles prepared by Cassia fistula and titanium tetrachloride depicted a comparatively greater reduction in fibroblast cell line viability % when compared with the control group. They were found to be in the mildly cytotoxic range, as cell viability was between 60 and 90%. The 15th day also revealed a linear decrease in cell viability % for the titanium nanoparticles prepared by Bacillus subtilis, Cassia fistula and titanium tetrachloride in comparison to the control group, which was significant. The mean differences between the titanium nanoparticles formed by Bacillus subtilis, Cassia fistula and titanium tetrachloride were also significant at the 15th day (p < 0.001) (Figure 11).

Cytotoxicity Analysis (Cell Viability %) at 31st Day

The titanium nanoparticles formed by Bacillus subtilis again revealed a slight reduction in fibroblast cell line viability % at the 31st day as compared to the control group, but these nanoparticles remained non-cytotoxic as cell viability was >90%. The titanium nanoparticles prepared by Cassia fistula displayed a moderate reduction in fibroblast cell line viability % at the 31st day when compared to the control group and fell within the mildly cytotoxic range, as cell viability was between 60 and 90%. The titanium nanoparticles prepared by titanium tetrachloride depicted the maximum reduction in fibroblast cell line viability % as compared to the control group. They fell in the moderately cytotoxic category, as cell viability was found to be between 30 and 60%. A linear decrease in cell viability % was observed for the titanium nanoparticles formed by Bacillus subtilis, Cassia fistula and titanium tetrachloride as compared to the control group, which was significant. The mean differences between the titanium nanoparticles formed by Bacillus subtilis, Cassia fistula and titanium tetrachloride were also significant at the 31st day (p < 0.001) (Figure 12).

Cytotoxicity Analysis (Cell Viability %) at 41st Day

The titanium nanoparticles formed by Bacillus subtilis again revealed a slight reduction in fibroblast cell line viability % at the 41st day as compared to the control group, but these nanoparticles remained non-cytotoxic, as cell viability was >90%.
The titanium nanoparticles prepared by Cassia fistula displayed a moderate reduction in fibroblast cell line viability % at the 41st day when compared to the control group and fell within the mildly cytotoxic range, as cell viability was between 60 and 90%. The titanium nanoparticles prepared by titanium tetrachloride depicted the maximum reduction in fibroblast cell line viability % as compared to the control group. They fell into the severely cytotoxic category because cell viability was found to be 30% or less. A linear decrease in cell viability % was observed for the titanium nanoparticles formed by Bacillus subtilis, Cassia fistula and titanium tetrachloride as compared to the control group, which was significant. The mean differences between the titanium nanoparticles prepared by Bacillus subtilis, Cassia fistula and titanium tetrachloride were also significant at the 41st day (p < 0.001) (Figure 13).

Cytotoxicity Analysis (Cell Viability %) at 51st Day

The titanium nanoparticles formed by Bacillus subtilis again revealed a slight reduction in fibroblast cell line viability % at the 51st day as compared to the control group, but these nanoparticles remained non-cytotoxic as cell viability was >90%. The titanium nanoparticles prepared by Cassia fistula displayed a moderate reduction in fibroblast cell line viability % at the 51st day when compared to the control group and fell within the moderately cytotoxic range, as cell viability was between 30 and 60%. The titanium nanoparticles prepared by titanium tetrachloride depicted the maximum reduction in fibroblast cell line viability % as compared to the control group. They fell into the severely cytotoxic category because cell viability was found to be 30% or less. A linear decrease in cell viability % was observed for the titanium nanoparticles formed by Bacillus subtilis, Cassia fistula and titanium tetrachloride as compared to the control group, which was significant. The mean differences between the titanium nanoparticles prepared by Bacillus subtilis, Cassia fistula and titanium tetrachloride were also significant at the 51st day (p < 0.001) (Figure 14).

Cell Morphology of Fibroblasts Exposed to Titanium Nanoparticles

Fibroblasts are normally large, elongated, flat cells possessing branched cytoplasm surrounding a nucleus with two or more nucleoli.

Cell Morphology at First Day

The normal characteristic morphology of the fibroblast cell lines was observed when they were exposed to the titanium nanoparticles prepared by Bacillus subtilis on the first day (Figure 15b), which was quite similar to the control group. The initiation of pore formation was revealed by the fibroblast cell lines exposed to the titanium nanoparticles formed by Cassia fistula and titanium tetrachloride (Figure 15c,d), leading to slight degradation in the fibroblasts' cell morphology as compared to the control group (Figure 15a).

Figure 15. Mouse fibroblast cell morphology exposed to the control group (water) on the first day, 15th day, 31st day, 41st day and 51st day, showing normally large, elongated, flat cells with cytoplasm (a,e,i,m,q). Mouse fibroblast cell morphology exposed to the experimental group of titanium nanoparticles prepared by Bacillus subtilis on the first day, 15th day, 31st day, 41st day and 51st day, showing normally large, elongated, flat cells with cytoplasm (b,f,j,n,r).
Mouse fibroblast cell morphology exposed to the experimental group of titanium nanoparticles prepared by Cassia fistula on the first day, 15th day, 31st day, 41st day and 51st day, showing initiation of pore formation (c), increased pore formation (g), increased pore formation and mild degradation (k), increased pore formation and mild degradation (o), and loss of the normal spindle shape (s). Mouse fibroblast cell morphology exposed to the experimental group of titanium nanoparticles prepared by hydrothermal heating on the first day, 15th day, 31st day, 41st day and 51st day, showing slight degradation (d), increased pore formation and degradation (h), greater disruption (l), complete loss of cell symmetry (p), and entire loss of the normal size, shape and symmetry of the cell (t).

Cell Morphology at 15th Day

The normal morphology of the fibroblast cell lines was revealed after exposing them to the titanium nanoparticles formed by Bacillus subtilis at the 15th day (Figure 15f), similar to the control group (Figure 15e). The titanium nanoparticles prepared by Cassia fistula and titanium tetrachloride (Figure 15g,h) manifested increased pore formation and degradation in the fibroblast cell lines in comparison to the control group (Figure 15e).

Cell Morphology at 31st Day

The fibroblasts exposed to the titanium nanoparticles prepared by Bacillus subtilis (Figure 15j) displayed normal morphology without any change in comparison with the control group (Figure 15i). There was mild degradation in the fibroblast cell lines at the 31st day when they were exposed to the titanium nanoparticles prepared by Cassia fistula (Figure 15k). On the other hand, greater disruption of the fibroblast cell lines was depicted on exposure to the titanium nanoparticles prepared by titanium tetrachloride at the 31st day (Figure 15l) in comparison to the control group (Figure 15i).
Cell Morphology at 41st Day

The normal characteristic morphology of the fibroblast cell lines was observed when they were exposed to the titanium nanoparticles prepared by Bacillus subtilis at the 41st day (Figure 15n), which was quite similar to the control group (Figure 15m). Pore formation and mild degradation were revealed by the fibroblast cell lines exposed to the titanium nanoparticles formed by Cassia fistula at the 41st day (Figure 15o) in comparison with the control group (Figure 15m). The titanium nanoparticles prepared by titanium tetrachloride (Figure 15p) displayed complete fibroblast cell line disruption on exposure, with these cells losing their symmetry entirely as compared to the control group on the 41st day (Figure 15m).

Cell Morphology at 51st Day

The fibroblasts exposed to the titanium nanoparticles prepared by Bacillus subtilis (Figure 15r) displayed normal morphology without any change in comparison with the control group at the 51st day (Figure 15q). Comparatively greater degradation of the fibroblast cell lines was revealed at the 51st day when they were exposed to the titanium nanoparticles prepared by Cassia fistula; these fibroblasts showed signs of losing their normal spindle shape (Figure 15s). On the other hand, maximum disruption of the fibroblast cell lines was depicted on exposure to the titanium nanoparticles prepared by titanium tetrachloride at the 51st day (Figure 15t) in comparison to the control group (Figure 15q). These fibroblasts completely lost their normal size, shape and symmetry (Figure 15a-t).

Discussion

There is a dire need to carry out cytotoxicity testing on a large scale before declaring nanoparticles safe for use in medical and dental applications [39]. The MTT assay is the most reliable test utilized to investigate the toxicity of nanoparticles [40]. There is a universal standard for assessing the cytotoxicity of nanoparticles depending on cell viability %, given as: "non-cytotoxic" for cell viability >90%, "mildly cytotoxic" for cell viability between 60 and 90%, "moderately cytotoxic" for cell viability between 30 and 60%, and "severely cytotoxic" for cell viability of 30% or less [38]. The cytotoxicity analysis of the titanium nanoparticles prepared with the help of Bacillus subtilis demonstrated a non-cytotoxic nature when exposed to the fibroblast cell lines, because cell viability was >90% on all days investigated in comparison with the control group. These titanium nanoparticles showed higher biocompatibility and higher cell viability than the other nanoparticles, synthesized by Cassia fistula and titanium tetrachloride. The cytotoxicity analysis of the titanium nanoparticles prepared through Cassia fistula displayed mild cytotoxicity on the first day, 15th day, 31st day and 41st day, because the cell viability was between 60 and 90%. These nanoparticles turned moderately cytotoxic at the 51st day, with cell viability between 30 and 60% as compared to the control group. The cytotoxicity analysis of the titanium nanoparticles prepared through titanium tetrachloride revealed a mildly cytotoxic nature on the first day and 15th day, with cell viability between 60 and 90%; they were moderately cytotoxic at the 31st day, with cell viability between 30 and 60%; and they eventually turned severely cytotoxic at the 41st day and the 51st day, as the cell viability was less than 30% in comparison to the control group (Figures 10-14). The most significant factor responsible for the cytotoxicity of the nanoparticles is their mode of synthesis [41].
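To make the grading standard quoted above concrete, the following minimal Python sketch (our own illustration, not code from the study) maps a measured cell viability percentage to the corresponding cytotoxicity category; the example viability values are hypothetical, chosen only to mirror the day-51 trends reported here.

```python
def cytotoxicity_grade(viability_pct: float) -> str:
    """Map cell viability (%) to the cytotoxicity category of [38]."""
    if viability_pct > 90:
        return "non-cytotoxic"
    elif viability_pct >= 60:
        return "mildly cytotoxic"
    elif viability_pct > 30:
        return "moderately cytotoxic"
    else:
        return "severely cytotoxic"  # 30% viability or less

# Hypothetical viability values that mirror the reported day-51 trends
for route, viability in [("Bacillus subtilis", 92.0),
                         ("Cassia fistula", 45.0),
                         ("titanium tetrachloride", 25.0)]:
    print(f"{route}: {cytotoxicity_grade(viability)}")
```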
Bacteria are considered the best option for synthesizing metal oxide nanoparticles because of their outstanding biocompatibility, flexible nature, high growth rate, high yield, cost-effectiveness, and ease of culturing and manipulation. The biocompatibility of titanium nanoparticles produced by bacteria is due to the presence of the natural biomolecules used during their synthesis, such as enzymes [42]. These biomolecules might have produced safe, strong, stable and sustainable layers around the nanoparticles during synthesis, thus rendering them biocompatible without producing any toxic by-products. The results in the current study regarding the titanium nanoparticles synthesized by Bacillus subtilis matched the literature [43]. Different phytochemicals present in plants, such as terpenoids, glycosides and alkaloids, are responsible for inducing toxicity in nanoparticles, because they adversely affect the biological functions of the cells to which they are exposed [44]. The titanium nanoparticles produced by Cassia fistula might have become toxic due to residual phytochemicals released during the synthesis. These residual phytochemicals might have become entrapped and accumulated on the surfaces of the titanium nanoparticles, making them cytotoxic. Eventually, these nanoparticles, after coming into contact with the cell lines, might have initiated a toxic reaction, thus reducing cell viability [45]. The cytotoxicity results of the plant-derived titanium nanoparticles in the current study are in accordance with previous studies. Nanoparticles prepared by the conventional hydrothermal method involve toxic chemicals and high temperatures during synthesis [41]. These two factors might have released toxic by-products that might have been adsorbed on the surface of the nanoparticles, bringing them into an unstable state. Thus, these titanium nanoparticles became cytotoxic and non-biocompatible, reducing cell viability [46]. Nanoparticles synthesized by conventional methods have revealed enhanced cytotoxicity compared to those synthesized by biological processes, as a result of utilizing toxic chemicals [47]. Other factors that play a key role in the cytotoxicity of nanoparticles predominantly depend on their physico-chemical properties, the time duration of exposure, and the concentration used. The important physico-chemical properties that take part in cytotoxicity include shape, size, phase state, texture, elemental composition, band-gap absorbance energy, functional groups and surface roughness, which are measured by standard characterization techniques such as XRD, SEM, TEM, EDS, AFM, DRS, DLS and FTIR. The size and shape of nanoparticles are the key factors involved in cytotoxicity, and the results of XRD, SEM, TEM and DLS were in accordance with each other, as per standard protocols, in the current study. The XRD and SEM data gave the exact particle sizes of all the titanium nanoparticles fabricated by Bacillus subtilis, Cassia fistula and hydrothermal heating, which were in close agreement with the EDS, AFM, DRS and FTIR results. There was close agreement between the particle sizes of the titanium nanoparticles fabricated by Bacillus subtilis, Cassia fistula and hydrothermal heating measured by TEM and DLS, which confirmed both the actual physical sizes and the hydrodynamic sizes of these nanoparticles.
The particle sizes of all the titanium nanoparticles calculated from D_TEM images taken through TEM were comparatively smaller than the D_H data calculated via DLS, which revealed larger particle sizes. This is because TEM measures the actual physical particle size of nanoparticles, which is always smaller, whereas DLS measures the hydrodynamic size, which is always larger. This particle size difference between the TEM and DLS values in the current study was in accordance with the already reported literature, which confirmed their significant role in cytotoxicity [48]. Previous research has reported that nanoparticles of large size and spherical shape are non-cytotoxic, and vice versa. Large, spherical particles have smaller surface-area-to-volume ratios, due to which they cannot penetrate cells easily. The mixed phase state of anatase and rutile is safer compared to the pure phase state of anatase, because anatase is a highly reactive state. Minimal surface roughness supports the non-cytotoxic behavior of nanoparticles, as compared to the moderate and maximum surface roughness depicted by the other nanoparticles. The standard value for the band-gap absorbance energy is 3.2 eV, where a calculated value greater than this indicates a smaller particle size, whereas a calculated value less than this indicates a larger nanoparticle size [41]. The titanium nanoparticles prepared by Bacillus subtilis were large in size and spherical, contained a large quantity of Ti, and had a band-gap absorbance energy of 2.7 eV, minimum surface roughness, few functional groups and a mixed phase state of anatase and rutile. The titanium nanoparticles formulated by Cassia fistula were small in size, had a mixture of spherical and irregular shapes, contained a smaller quantity of Ti, and had a band-gap absorbance energy of 3.6 eV, moderate surface roughness, more functional groups and a pure anatase phase state. On the other hand, the titanium nanoparticles produced by hydrothermal heating were very small in size and dominantly irregular in shape, contained the lowest quantity of Ti, and had a band-gap absorbance energy of 3.9 eV, maximum surface roughness, a large number of functional groups and a pure anatase phase state. The titanium nanoparticles prepared by Bacillus subtilis were the most biocompatible because of their large size and spherical shape, which might have prevented their absorption into the fibroblast cell lines in large quantities. In contrast, the titanium nanoparticles formed by Cassia fistula and hydrothermal heating were of smaller size and irregular shape, which might have increased their absorption into the fibroblast cell lines, leading to their cytotoxicity. The minimum roughness, mixed anatase-rutile phase state and few functional groups of the titanium nanoparticles formed by Bacillus subtilis might have provided stability and sustainability to these nanoparticles, thus preventing them from becoming cytotoxic. On the other hand, the moderate and maximum roughness, pure anatase phase state and greater number of functional groups observed in the titanium nanoparticles prepared by Cassia fistula and hydrothermal heating might have reduced their stability and sustainability, resulting in their cytotoxicity.
The presence of a large amount of Ti in the elemental composition of the titanium nanoparticles formed by Bacillus subtilis might have prevented them from becoming cytotoxic, as compared to the smaller amounts of Ti in the elemental composition of the titanium nanoparticles formulated by Cassia fistula and hydrothermal heating. Increased concentration and time duration of nanoparticle exposure to cell lines also reduce cell viability with every passing day and eventually make the nanoparticles cytotoxic [49,50]. The titanium nanoparticles synthesized by Bacillus subtilis were exposed to the fibroblast cell lines at a high concentration and for a long duration but remained non-cytotoxic even after a month. This was made possible by the formation of a biologically stable and uniform capping layer around these nanoparticles, which imparted biocompatibility and safety without affecting cell viability. The titanium nanoparticles synthesized by Cassia fistula became moderately cytotoxic after a month of exposure to the fibroblast cell lines, because the residual phytochemicals adsorbed on their surfaces became more toxic, reducing cell viability with the progression of time. The titanium nanoparticles synthesized by hydrothermal heating became severely cytotoxic after a month of exposure to the fibroblast cell lines. A plausible explanation is that the toxic by-products released during synthesis might have made these nanoparticles unstable and reduced cell viability with every passing day. The possible mechanisms responsible for generating cytotoxicity in titanium nanoparticles are apoptosis, inflammation and oxidative stress, which result in the rapid and excessive generation of ROS (reactive oxygen species), leading to a reduction in the viability, and then the death, of the cells exposed to the nanoparticles. The titanium nanoparticles formed by Bacillus subtilis did not produce any abnormal morphological changes in the fibroblast cell lines in comparison to the control group. The fibroblasts were large, flat, elongated and spindle-shaped, with characteristic processes extending outward. This shows that all the fibroblast cells exposed to the titanium nanoparticles formed by Bacillus subtilis were mostly viable (alive) and normal, given their normal size and shape. The titanium nanoparticles formed by Cassia fistula produced slight abnormal changes, with pore formation and degradation of the cell structure, when compared to the control group. These nanoparticles decreased the number of viable (alive) fibroblasts by changing their size and shape and, in turn, disintegrating them. The titanium nanoparticles formed by titanium tetrachloride produced an increased number of pores with complete disruption of the cell structure, resulting in irregular cell sizes and shapes. Thus, these nanoparticles severely reduced the number of viable (alive) fibroblast cells. The morphological changes, including pore formation and degradation, were similar to those in previous research on these materials (Figure 15) [51].

Conclusions

The present study concluded that the route of synthesis greatly influences the cytotoxicity of nanoparticles. The titanium nanoparticles synthesized by Bacillus subtilis remained non-cytotoxic, with enhanced cell viability >90%, even after exposure to L929 mouse fibroblast cell lines for more than one month, as compared to the titanium nanoparticles produced by the other routes, Cassia fistula and hydrothermal heating, which depicted reduced cell viability within the cytotoxic ranges.
Moreover, characterization techniques including XRD, SEM, DRS, TEM and AFM are the most important tools for assessing the cytotoxic behavior of TiO2 nanoparticles. These tools clearly favored the TiO2 nanoparticles prepared by Bacillus subtilis, owing to their large size, spherical shape, mixed anatase-rutile phase form and minimum surface roughness, in comparison to the TiO2 nanoparticles fabricated by Cassia fistula and hydrothermal heating, which revealed smaller particle sizes, irregular shapes, a pure anatase phase form and maximum roughness. Additionally, EDS and FTIR depicted an increased content of titanium and a decreased level of functional groups in the TiO2 nanoparticles synthesized by Bacillus subtilis, as compared to those formed by Cassia fistula and hydrothermal heating, which showed a decreased content of titanium and an increased content of functional groups. This shows that the synthesis of titanium nanoparticles through Bacillus subtilis is a favorable, easy, sustainable and biocompatible route, not only for the safe production of nanoparticles but also for advancing nanomaterial production in dentistry. This study thus supports the concepts of environmental sustainability and green dentistry.
THE EFFECT OF COMPANY VALUE, PROFITABILITY AND LIQUIDITY ON CSR DISCLOSURE

The objective of this research is to examine the impact of enterprise value, profitability, and liquidity on the disclosure of corporate social responsibility. The quota sampling method was chosen to determine the sample data. The sample for this research consists of entities in the SRI-KEHATI index on the Indonesia Stock Exchange, with financial reports for the years 2019-2021. This paper uses linear regression for hypothesis testing. The results show that the independent variables (enterprise value, profitability, and liquidity) positively affect the dependent variable (CSR disclosure). This implies that companies with higher enterprise values, profitability, and liquidity are more inclined to disclose information about their corporate social responsibility efforts.

Research covering four countries, including Malaysia, India and Pakistan, illustrates that although the four countries have different disclosure concepts, they yield the same result, namely that CSR disclosure has a positive impact on company performance (Natalina, 2022). Research conducted on entities in the LQ45 index in 2013-2014, which tested the effect of governance and profitability on firm value with CSR disclosure as an intervening variable, found that profitability affects firm value through CSR disclosure (Kamaliah, 2020). Research conducted by Korniasari & Adi (2021), using a sample of 33 consumer goods manufacturing companies for the 2018-2020 period, shows that entity growth and public ownership have a negative impact on CSR disclosure, while leverage and company size have a positive impact on CSR disclosure. Another study, using a sample of seven food and beverage production companies from a population of 13 companies, found that profitability has a positive impact on CSR disclosure, but that company size has no impact (Liana, 2020).

There is an assumption that governance within an entity can affect the fulfillment of the entity's social responsibility. For this reason, following the suggestions of previous authors, this study uses other independent variables, namely firm value, liquidity, and profitability, in measuring the effect on CSR disclosure. Multiple linear regression is the analytical method applied in this research, with samples taken using quota sampling. The research question is: do liquidity, profitability, and firm value have a positive impact on CSR disclosure?

Ideally, the higher the level of profitability, company value, and liquidity of an entity, the greater the entity's opportunity to carry out CSR activities. Do these variables then have a positive influence on CSR disclosure? This is what motivated this research, namely to prove whether the variables of company value, liquidity, and profitability of SRI-KEHATI-indexed entities on the IDX have a positive impact on the fulfilment of these entities' social responsibility disclosures.

THEORETIC

Company Value

Company value represents the worth or intrinsic value of a business entity, reflecting its financial health, assets, liabilities, and future cash flows (Jensen, 2019).
Several theories and methodologies contribute to the understanding of company value, including the following.

Discounted Cash Flow (DCF) Theory

DCF theory asserts that the value of a company is determined by the present value of its future cash flows (Sutjipto, et al., 2020). This method involves forecasting future cash flows and discounting them back to their present value using an appropriate discount rate. The underlying assumption is that the company's value is primarily driven by its ability to generate cash over time.

Earnings and Earnings Growth Theory

According to this theory, a company's value is closely related to its current and expected earnings. Investors often assess a company's value based on its price-to-earnings (P/E) ratio and its potential for earnings growth (Endri, et al., 2020). Companies with higher earnings and growth prospects are typically valued more highly.

Profitability

Profitability in the context of business and finance is underpinned by several foundational theories and principles. Profitability measures a company's ability to generate earnings or profits relative to its costs and investments.

Profit Margin Theory

Profitability is often assessed through the concept of profit margins. Profit margin theory emphasizes the relationship between a company's total revenue and its net profit (Nariswari & Nugraha, 2020). It posits that a higher profit margin, calculated as net profit divided by total revenue, indicates greater profitability. Companies strive to increase their profit margins by controlling costs and improving pricing strategies.

Return on Investment (ROI) Theory

ROI theory evaluates profitability in relation to the capital invested in a business. It measures how effectively a company generates profits from its invested capital (Choiriyah, et al., 2020). The ROI formula divides net profit by the total capital employed, encompassing both equity and debt. A higher ROI suggests better profitability and more efficient capital utilization.

Sustainable Profitability

Modern theories of profitability increasingly focus on sustainability (Choiriyah, et al., 2020). They argue that long-term profitability depends on ethical business practices, environmental responsibility, and social consciousness. A positive public image and stakeholder trust can contribute to sustainable profitability.

CSR Disclosure

Corporate social responsibility disclosure is rooted in several foundational theories and principles related to corporate social responsibility and sustainability reporting. CSR disclosure refers to the practice of companies sharing information about their social, environmental, and ethical initiatives and performance.

Legitimacy Theory

Legitimacy theory asserts that companies must maintain a social contract with society to be considered legitimate and socially responsible (Martens & Bui, 2023). By disclosing their CSR activities and impacts, companies seek to legitimize their operations and demonstrate their commitment to ethical and responsible behavior.

Accountability Theory

CSR disclosure is closely linked to accountability theory, which argues that companies have a moral and ethical obligation to be accountable for their actions and their impacts on society and the environment (Benlemlih & Bitar, 2018). Disclosure of CSR initiatives allows stakeholders to hold companies accountable for their social and environmental performance.
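To make the valuation and profitability measures above concrete, here is a minimal Python sketch of the DCF, profit margin, and ROI formulas; the figures used are invented for the example, not drawn from any company in this study.

```python
def dcf_value(cash_flows, discount_rate):
    """Present value of forecast cash flows (DCF theory)."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

def profit_margin(net_profit, total_revenue):
    """Net profit divided by total revenue (profit margin theory)."""
    return net_profit / total_revenue

def roi(net_profit, capital_employed):
    """Net profit over total capital employed, equity plus debt (ROI theory)."""
    return net_profit / capital_employed

# Invented figures for illustration only
print(round(dcf_value([100.0, 110.0, 121.0], discount_rate=0.10), 2))  # 272.73
print(profit_margin(net_profit=20.0, total_revenue=200.0))             # 0.1
print(roi(net_profit=20.0, capital_employed=150.0))                    # ~0.133
```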
Relations between Variables

Company value on CSR disclosure: Firmansyah, et al. (2021) revealed that the general public believes that CSR disclosure in the annual reports presented by entities can reflect a good image and increase market confidence in these entities, so entity managers are advised to carry out corporate social activities and disclosures in order to increase the value of the entity. Wahyuningsih & Mahdar (2018) support this with results illustrating that the larger the company, the more it attracts the interest of the general public; this can motivate managers to be careful in determining policies. The policies taken by managers must favor the general public, so as not to cause social problems in the future; therefore, company value will have a positive impact on CSR disclosure. Efforts to increase the value of the company are very likely to be influenced by expenditures aimed at investment, which can increase the growth and value of the company in the future. One expenditure that can support this is the fulfillment of social responsibility, in order to improve the public's impression of the company. The above conclusions form the basis of the following hypothesis:

H1: Company value has a positive impact on CSR disclosure.

Profitability on CSR disclosure: Platonova, et al. (2018) explain that profitability has an influence on the disclosure of social responsibility, although it is not significant. Research conducted on manufacturing companies gave the same result, namely that profitability, calculated using the earnings per share (EPS) ratio, has an influence on CSR. EPS is an indicator used by shareholders to assess the performance of an entity; an increase in EPS encourages entity managers to expand CSR disclosures in order to attract investors to make or increase investment (Aviana, 2019). The greater the level of profitability of an entity, the greater the opportunity for CSR activities and for disclosure of the entity's social activities. This conclusion underlies the following hypothesis:

H2: Profitability has a positive impact on CSR disclosure.

Liquidity on CSR disclosure: high liquidity shows the ability of an entity to settle its current liabilities using its current assets. This is evidenced by the results of research conducted on manufacturing companies, in which liquidity partially has a significant positive influence on CSR disclosure (Marulitua, 2021). The more liquid an entity, the higher its opportunity to implement CSR and disclose it. This conclusion underlies the following hypothesis:

H3: Liquidity has a positive impact on CSR disclosure.

In collecting data, the author uses the quota sampling method; the objects researched are entities in the SRI-KEHATI index on the IDX and their financial reports for 2019-2021. The element studied in this research is CSR disclosure.
There are two variables in this study: the dependent variable, CSR disclosure, and the independent variables, firm value, profitability and liquidity. The analytical tool used in this research is regression analysis. The simultaneous regression coefficient test determines whether the independent variables have a joint effect on the dependent variable. The results give an F value of 18.575 (significance F = 0.000), smaller than the 0.05 significance level; in conclusion, the estimated linear regression can be used to describe the impact of firm value, profitability and liquidity on CSR disclosure, and all independent variables jointly have an impact on the dependent variable. The partial test is used to test each independent variable against the dependent variable separately.

The regression equation is CSRIJ = β0 + β1 PBV + β2 ROE + β3 QR + e, estimated as CSRIJ = 0.490 + 0.038 PBV + 0.010 ROE + 0.008 QR + e. From the table, the test results can be read as follows: (1) The constant a = 0.490 means that if the PBV, ROE and QR indices are 0, then CSRD will show a value of 0.490. (2) The firm value (PBV) regression coefficient of 0.038 has a positive sign, illustrating that firm value has a positive impact on CSRD: if the firm value variable increases by one unit, CSRD will increase by 0.038. (3) The profitability (ROE) regression coefficient of 0.010 is positive, showing that profitability, proxied by ROE, has a positive impact on CSRD: if the ROE variable increases by one unit, CSRD will increase by 0.010. (4) The liquidity (QR) regression coefficient of 0.008 has a positive sign, showing that liquidity, proxied by QR, has a positive effect on CSRD: if the QR variable increases by one unit, CSRD will increase by 0.008.

In the hypothesis testing, the regression coefficient test shows a significance value for firm value of 0.038 (sig < 0.05). This means that company value has an impact on CSR disclosure, so H1 is accepted. The t-test also shows a significance value for profitability (ROE) of 0.010 (sig < 0.05), illustrating that profitability has an impact on CSR disclosure, so H2 is accepted. The t-test likewise shows a significance value for liquidity (QR) of 0.008 (sig < 0.05), indicating that liquidity has an impact on CSR disclosure, so H3 is accepted.

Discussion of results: based on the hypothesis testing, hypothesis one (H1) is accepted, namely that company value has a positive impact on CSR disclosure. Firmansyah et al. (2021), studying food and beverage companies, showed that in addition to product quality, the implementation of CSR can increase positive market reactions, which in turn increases company value. The results of this study support the view that the higher a company's value, the greater the need for the entity to show its impact to the public. This means that higher-value entities will also make disclosures in accordance with applicable standards.
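Before turning to the remaining hypotheses, the sketch below shows how an estimation of this form can be set up with the statsmodels library; the data are randomly generated placeholders, not the SRI-KEHATI sample analyzed in this paper, so the fitted coefficients will only roughly echo the reported ones.

```python
import numpy as np
import statsmodels.api as sm

# Placeholder data standing in for the SRI-KEHATI panel (2019-2021)
rng = np.random.default_rng(0)
n = 60
pbv = rng.uniform(0.5, 4.0, n)    # firm value proxy: price-to-book value
roe = rng.uniform(-5.0, 25.0, n)  # profitability proxy: return on equity (%)
qr = rng.uniform(0.2, 3.0, n)     # liquidity proxy: quick ratio
# Simulate a CSR disclosure index roughly following the reported estimates
csrd = 0.490 + 0.038 * pbv + 0.010 * roe + 0.008 * qr + rng.normal(0, 0.05, n)

X = sm.add_constant(np.column_stack([pbv, roe, qr]))  # adds the intercept term
model = sm.OLS(csrd, X).fit()
print(model.summary())  # coefficients with t-tests, plus the joint F-statistic
```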
Hypothesis two (H2), that profitability has a positive impact on CSR disclosure, is accepted according to the data processing results. This means that profitability also shows a positive impact on CSR disclosure, in line with the results of research conducted on manufacturing companies, which found that profitability influences CSR disclosure. Companies that can invest more in social activities are companies that have fairly strong financial performance (Wahyuningsih & Mahdar, 2018). CSR is a cost for the entity: if an entity has a low profit level, CSR activities will become an obstacle for it; conversely, if the entity has a high level of profitability, its opportunity to implement CSR will be higher.

Hypothesis three (H3) is also accepted, showing that liquidity has a positive impact on CSR disclosure. This is in line with the results of research on manufacturing companies listed on the IDX for 2015-2019, which found that liquidity influences CSR disclosure. The higher the level of liquidity of an entity, the wider its opportunity to carry out and disclose social activities (Marulitua, 2021). It can be concluded that an entity is liquid if its current assets are greater than its current liabilities; liquidity thus reflects the ability to fund and fulfill the entity's current obligations. The higher an entity's liquidity, the higher its ability and opportunity to fulfill its social responsibilities.

Limitations

The test carried out in this study compared independent variable data (PBV, ROE, and quick ratio) with dependent variable data (CSRDI) in the same year, for 2019-2021. The author had limited data for tests comparing independent variable data with dependent variable data in the following year. This is because the time frame of this research was limited.
DARNet: Dual-Attention Residual Network for Automatic Diagnosis of COVID-19 via CT Images

The ongoing global pandemic of Coronavirus Disease 2019 (COVID-19) poses a serious threat to public health and the economy. Rapid and accurate diagnosis of COVID-19 is crucial to prevent the further spread of the disease and reduce its mortality. Chest computed tomography (CT) is an effective tool for the early diagnosis of lung diseases, including pneumonia. However, detecting COVID-19 from CT is demanding and prone to human error, as some early-stage patients may have negative findings on images. Recently, many deep learning methods have achieved impressive performance in this regard. Despite their effectiveness, most of these methods underestimate the rich spatial information preserved in the 3D structure or suffer from the propagation of errors. To address this problem, we propose a Dual-Attention Residual Network (DARNet) to automatically identify COVID-19 from other common pneumonia (CP) and healthy people using 3D chest CT images. Specifically, we design a dual-attention module consisting of channel-wise attention and depth-wise attention mechanisms. The former is utilized to enhance channel independence, while the latter is developed to recalibrate the depth-level features. We then integrate them in a unified manner to extract and refine the features at different levels to further improve the diagnostic performance. We evaluate DARNet on a large public CT dataset and obtain superior performance. Besides, the ablation study and visualization analysis prove the effectiveness and interpretability of the proposed method.

I. INTRODUCTION

The Coronavirus Disease 2019 (COVID-19), caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), is spreading rapidly across the world through extensive person-to-person transmission [1]. The World Health Organization (WHO) officially declared COVID-19 a pandemic on 11 March 2020. As of 23 August 2021, COVID-19 had infected more than 211 million people in more than 192 countries and territories and caused more than 4.43 million deaths [2]. Due to its high infectivity and fatality rate, the COVID-19 pandemic has had a devastating impact on public health and the economy. Early diagnosis of COVID-19 is of great importance for preventing the further spread of the disease and delivering a proper treatment regimen. The real-time reverse transcription-polymerase chain reaction (RT-PCR) test is the gold standard for the diagnosis of COVID-19 infection [3]. However, the high false-negative rate [1] of RT-PCR may delay the diagnosis of potential cases. As a complementary strategy, chest X-ray and computed tomography (CT) are widely used in the early diagnosis of patients suspected of SARS-CoV-2 infection [4]. Compared with X-ray images, chest CT scans have higher sensitivity in diagnosing COVID-19 infection and can provide more detailed information about the lesions, which is helpful for quantitative analysis [5]. Early investigations have observed typical radiographic features on chest CT images, such as ground-glass opacities (GGO), multifocal patchy consolidation, and vascular dilation in the lesions [6]-[9]. However, detecting COVID-19 from CT images is demanding and prone to human error, as some early-stage patients may have normal imaging features.
Besides, the similar imaging findings between COVID-19 cases and common pneumonia (CP) cases make the two difficult to differentiate. Recently, many deep learning methods have been applied to the automatic diagnosis of COVID-19 using chest CT images and achieved impressive performance. Some keyframe-based methods [8], [10] use local abnormal slices rather than 3D images to make diagnostic decisions, while [11]-[13] focus on segmenting the lesion area and then extract specific features for diagnosis. Despite their effectiveness, most of these methods provide a multi-phase framework, which means that errors in upstream tasks propagate backwards. For instance, the keyframe-based methods rely highly on the accurate classification of abnormal slices; otherwise, incorrect results will negatively affect subsequent tasks. Furthermore, these methods usually have high requirements for annotation data because of the additional upstream tasks. Other methods [14], [15] make efforts to extend traditional 2D neural networks to classify 3D CT images and obtain promising results. However, this simple network transformation has limitations in taking full advantage of the 3D properties of CT images, resulting in diagnostic performance that may not meet actual clinical needs.

To this end, in this paper, we propose a dual-attention residual network (DARNet) to automatically diagnose COVID-19 from CP and healthy people using CT images. In DARNet, the 3D variant of ResNet-18 [16] is used as the backbone network, which takes a full 3D chest CT image as input. To fully leverage the 3D spatial information, we design a dual-attention module to extract and refine the representation features at different levels. The module mainly consists of two parts: 1) channel-wise attention and 2) depth-wise attention. The former was first proposed in [17], and we implement its 3D extension; the latter is developed in this study and can adaptively assign depth-level weights to each feature map during training. We evaluate our method on the largest public CT image dataset, to the best of our knowledge. The experimental results show that DARNet is superior to existing methods. We further provide ablation studies and prove the effectiveness of the proposed dual-attention module in improving the classification accuracy and the interpretability of the model. In summary, our work makes the following major contributions:

• We propose DARNet to realize automatic and accurate diagnosis of COVID-19 using 3D chest CT images. In addition to superior classification performance, our method is more sensitive to the location of the lesion regions in visual attention.

• To make full use of the 3D spatial information of CT images, we design a dual-attention module, which can refine the learned features at different levels. The experimental results prove the effectiveness of this module in improving the classification performance and the interpretability.

A. Automatic Diagnosis of COVID-19

Recently, the successful application of artificial intelligence (AI) in medical image analysis [18] has promoted the development of radiological diagnosis technology. To combat the current pandemic, plenty of research effort has been carried out over the past few months to design AI systems for the early diagnosis of COVID-19 via radiological imaging. [19]-[21] employed convolutional neural networks (CNNs) to automatically identify COVID-19 infection from chest X-ray images and obtained impressive results.
However, these methods are still limited due to the low contrast and the lack of significant features caused by the high overlapping of ribs and soft tissues. Compared with a single X-ray image, a chest CT scan composed of hundreds of 2D slices can reflect more detailed radiographic features of the lesions, such as GGO and consolidation. To simplify the computation, several keyframe-based methods [8], [10] were proposed to diagnose COVID-19 in CT images and achieved promising results. But these methods underestimated the 3D spatial information of CT images and relied highly on the accurate detection of abnormal slices. [11]-[13] proposed segmentation-based approaches that can generate more specific lesion information, such as the number and volume of lesions, which is valuable for quantitative analysis in COVID-19 diagnosis. However, obtaining large amounts of CT data with segmentation labels is the primary challenge of these methods. Besides, most of the above methods provide a multi-stage framework, which means that they may be affected by error propagation. [14], [15] directly transfer 2D neural networks to classify 3D CT images, but their performance may not meet actual clinical needs. We thus develop DARNet to diagnose COVID-19 in an end-to-end fashion; it takes a complete chest CT image as input and achieves competitive classification performance.

B. Attention Mechanism

The attention mechanism is an effective way to improve network performance by enhancing the learned features. Hu et al. [17] proposed channel-wise attention (CA) to refine the hidden features at the channel level during training, which can make the network more focused on the important regions. In other words, the CA module amplifies the differences between channel features by highlighting the features with a greater response and suppressing the others. Most importantly, this adjustment mechanism is completely dynamic and learnable. The effectiveness of the CA module has been proved in many applications [22]-[24]. At the same time, there have been many variations and extensions. For example, [25], [26] proposed joint attention modules based on the CA module, which bring a significant improvement in segmentation performance. These studies show that multi-attention fusion has great potential for improving network performance. Inspired by this, we design a novel attention mechanism called depth-wise attention (DA) to recalibrate the depth-level features. By combining this module with the CA module, we construct a dual-attention module to improve the representation ability of 3D neural networks.

A. Overall Architecture

As shown in Fig. 1(a), the overall architecture of DARNet mainly consists of three submodules: 1) the input module, 2) the dual-attention modules, and 3) the output module. Considering the computational complexity and GPU memory capacity, we use 3D ResNet-18 [16] as the backbone network. Specifically, the input module is composed of a 3D convolutional layer (Conv3D) with a kernel size of (3, 7, 7) and a stride of (1, 2, 2), a batch normalization layer (BN), and a ReLU activation layer. Besides, unlike the naive ResNet-18, we remove the max-pooling layer. In this way, the input 3D CT image is downsampled by a factor of 8 in the depth dimension and a factor of 16 in the other two dimensions. The higher-resolution feature maps retain more contextual information, which is also conducive to visual analysis.
In the feature extraction part, a total of 8 dual-attention modules with residual connections constitute the main structure. Each dual-attention module consists of two consecutive convolutional layers with a kernel size of (3, 3, 3), followed by BN, ReLU, and the two attention mechanisms: 1) channel-wise attention and 2) depth-wise attention. More detailed information about this module is introduced in the next subsection. In the output module, a global average pooling layer (GAP) is first used to squeeze the input features. A subsequent fully connected layer with a softmax layer then generates the corresponding prediction probabilities. Finally, the network returns the predicted category based on the probabilities.

B. Dual-Attention Module

A complete CT image is usually composed of hundreds of 2D slices stacked in sequence. These slices have high spatial continuity and content relevance, constituting the complete contextual information of the lungs. Moreover, we observe that lesions of various sizes appear randomly in the lungs, so that only a portion of the slices contain visible disease characterizations. The spatial correlations of the different dimensions and the inter-slice information are entangled by the 3D convolution operator when a 3D CNN is used to directly classify CT images. To refine the hidden features, Hu et al. [17] proposed the channel-wise attention module to enhance channel independence and thereby improve network performance. But this module has limitations in our task, due to the sparse distribution of lesion features at the depth level. Motivated by this observation, we design a complementary mechanism, the depth-wise attention module, for 3D CNNs to recalibrate the depth-level features, which can make the network more sensitive to the important regions of the images. By integrating the DA and CA modules, we construct the dual-attention module used in DARNet.

1) Channel-wise Attention Module: We implement the 3D version of the CA module based on the original idea in [17], as shown in Fig. 1(b). First, the input features are squeezed by a GAP layer. Consider the input feature map $F_{in} \in \mathbb{R}^{C \times D \times H \times W}$, $F_{in} = [f_1, f_2, \ldots, f_C]$, where $C$, $D$, $H$, and $W$ are the input channels, depth, height, and width, respectively, and $f_i \in \mathbb{R}^{D \times H \times W}$. The output of the GAP is represented by $Z \in \mathbb{R}^{C \times 1 \times 1 \times 1}$ with elements $z_c = \frac{1}{DHW}\sum_{i=1}^{D}\sum_{j=1}^{H}\sum_{k=1}^{W} f_c(i,j,k)$. This operation embeds the global spatial information in the vector $Z$. The vector is then transformed into the weight vector $\hat{Z} = \sigma(W_2(\xi(W_1 Z)))$, with $W_1 \in \mathbb{R}^{\frac{C}{r} \times C}$ and $W_2 \in \mathbb{R}^{C \times \frac{C}{r}}$ being the weights of two fully-connected layers, $\xi(\cdot)$ the ReLU function, and $\sigma(\cdot)$ the sigmoid function. The parameter $r$ refers to the reduction ratio and is set to 16 in this study. The recalibrated output is $F_{out} = [\hat{z}_1 f_1, \hat{z}_2 f_2, \ldots, \hat{z}_C f_C]$. Each element in $\hat{Z}$ indicates the importance of the corresponding channel and is used to dynamically amplify or suppress the input response. In this way, the CA module can enhance the important features and ignore the irrelevant ones. However, directly extending this module to 3D neural networks for classifying CT images is limited: due to the sparse distribution of lesions, the information between slices varies greatly, and the performance improvement achieved by differentiating channel-level features alone is not very significant. Therefore, we design the DA module to make up for this defect.

2) Depth-wise Attention Module: For the DA module, as in the CA module, the spatial information within each slice is first aggregated by a GAP layer, as shown in Fig. 1(c).
Consider the input feature map $U_{in} \in \mathbb{R}^{C \times D \times H \times W}$, $U_{in} = [u_{1,1}, u_{1,2}, \ldots, u_{i,j}, \ldots, u_{C,D}]$, with $u_{i,j} \in \mathbb{R}^{H \times W}$. The output of the GAP is represented by $T \in \mathbb{R}^{C \times D \times 1 \times 1}$ with elements $t_{i,j} = \frac{1}{HW}\sum_{p=1}^{H}\sum_{q=1}^{W} u_{i,j}(p,q)$. Then, a gating mechanism is designed to learn the non-linear and non-mutually-exclusive relationships in the depth dimension. The gating mechanism is parameterized by two fully-connected layers and two non-linear activation functions. The output is $\hat{T} = \sigma(W_2(\xi(W_1 T)))$, with $W_1 \in \mathbb{R}^{\frac{CD}{r} \times CD}$ and $W_2 \in \mathbb{R}^{CD \times \frac{CD}{r}}$ being the weights of the two fully-connected layers. The parameter $r$ here is equal to the number of input channels. Finally, the resultant tensor is used to refine $U_{in}$ to $U_{out} = [\hat{t}_{1,1} u_{1,1}, \hat{t}_{1,2} u_{1,2}, \ldots, \hat{t}_{i,j} u_{i,j}, \ldots, \hat{t}_{C,D} u_{C,D}]$. (4)

The DA module recalibrates the depth-level features by adaptively assigning weights, which makes the network more focused on the important regions distributed sparsely along the depth dimension. This module makes up for the deficiency of the CA module. We then develop the dual-attention module of DARNet as the serial combination of the two, which can refine the learned features at different levels.
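To make the two mechanisms concrete, the following PyTorch sketch implements the CA and DA modules as we read the equations above; the class names, layer layout, and the CA-then-DA ordering are our own reconstruction under those equations, not the authors' released code.

```python
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    """3D channel-wise attention (SE-style squeeze-and-excitation)."""
    def __init__(self, channels, r=16):          # r: reduction ratio
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // r),  # W1
            nn.ReLU(inplace=True),               # xi
            nn.Linear(channels // r, channels),  # W2
            nn.Sigmoid(),                        # sigma
        )

    def forward(self, x):                        # x: (B, C, D, H, W)
        z = x.mean(dim=(2, 3, 4))                # GAP over D, H, W -> (B, C)
        w = self.fc(z).view(*z.shape, 1, 1, 1)   # channel weights Z-hat
        return x * w

class DepthAttention3D(nn.Module):
    """Depth-wise attention: recalibrates each (channel, slice) map."""
    def __init__(self, channels, depth):
        super().__init__()
        cd, r = channels * depth, channels       # r equals the channel count
        self.fc = nn.Sequential(
            nn.Linear(cd, cd // r),              # W1: (CD/r) x CD
            nn.ReLU(inplace=True),
            nn.Linear(cd // r, cd),              # W2: CD x (CD/r)
            nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (B, C, D, H, W)
        b, c, d, h, w = x.shape
        t = x.mean(dim=(3, 4)).reshape(b, c * d) # GAP over H, W per slice
        w_ = self.fc(t).view(b, c, d, 1, 1)      # depth-level weights T-hat
        return x * w_

# Dual-attention as a serial combination, e.g. inside one residual block
x = torch.randn(2, 64, 8, 28, 28)
out = DepthAttention3D(64, 8)(ChannelAttention3D(64)(x))
print(out.shape)  # torch.Size([2, 64, 8, 28, 28])
```

Note that, as written, the DA module fixes the feature depth at construction time, which is consistent with the fixed 64-slice input volumes used in the paper.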
IV. EXPERIMENTS

We conduct experiments on a public dataset provided by the China Consortium of Chest CT Image Investigation (CC-CCII, available at http://ncov-ai.big.ac.cn/download?lang=en) [11] to evaluate our method. In this section, the construction of the dataset and the implementation details are described first. Then, we compare different networks in terms of diagnostic performance and perform ablation studies to validate the effectiveness of the proposed dual-attention module in improving the performance. Finally, class activation mapping (CAM) [27] is employed to visualize the discriminative regions of these networks in diagnosing COVID-19, which helps to explore the interpretability of the different methods.

A. Dataset and Metrics

In this paper, we evaluate our proposed method on a large publicly available CT dataset provided by CC-CCII. The CT dataset contains a total of 4,178 chest CT images from 2,742 patients, including 1,544 CT images from 929 COVID-19 patients, 1,556 CT images from 964 CP patients, and 1,078 CT images from 849 healthy controls. As shown in Table I, we separate the dataset into two parts. The first part (Training set) is used for training, and includes 1,245 COVID-19 images, 1,137 CP images, and 856 images of healthy controls. The second part (Test set) serves for independent testing, and includes 299 COVID-19 images, 419 CP images, and 222 images of healthy controls. In particular, the split is done at the patient level, which means the images of the same subject are kept in the same set for training or testing.

In the training stage, the training set is randomly divided into five folds at the patient level for cross-validation. For evaluation, we use five classification metrics, including the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, and F1-score, to evaluate the performance of the different networks. The mathematical expressions of accuracy, sensitivity, and specificity are:

Accuracy = (TP + TN) / (TP + TN + FP + FN),
Sensitivity = TP / (TP + FN),
Specificity = TN / (TN + FP).

True positives, true negatives, false positives, and false negatives are denoted by TP, TN, FP, and FN, respectively.
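As a trivial illustration, these three quantities follow directly from the confusion-matrix counts:

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # true positive rate (recall)
        "specificity": tn / (tn + fp),   # true negative rate
    }
```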
B. Implementation Details

PyTorch is adopted to implement our proposed method. For training the networks, we use the Adam optimizer [29] to minimize the cross-entropy loss with an initial learning rate of 10^-3. The convolutional layer weights are initialized by Kaiming normal initialization [30] and the biases are set to 0. Besides, we apply a multi-step decay strategy to control the change of the learning rate during training: the learning rate is reduced every 30 epochs with a decay factor of 0.1. All models are trained from scratch using two NVIDIA Tesla P40 graphics processing units. Given the limitation of GPU memory, the batch size is set to 8 and the size of all images is fixed to 64 × 224 × 224 by under-sampling or up-sampling. In each fold, the model is evaluated on the validation set at the end of each training epoch, and finally the best model within 80 epochs is evaluated on the independent test set. To alleviate the overfitting problem, we conduct online data augmentation including random flipping, rotation, translation, and scaling. The code used in the experiments is available at https://github.com/shijun18/COVID-19 CLS.

C. Overall Performance

We compare the performance of DARNet with four existing methods. For a fair comparison, the test sets used by these methods are also from the same CT dataset provided by CC-CCII, and we directly quote the results reported in the related papers. As shown in Table II, DARNet achieves the best performance on four indicators, with a sensitivity of 96.86%, a specificity of 97.19%, an F1-score of 95.49%, and an AUC of 0.995. As for accuracy, the performance of DARNet is slightly lower than that of [15]. In particular, [15] proposed an ensemble learning method using multiple classifiers to make the diagnostic decision; although this method has high accuracy, it is also demanding on the classifier design and integration strategy. [14] provides a benchmark for COVID-19 detection using deep learning models. The benchmark tests multiple models, and we select the best-performing one for comparison. According to its results, we observe that directly transferring 2D neural networks to classify 3D CT images is of limited effectiveness; the main reason is that this approach ignores the rich spatial information preserved in the 3D structure. Moreover, [11] and [28] are segmentation-based methods, which rely heavily on accurate segmentation of the lesions. However, these multi-stage frameworks often suffer from error propagation; for example, incorrect segmentation results can directly have a negative impact on subsequent tasks. In contrast, DARNet is an end-to-end model that avoids this problem. Besides, the proposed dual-attention module can effectively improve the feature extraction ability of the model, which helps to obtain higher classification performance than naive CNN-based methods. The results in Table II prove the superiority of DARNet in identifying COVID-19 from CP and healthy people.

D. Ablation Study

The overall experiments have proved the superiority of DARNet. However, which module plays a more important role in the performance improvement is still unclear. Therefore, we conduct an ablation study to validate the effectiveness of each module, including the CA, DA, and dual-attention modules. Table IV quantitatively compares the performance of the different networks on the independent test set. For COVID-19 versus the other two classes (CP and healthy controls), DARNet achieves the highest AUC, accuracy, sensitivity, and F1-score. Meanwhile, DARNet obtains the best results on all performance indicators for the three-way classification. The results of the ablation experiments reveal the importance of each part. According to the results, we can observe varying degrees of decline in model performance when modules are removed. Among them, the dual-attention module has the biggest impact on the model performance: by applying the dual-attention module, DARNet achieves a significant improvement on all performance indicators, while the parameter count is only increased by about 6.4%, as shown in Table III. Moreover, removing the CA or DA module alone also has a negative impact on network performance. These observations further prove the effectiveness of the dual-attention module.

E. Visualization Analysis

To further explore the interpretability of DARNet, we employ CAM [27] to visualize the discriminative regions of different networks in diagnosing COVID-19. Fig. 2 shows the visualization results on three COVID-19 cases with different degrees of infection (mild, moderate, and severe), highlighting the regions that the network focuses on when making decisions. We observe that DARNet can accurately locate lung lesions that vary greatly in size and distribution. However, after removing the CA or DA module, the localization ability of the network declines significantly. For instance, for the severe COVID-19 case in Fig. 2, we can see diffuse lesions in both lungs and consolidation of the lower lobe of the left lung. When we remove the CA and DA modules in turn, the highlighted area in the right lung gradually shrinks. In particular, the network without these two modules has very low sensitivity to the lesions, and may even be disturbed by information outside the lung area. The above results demonstrate that the DA and CA modules can enhance the learned features to ensure that, to a certain extent, the decisions made by the network depend mainly on the infection regions rather than on irrelevant parts of the images. More importantly, the results also show that DARNet has better interpretability and reliability in diagnosing COVID-19.

V. CONCLUSION

In this work, we proposed a dual-attention residual network that realizes the automatic and accurate diagnosis of COVID-19 using 3D chest CT images. In our method, we constructed the dual-attention module by combining the CA and DA modules to refine the hidden features by adaptively assigning weights during training. This module can effectively improve the classification performance and interpretability of 3D ResNet, while only slightly increasing the computational complexity. We evaluated our method on a large public CT dataset, achieving state-of-the-art results. To further explain the decisions of the proposed method, we showed visual evidence revealing the discriminative regions used by the model for diagnosis. In future work, we will further investigate the generalization capability of the proposed method. Besides, more work will be devoted to analyzing the relationship between these discriminative regions and the image findings.
Wavelength Assignment Algorithm for Optical Networks

The wavelength assignment problem is one of the important problems in optical networks: in the first stage the route through the optical network is selected, and after the route is selected a wavelength has to be assigned to that route. In this paper we propose a wavelength assignment technique for better performance of the optical network. The results show that it performs better than the conventional algorithms.

Wavelength Division Multiplexing

Theoretically, fiber has extremely high bandwidth (about 25 THz) in the 1.55-micron low-attenuation band, which is thousands of times the total radio bandwidth on the planet Earth [3]. However, only speeds of a few gigabits per second are achieved, because the rate at which an end user (a workstation) can access a network is limited by electronic speed to a few gigabits per second. Hence, it is extremely difficult to exploit all the bandwidth of a single fiber using a single high-capacity wavelength channel, due to the optical-electronic bandwidth mismatch or "electronic bottleneck." The recent breakthroughs (Tb/s) are the result of two major developments: WDM, which is a method of sending many light beams of different wavelengths simultaneously down the core of an optical fiber, and the EDFA, which amplifies signals at different wavelengths simultaneously regardless of their modulation scheme or speed. WDM is essentially the same as frequency division multiplexing (FDM), which has been used in radio systems for more than a century. WDM systems use carrier waves whose frequencies are higher than those of FDM channels by a factor of about a million (THz versus MHz). Within each WDM channel, it is possible to have FDM, where the channel bandwidth is subdivided into many radio-frequency channels, each at a different frequency; this is called subcarrier multiplexing. A wavelength can also be shared among many nodes in a network by electronic time division multiplexing. Note that WDM eliminates the electronic bottleneck by dividing the optical transmission spectrum (the 1.55-micron band) into a number of non-overlapping wavelength channels. These channels coexist on a single fiber, with each wavelength supporting a single communication channel operating at peak electronic speed. The attraction of WDM is that a huge increase in available bandwidth can be obtained without the huge investment necessary to deploy additional optical fiber. The DWDM technique effectively increases the total number of channels in a fiber by using very narrowly spaced channels [54]. Typically, channel spacing ranges from 0.4 nm to 4 nm.

WDM Optical Network

A WDM optical network is a network of computers in which the backbone is optical fiber cable and the mode of transmission is wavelength division multiplexing. The information streams from multiple sources are optically combined by the star, and the signal power of each stream is split and forwarded to all nodes through their fibers. Communication between source and destination may be either single-hop or multi-hop.

2. Literature Review

Z. Zhang et al. [3] presented a heuristic algorithm for effective assignment of a limited number of wavelengths among the access stations of a multi-hop network where the physical medium consists of optical fiber segments interconnecting wavelength-selective optical switches. Poompat Saengudomlert et al. [4] developed an on-line wavelength assignment algorithm for a wavelength-routed WDM tree network.
The algorithm dynamically supports all k-port traffic matrices among end nodes. Implementation of the proposed wavelength assignment algorithm was also demonstrated using a hybrid wavelength-routed/broadcast tree with only one switching node connecting several passive broadcast sub-trees. Junjun Wan et al. [5] proposed a wavelength assignment algorithm based on the method called Dynamic Preferred Wavelength Sets (D-PWS). They also described the basic architecture of an optical burst switching network based on Dynamic Wavelength Routing (DWR), under which the guarantee of quality of service in the DWR-OBS network was discussed. They then focused on two aspects, the transmission latency of the data packets and the blocking probability, leading to a quantitative description of the transmission latency and the size of the edge node buffer. F. Matera et al. [6] showed how to obtain a wavelength assignment in a wide geographical transport network connecting the main cities of Europe when all-optical wavelength converters are introduced in the network nodes. They also reported an investigation of 40 Gb/s transmission performance in the presence of all-optical wavelength converters based on four-wave mixing in semiconductor optical amplifiers and on difference frequency generation in periodically poled lithium niobate waveguides. Anwar Alyatama [7] used random and first-fit wavelength assignment approaches to present an approximate analytical method and evaluated the blocking probabilities in wavelength division multiplexing networks without wavelength converters. The new approach viewed the WDM network as a set of different layers (colours) in which blocked traffic in one layer is overflowed to another layer. Blocking probabilities in each layer of the network are derived from an exact approach, and a moment matching method is then used to characterise the overflow traffic from one layer to another. Raja Datta et al. [8] presented a wavelength assignment algorithm for the optimal assignment of a single wavelength to single-hop traffic in a tree topology. The work was further extended to wavelength assignment in a general graph. This polynomial-time algorithm gave an optimal solution to the routing and wavelength assignment problem in a tree topology. P. Rajalakshmi et al. [9] proposed a new wavelength assignment technique called the wavelength reassignment algorithm, in which, when a new call gets blocked due to the wavelength continuity constraint, the already established calls are reassigned wavelengths so as to create a wavelength-continuous route in order to accommodate the new call. During wavelength reassignment the routes of all calls remain the same, i.e., no rerouting is done. The problem of enhancing the blocking performance in circuit-switched wide-area optical wavelength-division multiplexed networks with no wavelength conversion at the nodes was also considered. I. Alfouzan et al. [10] introduced two new wavelength assignment reconfiguration algorithms, the One-Directional Transfer (1DT) and the Two-Directional Transfer (2DT) algorithms. The simulation results for both algorithms were shown to outperform the existing algorithms in terms of the trade-off. Abhisek Mukherjee et al. [11] proposed a new wavelength conversion algorithm in a DWDM network using online routing. The model for the algorithm was theoretically developed and the corresponding call connection probability was calculated. The limitation on the number of wavelength conversions was addressed by fixing the maximum number of wavelength conversions allowed for the transmission of a single packet over the network. Nen-Fu Huang et al. [12] proposed an efficient distributed Wavelength Reusing/Migrating/Sharing Protocol (WRMSP) for Dual Bus Lightwave Networks (DBLN). This protocol consists of three efficient schemes for carrying out wavelength reusing, migration, and sharing, respectively. Mahesh Sivakumara et al. [13] studied the effect of wavelength conversion on the blocking performance of connections with multiple rates. The blocking performance of the TDM wavelength routing network was evaluated through simulations.

Proposed Algorithm

In this section we propose an improvement of the least-used wavelength assignment algorithm. In this algorithm, least-used wavelength assignment is executed until blocking occurs. When a call is blocked, wavelength conversion is introduced, and hence the blocking probability is reduced. If full wavelength conversion is used after the least-used wavelength assignment algorithm, the blocking probability is reduced to a very large extent, reaching the minimum possible value; however, as full wavelength conversion is costlier than sparse wavelength conversion, sparse wavelength conversion is employed in this proposed algorithm. The proposed algorithm can thus be summarized as follows: least-used assignment is attempted first, and only when the call would otherwise be blocked is sparse wavelength conversion invoked to establish it.
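To make the procedure concrete, a minimal sketch is given below. It assumes a route represented as a list of links, a network-wide usage count per wavelength (for the least-used ordering), the set of wavelengths already occupied on each link, and the positions along the route where sparse converters are available; all names and data structures are our own illustration of the prose above, not code from the paper:

```python
def least_used_assignment(route_links, usage, busy):
    """Return a wavelength free on every link of the route, trying the
    least-used wavelengths (network-wide) first; None signals blocking."""
    for w in sorted(usage, key=usage.get):
        if all(w not in busy[link] for link in route_links):
            return w
    return None

def assign_with_sparse_conversion(route_links, usage, busy, converter_positions):
    """Least-used assignment first; on blocking, split the route at sparse
    converter nodes so each segment only needs its own continuous wavelength.
    converter_positions: indices i such that a converter follows route_links[i]."""
    w = least_used_assignment(route_links, usage, busy)
    if w is not None:
        return [w] * len(route_links)          # wavelength-continuous path found
    assignment, segment = [], []
    for i, link in enumerate(route_links):
        segment.append(link)
        if i in converter_positions or i == len(route_links) - 1:
            w = least_used_assignment(segment, usage, busy)
            if w is None:
                return None                    # blocked even with conversion
            assignment += [w] * len(segment)   # one wavelength per segment
            segment = []
    return assignment
```

Here `usage` maps each wavelength to its network-wide usage count and `busy` maps each link to the set of wavelengths already carried on it; the returned list gives the wavelength used on each link of the route.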
Results and Discussion

In this section, the simulation results of the proposed improved wavelength conversion algorithm are presented, and its blocking probability is compared with that of the conventional algorithms. The simulation is carried out in MATLAB 7.2 from MathWorks. The blocking probability of the network is compared with respect to the number of channels, the load, and the number of links. The performance of the proposed wavelength assignment algorithm is evaluated in terms of blocking probability and fairness, and the results are shown in Figures 1-9. In the first phase we varied the number of wavelengths while keeping the other parameters constant: the number of channels was fixed to 20, the total number of links in the network to 20, and the maximum load per unit link to 10 Erlangs, while the number of wavelengths used was increased from 20 to 50 in Figures 1 to 4. The results shown in Figures 1-5 prove that the blocking probability of the proposed algorithm decreases as the number of wavelengths increases. Further, in Figures 6-9 the load per unit link is increased while keeping the other parameters constant; the results show that as the load increases, the blocking probability of the network increases for the proposed algorithm.

Conclusion

This paper deals with the wavelength assignment problem, for which we have proposed an improved wavelength conversion algorithm. The performance of this algorithm is evaluated on the basis of blocking probability and fairness. The results show that the proposed algorithm is better in terms of blocking performance. In future work this algorithm can be compared with further conventional algorithms.
Immunotherapy targeting the obese white adipose tissue microenvironment: Focus on non-communicable diseases

Obesity triggers inflammatory responses in the microenvironment of white adipose tissue, resulting in chronic systemic inflammation and the subsequent development of non-communicable diseases, including type 2 diabetes, coronary heart disease, and breast cancer. Current therapy approaches for obesity-induced non-communicable diseases persist in prioritizing symptom remission while frequently overlooking the criticality of targeting and alleviating inflammation at its source. Accordingly, this review highlights the importance of the microenvironment of obese white adipose tissue and the promising potential of employing immunotherapy to target it as an effective therapeutic approach for non-communicable diseases induced by obesity. Additionally, this review discusses the challenges of, and offers perspectives on, immunotherapy targeting the microenvironment of obese white adipose tissue.

Introduction

Obesity, a chronic disease characterized by the excessive accumulation of adipose tissue, is frequently underestimated. Consequently, the prevalence of obesity has been continuously rising over the years, and it is projected that more than 1.5 billion adults will be categorized as obese by 2035 [1]. Furthermore, obesity is a multifactorial disease that arises from multiple etiological factors and manifests in various pathological features. The etiology of obesity includes obesogenic environments, psycho-social factors, and genetic variants [2]. The pathological manifestations of obesity extend beyond the boundary of white adipose tissue (WAT) and include various other tissues, including bone, nerve, and intestinal epithelial tissues. Obesity has been found to promote the formation of osteoclasts, leading to a 37.5 % reduction in bone strength among obese patients [3,4]. Moreover, the brain of obese patients exhibits a significant alteration in the structure of the white matter due to obesity-induced demyelination [5]. Obesity not only affects the structure of neural tissue but also leads to alterations in the structure and permeability of the intestinal epithelial tissue. The observed phenomena may be attributed to the obesity-induced proliferation of intestinal stem cells and claudin-2-mediated restructuring of tight junctions [6,7].
The major pathological manifestation of obesity is most prominent in WAT, which defines obesity as a disease linked to the abundance of adipose tissue. Furthermore, the crucial role of WAT in determining the health consequences of obesity demonstrates that the impact of obesity on WAT is more significant than its impact on other tissues [8]. A recent study has demonstrated that when WAT accommodates overnutrition by undergoing adipocyte hyperplasia, which involves an increase in the number of cells, obese patients exhibit normal levels of inflammatory markers [9]. These patients are classified as having metabolically healthy obesity (MHO), a distinct phenotype observed in obese patients who do not exhibit the typical indications of dyslipidemia or hypertension [10]. Conversely, when WAT accommodates overnutrition by enlarging the size of adipocytes, these hypertrophic adipocytes trigger immune system responses by secreting pro-inflammatory factors [11]. As a consequence, the WAT microenvironment (WATME) undergoes a substantial alteration, shifting from an anti-inflammatory state, which is discernible in lean WATME, to a pro-inflammatory state characteristic of obese WATME [12].

The strong correlation between inflammation occurring within obese WATME and the development of obesity-related comorbidities, particularly non-communicable diseases (NCDs), has been extensively studied [13,14]. Empirical evidence unequivocally demonstrates that obesity significantly affects the development of NCDs. Obesity, at varying degrees of severity, decreases the number of NCD-free years by 3-9 years in persons aged 40 and above [15]. Furthermore, the 'WHO Acceleration Plan to Stop Obesity' received official endorsement from the World Health Organization (WHO) during the 75th World Health Assembly in 2022. The objective of this plan is to accomplish Sustainable Development Goal 3.4, which involves a 33 % reduction in premature deaths caused by NCDs by the year 2030 [16]. The proposition is supported by empirical data revealing that in 2019, obesity accounted for approximately 18 % of all preventable NCD-related mortalities and was the direct cause of the premature death of approximately five million individuals [17]. Additionally, obesity demonstrates a remarkable correlation with the development of type 2 diabetes (T2D), coronary heart disease (CHD), and breast cancer (BC), in comparison to other NCDs [18][19][20].

Accordingly, immunotherapy that works at the site of the immune system response, referred to as "WATME-specific immunotherapy", offers outstanding potential as a therapeutic approach for treating obesity-induced T2D, CHD, and BC (Fig. 1). WATME-specific immunotherapy can be classified into two categories depending on how it affects the immune cell composition and inflammatory state of obese WATME: direct immunotherapy, which works directly on cells or signaling pathways, and indirect immunotherapy, which works indirectly by targeting the secreted cytokines. Notwithstanding the encouraging prospects, there has been a notable absence of a comprehensive review focusing on the implementation of WATME-specific immunotherapy.
Hence, in this review, we provide a concise summary of the mechanisms by which inflammation manifests in obese WATME and eventually leads to the development of obesity-induced T2D, CHD, or BC. This review covers the status of WATME-specific immunotherapy and the implementation of this approach in clinical studies. Lastly, we discuss the existing challenges encountered in the field and the perspectives for WATME-specific immunotherapy.

Pathogenesis of obesity-induced NCDs

The strong correlation between obesity and the development of type 2 diabetes (T2D), coronary heart disease (CHD), or breast cancer (BC) is supported by statistical evidence indicating that a significant majority, surpassing 80 %, of individuals afflicted with T2D or CHD also suffer from obesity [18,20]. Furthermore, empirical data indicate that premenopausal women who are obese face a significantly elevated risk of developing BC, with the risk being 80 % higher compared to non-obese women [19]. The primary reason for the development of these NCDs in obese patients is the direct involvement of obese WATME in facilitating insulin resistance and atherosclerosis and in upregulating estrogen levels through the secretion of pro-inflammatory factors [21][22][23].

Obese WATME

Obesity has been widely recognized as a significant contributor to the expansion of WAT as a result of the increased demand for storing excess nutrients [8]. In obesity, hypertrophic adipocytes experience mechanical stress as a consequence of the persistent expansion of WAT via adipocyte hypertrophy, constrained as they are by the complex arrangement of adipocytes within a densely interconnected extracellular matrix (ECM) [12,24]. Furthermore, it has been observed that hypoxia frequently accompanies the expansion of WAT [25]. Hypoxia exerts significant impacts on hypertrophic adipocytes by inducing multiple cellular stresses. In addition to its well-known role in causing oxidative stress, hypoxia also leads to mechanical stress by increasing the degree of cross-linking of collagen fibers in the ECM through the upregulation of the lysyl oxidase enzyme [26]. The obesity-induced cellular stresses have been observed to cause notable alterations in the secretomes of hypertrophic adipocytes and subsequently lead to the death of these cells [12]. Adipocyte death initiates inflammatory signaling cascades, which then induce the infiltration of immune cells and ultimately lead to alterations in the secretory profile of WAT [11]. The interrelated consequences of these chain reactions eventually culminate in inflammation, which is a distinctive characteristic of obese WATME (Fig. 2).
The death of hypertrophic adipocytes in obese WATME triggers the metabolic activation of nearby anti-inflammatory phenotype (M2-like) macrophages into pro-inflammatory phenotype (M1-like) macrophages [36]. This polarization is facilitated through the toll-like receptor 4 (TLR4)/MyD88/IκB kinase (IKKβ)/NF-κB pathway, which is activated in response to damage-associated molecular patterns (DAMPs) and free fatty acids (FFAs) [37,38]. Furthermore, the dead hypertrophic adipocytes leave behind large lipid remnants that hamper efficient efferocytosis by a single M1-like macrophage [39], hence stimulating the recruitment of other immune cells. Circulating monocytes are recruited to the site through a complex interaction involving dead adipocytes, M1-like macrophages, and CD8+ T cells, resulting in the formation of a crown-like structure (CLS) [33,40]. Additionally, obese WATME contains a substantial accumulation of M1-like macrophages, which comprise as much as 60 % of immune cells, compared to a mere 10 % in lean WATME [21,41]. The accumulation of M1-like macrophages is facilitated by neutrophil-derived elastase, interferon-gamma (IFN-γ) secreted by NK cells, the differentiation of recruited monocytes into M1-like macrophages, the proliferation of M1-like macrophages mediated by CCL2, as well as increased tissue retention of M1-like macrophages mediated by netrin-1 [33,[42][43][44].

These inflammatory responses observed in obese WATME establish a positive feedback loop that ultimately contributes to the development of chronic systemic inflammation. The collaboration between M1-like macrophages and mast cells plays a significant role in the promotion of fibrosis, resulting in increased adipocyte death [45]. The rise in adipocyte death leads to elevated secretion of DAMPs and FFAs, subsequently triggering the activation of the NF-κB pathway. The activated NF-κB upregulates the expression of CCL2, IL-6, IL-12, IL-1β, IL-18, and TNF-α [46]. These cytokines possess the capability to stimulate the NF-κB pathway, thereby leading to elevated secretion of pro-inflammatory cytokines. As a result, a positive feedback loop is initiated when these cytokines attract circulating monocytes, which subsequently undergo differentiation into M1-like macrophages. Consequently, M1-like macrophages facilitate the development of chronic inflammation in the obese WATME, promoting the infiltration of additional immune cells. These immune cells contribute to the development of chronic systemic inflammation through the secretion of pro-inflammatory factors, intercellular adhesion molecule (ICAM), and vascular cell adhesion molecule (VCAM) [12,31,33,44,47,48].

Obese WATME-induced NCDs

Obese WATME-induced chronic systemic inflammation plays a crucial role in the development of obesity-induced type 2 diabetes (T2D), coronary heart disease (CHD), and breast cancer (BC) (Fig. 3).
The inflammatory responses within obese WATME significantly contribute to the development of insulin resistance, which in turn leads to the depletion of β-cells and ultimately results in chronic hyperglycemia and the onset of T2D [49]. Obesity-induced CHD typically occurs through the development of atherosclerosis, which is facilitated by the chronic systemic inflammation induced by obese WATME [22,50]. In the context of obesity-induced BC, the pro-inflammatory factors secreted by obese WATME can upregulate the expression of the aromatase enzyme [51,52]. The aromatase enzyme is responsible for the production of estrogen, a hormone closely associated with the development of estrogen receptor (ER)-positive breast cancer [53].

Obese WATME facilitates the development of obesity-induced T2D by reducing the insulin sensitivity of adipocytes, liver, and skeletal muscle through activation of the mitogen-activated protein kinase (MAPK) and Janus kinase (JAK)/signal transducers and activators of transcription (STAT) pathways (Fig. 3a) [54,55]. Both TNF-α and IL-1β secreted by obese WATME have been shown to activate the MAPK pathway, but their inhibition of the insulin signaling pathway operates through distinct molecular mechanisms. TNF-α inhibits the insulin signaling pathways by affecting phosphoinositide-3 kinase (PI3K)/protein kinase B (PKB) signaling through the activation of MAPK/c-Jun N-terminal kinase (JNK) [56,57]. The inhibitory effects of IL-1β on the insulin signaling pathways have been observed to be mediated by downregulation of the expression of insulin receptor substrate-1 (IRS-1) through the MAPK/extracellular-signal-regulated kinase (ERK) signaling pathway [58]. Meanwhile, the JAK/STAT signaling pathway is activated by IL-6 and IFN-γ. IL-6 activates the JAK/STAT3 pathway, which causes an upregulation of the expression of suppressor of cytokine signaling-3 (SOCS3) [59,60]. On the other hand, IFN-γ activates the JAK1/JAK2/STAT1 pathway, resulting in increased expression of SOCS1 [61,62]. These SOCS proteins promote the degradation of IRS and inhibit the insulin signaling pathway [63]. The reduced insulin sensitivity caused by obese WATME leads β-cells to secrete elevated levels of insulin to counteract the insulin resistance [64]. However, persistent insulin resistance due to obese WATME-induced chronic systemic inflammation ultimately results in the exhaustion of β-cells and the development of obesity-induced T2D [49].
The pathogenesis of obesity-induced CHD is a complex process involving multiple mechanisms facilitated by obese WATME that ultimately lead to the development and progression of atherosclerosis. Obese WATME induces chronic systemic inflammation that triggers endothelial cells (ECs) to secrete leukocyte adhesion molecules such as ICAM1 and VCAM1. These leukocyte adhesion molecules can facilitate circulating monocytes, T cells, and B cells to adhere and migrate into the intima layer of the blood vessel wall [65][66][67], thereby initiating the formation of an atheroma. Additionally, obese WATME contributes to the progression of atherosclerosis by inducing EC dysfunction and vasoconstriction through the secretion of pro-inflammatory factors (Fig. 3b). IL-6 is discovered to substantially contribute to obese WATME-induced EC dysfunction by promoting the production of reactive oxygen and nitrogen species (RONS). IL-6, IL-1β, and IFN-γ stimulate reactive nitrogen species (RNS) production by activating the NF-κB pathway, which leads to the upregulation of inducible nitric oxide synthase (iNOS) enzyme expression [68]. Moreover, IL-6 along with angiotensin II (Ang II) promotes reactive oxygen species (ROS) production by increasing the expression of the NADPH oxidase 2 (NOX2) enzyme [69]. These RONS inhibit the function of the endothelial NOS (eNOS) enzyme in producing nitric oxide (NO), resulting in the inhibition of vasorelaxation and angiogenesis [70]. In the context of obese WATME-induced vasoconstriction, it is discovered that vasoconstriction is facilitated by TNF-α, which promotes the synthesis of a strong vasoconstrictor called endothelin-1 (ET-1) through the MAPK/ERK pathway [71,72].

In obesity-induced BC, obese WATME secretes IL-6 and TNF-α, which work through distinct cellular pathways to upregulate the expression of aromatase enzymes. IL-6 triggers BC cells to secrete prostaglandin E2 (PGE2), which subsequently leads to the upregulation of aromatase enzyme expression in breast adipose stromal cells (ASCs) [51,73,74]. On the other hand, TNF-α directly affects breast ASCs by activating the MAPK/ERK1/2 signaling pathway [52,75]. This obese WATME-induced upregulation of estrogen levels facilitates the development of estrogen receptor (ER)-positive BC through both genomic and non-genomic mechanisms [76]. Furthermore, pro-inflammatory factors secreted by obese WATME not only stimulate the development of BC but also facilitate the progression and metastasis of BC (Fig. 3c). It is discovered that when IL-1β binds to its receptor, it facilitates angiogenesis by inducing BC cells to secrete vascular endothelial growth factor (VEGF) through the activation of the MAPK/p38 and phosphoinositide 3-kinase (PI3K)/protein kinase B (PKB) signaling pathways [77,78]. Moreover, TNF-α and IL-6 promote the progression and epithelial-to-mesenchymal transition (EMT) of BC cells by activating the JAK/STAT3 signaling pathway [79,80]. The activated STAT3 pathway induces the expression of target genes associated with apoptosis, proliferation, angiogenesis, invasiveness, and metastasis [81,82]. Furthermore, upon phosphorylation, STAT3 translocates to the nucleus and elicits the transcriptional upregulation of the TWIST and SNAIL genes, which are recognized as key regulators of the EMT in cancer cells [83,84].
Taken together, the inflammatory responses that occur in obese WATME play a crucial role in the development of obesity-induced T2D, CHD, or BC. The interrelationships between this inflammation and obesity-induced NCDs emphasize the potential of targeting the obese WATME as a promising immunotherapy target for treating these diseases.

WATME-specific immunotherapy for obesity-induced NCDs

WATME-specific immunotherapy, which modulates the inflammatory responses in obese WATME to alleviate the chronic systemic inflammation, demonstrates therapeutic efficacy for obesity-induced type 2 diabetes (T2D), coronary heart disease (CHD), and breast cancer (BC) (Table 1). The efficacy of WATME-specific immunotherapy in targeting obese WATME is readily apparent from the reduction in body weight (BW) or fat mass, as well as from alterations in the secretomes, cellular composition, and signaling pathway cascades of obese WATME. Here, recent advances in WATME-specific immunotherapy for the treatment of obesity-induced NCDs are reviewed.

WATME-specific immunotherapy for obesity-induced type 2 diabetes (T2D)

In order to effectively treat obesity-induced type 2 diabetes (T2D), it is essential to suppress the secretion of pro-inflammatory cytokines, including TNF-α, IL-6, IL-1β, and IFN-γ, by obese WATME. These pro-inflammatory cytokines facilitate the development of insulin resistance, subsequently resulting in the occurrence of hyperinsulinemia. The prolonged hyperinsulinemia caused by obese WATME-induced chronic systemic inflammation leads to β-cell dysfunction and ultimately to the development of T2D. Therefore, inhibiting the secretion of pro-inflammatory cytokines by employing WATME-specific immunotherapy, which involves inhibiting inflammatory signaling pathways, enzymes, receptors, and senescent cells, has emerged as an effective approach for treating obesity-induced T2D (Fig. 4, Table 1).

Sulforaphane (SFN)

Sulforaphane (SFN), a phytochemical present in cruciferous vegetables, demonstrates promising therapeutic potential as a direct WATME-specific immunotherapy for obesity-induced T2D. Observations have shown that SFN treatment effectively reduces insulin resistance, as indicated by a decrease in fasting blood glucose levels in the SFN group [127]. SFN enhances insulin sensitivity by working directly on adipocytes in obese WATME through the inhibition of the JAK2/STAT3/SOCS3 signaling pathway. Additionally, SFN inhibits the expression of sterol regulatory element-binding protein-1c (SREBP-1c), a transcription factor responsible for lipid and cholesterol synthesis [128]. This inhibition results in the suppression of adipocyte hypertrophy, improvement of the lipid profile, and reduction in body weight. SFN also ameliorates the inflammatory responses in obese WATME by inhibiting the NF-κB signaling pathway, therefore decreasing the secretion of pro-inflammatory factors such as IL-22, IL-6, and leptin [129,130].
Formononetin (FNT)

Formononetin (FNT), an estrogen-resembling compound derived from plants, inhibits adipogenesis by inhibiting the activity of various adipogenic genes such as PPAR and CCAAT/enhancer-binding protein alpha (C/EBP-α). Also, it inhibits adipocyte hypertrophy by suppressing intracellular triglyceride accumulation [85]. Furthermore, FNT treatment has been found to downregulate the secretion of pro-inflammatory cytokines such as TNF-α, IL-1β, IL-6, and IFN-γ, while upregulating the secretion of the anti-inflammatory cytokine IL-10 from obese WATME. This effect is achieved by inhibiting the MyD88- or TRIF-mediated MAPK/ERK and MAPK/JNK pathways. Accordingly, the effects of FNT on obese WATME lead to weight loss, increased energy consumption, and an improved lipid profile [131]. In addition, it stimulates the upregulation of SIRT1 expression in the pancreas, resulting in a synergistic effect on the treatment of obesity-induced T2D through mitigating insulin resistance and hyperglycemia [86]. These findings demonstrate that FNT is an efficacious direct WATME-specific immunotherapy for the treatment of obesity-induced T2D.

Coffee silverskin (CSE) and husk (CHE)

Coffee silverskin (CSE) and husk (CHE) aqueous extracts demonstrate anti-inflammatory properties in obese WATME by inhibiting the crosstalk between M1-like macrophages and hypertrophic adipocytes. Both cell types in obese WATME secrete fewer pro-inflammatory cytokines, such as TNF-α and CCL2, because the crosstalk is suppressed through the NF-κB and JNK pathways [88]. Moreover, treatment with CSE and CHE extracts inhibits the formation of hypertrophic adipocytes in obese WATME by upregulating the expression of the PPARG coactivator 1 alpha (PGC1α) and uncoupling protein 1 (UCP1) genes, which promote browning and increase thermogenesis [132]. Furthermore, these extracts directly enhance insulin sensitivity in adipocytes by stimulating the PI3K/AKT signaling pathway [88]. The ability of the phenolic compounds found in CSE and CHE extracts to mitigate insulin resistance by alleviating the inflammatory responses in obese WATME demonstrates the promising therapeutic potential of direct WATME-specific immunotherapy for the treatment of obesity-induced T2D.

Ramulus mori (Sangzhi) alkaloids (SZ-A)

Sangzhi alkaloids (SZ-A) can inhibit the formation of hypertrophic adipocytes in obese WATME by upregulating the expression of lipolysis-related enzymes such as adipose triglyceride lipase (ATGL) and hormone-sensitive triglyceride lipase (HSL) [89]. Also, SZ-A treatment inhibits the p38 MAPK, ERK, JNK, and TLR signaling pathways of M1-like macrophages, leading to an alleviation of inflammation in obese WATME. SZ-A treatment can induce improvements in the lipid profile, decreases in body weight, and reductions in the levels of inflammatory biomarkers such as plasminogen activator inhibitor-1 (PAI-1), angiotensin II (Ang-II), and leptin. In addition, SZ-A treatment increases the levels of anti-inflammatory factors including IL-4, IL-10, IL-13, and adiponectin. Accordingly, SZ-A has been demonstrated to be an effective direct WATME-specific immunotherapy, leading to the amelioration of inflammation in obese WATME and the treatment of obesity-induced T2D.
WATME-specific immunotherapy for obesity-induced coronary heart disease (CHD)

Obese WATME-induced chronic systemic inflammation plays an important role in the development of obesity-induced coronary heart disease (CHD). WATME-specific immunotherapy that targets obese WATME to alleviate chronic systemic inflammation, either by directly affecting cells or indirectly by acting on the secreted pro-inflammatory cytokines, has emerged as a promising therapeutic approach for obesity-induced CHD (Table 1). Direct WATME-specific immunotherapy demonstrates efficacy in the treatment of obesity-induced CHD by targeting the inflammasome, inflammatory signaling pathways, and receptors. Conversely, indirect WATME-specific immunotherapy exhibits favorable outcomes through targeting pro-inflammatory factors, including TNF-α, proprotein convertase subtilisin/kexin type 9 (PCSK9), and apolipoprotein B100 (ApoB100) (Fig. 5).

Bazedoxifene

Bazedoxifene, a selective estrogen receptor modulator, has the potential to improve insulin sensitivity [133,134]. Bazedoxifene enhances insulin sensitivity and prevents abnormal lipid buildup in the liver and skeletal muscle by binding to the estrogen receptor found on enlarged fat cells, which leads to increased fat oxidation and energy expenditure [135]. These results are achieved through the downregulation of lipogenesis-related genes such as fatty acid synthase, lipoprotein lipase, acetyl-coenzyme A (CoA) carboxylase-α and -β, stearoyl-CoA desaturase, fatty acid desaturase, and PPAR-γ [136]. Additionally, a recent investigation revealed that Bazedoxifene inhibits the IL-6/IL-6R/STAT3 signaling pathway in obese WATME, hence disrupting the progression of atherosclerosis in HFD-induced mouse models [137]. It is worth mentioning that Bazedoxifene treatment significantly reduces the concentrations of IL-6 and TNF-α as well as the atherosclerotic plaque. Therefore, Bazedoxifene as a direct WATME-specific immunotherapy demonstrates significant therapeutic potential in treating obesity-induced CHD.

Etanercept (Enbrel®)

Etanercept is a biologically engineered human soluble TNF-α receptor protein that efficiently inhibits the action of TNF-α [138]. Administering Etanercept through subcutaneous injection in a model of diet-induced obesity (DIO) rats effectively alleviates cardiac fibrosis [112]. This is accomplished by inhibiting the activation of JAK/STAT3, a crucial signaling pathway in fibrosis, which is stimulated by TNF-α from obese WATME [139]. Also, Etanercept inhibits the upregulation of the secretion of pro-inflammatory factors, including TNF-α, IL-1β, IL-6, and NF-κB, from obese WATME in high-fat diet (HFD)-fed rodents [140]. Therefore, the alleviation of inflammation in obese WATME through the suppression of TNF-α by Etanercept demonstrates the effectiveness of indirect WATME-specific immunotherapy for treating obesity-induced CHD.
WATME-specific immunotherapy for obesity-induced breast cancer (BC)

It has become increasingly evident that inflammation in obese WATME plays a crucial role in the progression and metastasis of obesity-induced breast cancer (BC). Obese WATME secretes pro-inflammatory cytokines, including IL-6 and TNF-α, which have significant implications in the development, advancement, and metastasis of obesity-induced BC. WATME-specific immunotherapy that aims to mitigate inflammatory responses in obese WATME by targeting inflammatory signaling pathways (NF-κB and STAT) and pro-inflammatory factors (IL-8 and CCL2) has emerged as a potentially effective therapeutic approach in the treatment of obesity-induced BC (Fig. 6). WATME-specific immunotherapy has made notable advancements in the treatment of obesity-induced BC (Table 1).

Niclosamide

Niclosamide, an anthelminthic drug approved by the FDA, inhibits the epithelial-mesenchymal transition (EMT) induced by adipocytes, thereby exerting its anti-breast cancer effects [141]. STAT3 is activated in the tumor microenvironment in response to increased IL-6 secretion. This process contributes to the formation of tumors by regulating important genes that are crucial for apoptosis, survival, proliferation, and metastasis. Therefore, therapeutic approaches encompassing the inhibition of the IL-6/STAT3 signaling pathway and STAT3 phosphorylation in breast cancer cells, such as Niclosamide, can inhibit the adipocyte-induced EMT [142]. Also, it inhibits the adipogenesis of pre-adipocytes present in obese WATME by stimulating AMPK, leading to an elevation in fat oxidation and inhibition of adipocyte hypertrophy [143]. Niclosamide, as a direct WATME-specific immunotherapy, has the potential to effectively treat obesity-induced BC by suppressing EMT and inhibiting the formation of hypertrophic adipocytes.
BZ26

BZ26, a specific antagonist of PPAR-γ, reduces the proliferation and invasion of obesity-induced BC by inhibiting the transformation of mature adipocytes into cancer-associated adipocyte (CAA) cells [144]. Delipidated and reprogrammed CAAs secrete an excess of inflammatory cytokines and proteases to promote tumor survival and growth, thereby fostering an environment that enhances the invasiveness and hostility of cancer cells. Mature adipocytes in HFD-fed mice exhibit a phenotype resembling that of CAAs [145]. However, PPAR-γ inhibition by BZ26 can prevent the differentiation of mature adipocytes into CAAs, thus impeding the progression of obesity-induced breast cancer as well as its metastasis. Also, BZ26 can significantly decrease the levels of inflammatory factors such as IL-1β, IL-6, and CCL2, as well as inhibit NF-κB activity. These effects collectively diminish the inflammatory response in obese WATME [146]. In addition, BZ26 has a regulatory effect on the browning of WAT via alterations in PPAR-γ activity: when BZ26 is administered to HFD-fed rodents, brown adipose-related genes (PPARα, Cidea, and Otop1) are gradually upregulated.

Clinical trials employing WATME-specific immunotherapy for obesity-induced NCDs

Clinical applications of WATME-specific immunotherapy have not yet been explored extensively, despite the encouraging outcomes observed in animal models (Table 2). Numerous clinical trials investigating obesity-induced NCDs do not consider the interrelationship between obesity, inflammation, and those diseases. This is clear from the fact that many of these trials simply evaluate the pathological condition of the diseases and do not incorporate adiposity measurements or assessments of inflammatory biomarkers. Consequently, our approach is to focus on clinical trials that provide substantial evidence of the efficacy of WATME-specific immunotherapy in treating obesity-induced T2D, CHD, and BC. Also, the efficacy of WATME-specific immunotherapy in treating multiple obesity-induced NCDs is examined.

Type 2 diabetes (T2D)

The NCT02964572 trial demonstrates that direct WATME-specific immunotherapy shows a positive outcome in the treatment of T2D in humans [147]. Empagliflozin, a sodium-glucose transport protein 2 (SGLT2) inhibitor, was administered to 29 T2D patients for 60 days. This treatment led to an improvement in insulin sensitivity, as evidenced by a decrease in fasting serum insulin levels and a reduction in the homeostasis model assessment-IR (HOMA-IR) index; in comparison, 32 patients with T2D who received sulfonylurea did not experience the same improvements. Although administered systemically, Empagliflozin effectively targets obese WATME, as evidenced by the weight loss observed in the Empagliflozin group. Empagliflozin was found to improve the inflammation in obese WATME by suppressing the NLR family pyrin domain containing 3 (NLRP3) inflammasome, hence reducing the secretion of IL-1β and IL-18 from obese WATME. Although the decrease in circulating cytokine levels in the Empagliflozin-treated group did not reach statistical significance, Empagliflozin significantly inhibited IL-1β secretion by primary macrophages isolated from this group upon exposure to an NLRP3 inflammasome agonist.
Coronary heart disease (CHD)

The Canakinumab Anti-inflammatory Thrombosis Outcome Study (CANTOS) trial (NCT01327846) demonstrates the efficacy of indirect WATME-specific immunotherapy in the treatment of atherosclerosis [148]. In this trial, patients diagnosed with CHD and a prior myocardial infarction received therapeutic intervention by subcutaneous administration of Canakinumab, a fully human monoclonal antibody (mAb) that specifically targets IL-1β. The purpose of this trial was to evaluate the efficacy of Canakinumab in reducing the incidence of revascularization due to atherosclerosis compared with placebo. Across all Canakinumab doses, the annual incidence of revascularization per 100 people was 2.53, significantly reduced compared to 3.61 in the placebo group. Additionally, IL-1β inhibition led to a decrease in the levels of inflammatory biomarkers in all groups receiving Canakinumab, with higher doses of Canakinumab producing greater reductions in high-sensitivity C-reactive protein (hsCRP) and IL-6 levels. The impact of Canakinumab on obese WATME can be observed through an improvement in the lipid profile, characterized by a greater increase in high-density lipoprotein (HDL) levels compared to low-density lipoprotein (LDL) levels.

Breast cancer (BC)

Results from the upcoming NCT06150898 trial are anticipated to provide valuable insights into the effectiveness of direct WATME-specific immunotherapy as a potential treatment for breast cancer. Ketorolac, a non-steroidal anti-inflammatory drug (NSAID) that inhibits the cyclooxygenase-2 (COX-2) enzyme, will be administered orally to 28 breast cancer patients (14 of them obese) five days before their surgery. The primary objective of this trial is to evaluate the potential effectiveness of Ketorolac in lowering systemic inflammation, metastasis-related biomarkers, and immune cell recruitment in BC. Furthermore, the effect of Ketorolac on obese WATME can be evaluated by assessing adiposity measurements, including body mass index, fat percentage, and waist-to-hip ratio, regardless of the systemic administration of Ketorolac.

WATME-specific immunotherapy for multiple obesity-induced NCDs

Empagliflozin can be used as a multi-target WATME-specific immunotherapy in the treatment of patients with T2D and a high risk of atherosclerotic cardiovascular disease (ASCVD) [149]. The NCT01131676 trial shows that oral Empagliflozin is effective in reducing the risk of ASCVD outcomes and ASCVD mortality compared with placebo. Empagliflozin reduced the incidence rate of 3-point major adverse cardiovascular events (3-point MACE) per 1000 patient-years by 6.5 %. Furthermore, all dosages of Empagliflozin reduced glycosylated hemoglobin (HbA1c) levels, which may be due to improved insulin sensitivity. The effect of Empagliflozin on obese WATME can be observed through the greater weight loss in the Empagliflozin groups compared to the placebo group. Although this trial did not evaluate the effect of Empagliflozin on alleviating inflammatory responses, recent evidence from the NCT02964572 trial supports the effectiveness of Empagliflozin in the treatment of both CHD and T2D due to its ability to alleviate the inflammation in obese WATME.
Challenges

The employment of WATME-specific immunotherapy for treating obesity-induced NCDs is a relatively new and developing field. Recent studies have demonstrated the promising therapeutic potential of WATME-specific immunotherapy for obesity-induced T2D, CHD, and BC. However, the efficacy of WATME-specific immunotherapy is impacted by immunological toxicities, which involve multiple organs and exhibit variability [151]. Although WATME-specific immunotherapy targets specific cytokines, inappropriate inhibition of cytokines can disrupt essential immune responses, thereby elevating susceptibility to infection or exacerbating certain diseases such as cancers and skin lesions, and producing influenza-like symptoms (e.g., shivering, fever, headache, lethargy, anorexia, nausea, and vomiting) [152,153]. Also, more than 30 % of patients treated with immunotherapy suffer from skin toxicities such as rash and mucositis [154]. Moreover, cytokine inhibition can result in unforeseen consequences for the immune system, including the induction of autoimmune responses or immune deficiencies [155]. As immunotherapy causes a significant number of adverse events, it is essential to continuously improve treatment approaches to reduce the incidence of adverse effects and optimize treatment effectiveness. Immunomodulators have often been used alongside immunotherapy to mitigate these adverse effects; immunomodulators alter the immune system to enhance the functionality of the immune response in treating diseases [156]. Additionally, the severity of side effects and the suitability of immunotherapy may vary from person to person. To adapt immunotherapy to the distinctive features of each individual, personalized immunotherapy can be employed by exploiting sequencing technology, metagenomics, and metabolomics [157][158][159].

Although WATME-specific immunotherapy is witnessing a tremendous pace in the testing and approval of new medicines, along with the continuous discovery of innovative strategies to effectively engage the immune system [160,161], it is still at an early stage, with limited clinical evidence supporting its safety, efficacy, and long-term benefits. Therefore, it is essential to increase understanding and knowledge about the clinical manifestations, diagnostic approaches, and treatment strategies used to manage the adverse effects of WATME-specific immunotherapy. Also, designing clinical trials for WATME-specific immunotherapy presents a unique set of challenges, including selecting appropriate endpoints, accurately defining response criteria, and carefully considering the inherent variability within patient populations.
Perspectives

Traditional obesity treatment generally consists of a variety of interventions aimed at preventing, managing, or treating obesity through lifestyle changes, behavioral adjustments, medicines, or surgical procedures [162]. However, obesity is a complicated condition with numerous causes and variables contributing to its development. Moreover, the immune system plays an important role in regulating metabolism, inflammation, and fat accumulation in obesity [163]. As previously shown, many epidemiological investigations have demonstrated a strong association between obese WATME and the development of non-communicable diseases (NCDs), including type 2 diabetes, coronary heart disease, and breast cancer. As a result, WATME-specific immunotherapy is a promising therapeutic approach for obesity-induced NCDs, alleviating inflammation in obese WATME by targeting cells, signaling pathways, and secreted cytokines.

Studies of pharmacotherapy for obesity-induced NCDs have provided valuable insights into successful immunotherapy. Moreover, obesity and obesity-induced NCDs may be treated concurrently, with a synergistic effect, when pharmacotherapy and WATME-specific immunotherapy are combined. Concurrent administration of pharmacotherapy aimed at regulating appetite, adipogenesis, or fat metabolism, together with immunotherapy that specifically targets pathways or cells implicated in inflammation or metabolic dysregulation associated with obesity, could be performed [164]. Integrating immunotherapy and pharmaceuticals in the management of obesity addresses multiple aspects of the condition, including metabolic dysregulation, inflammation, adipose tissue function, and potential obesity-related comorbidities that are affected by obesity-related inflammation.

In addition to NCDs, obesity also affects various metabolic processes in the liver. It is associated with the progression of various liver diseases, including alcoholic liver disease and non-alcoholic fatty liver disease (NAFLD), as well as the development of associated inflammation and steatosis [165]. Chronic liver disease is often attributed to NAFLD, which has also become the fastest-growing cause of hepatocellular carcinoma (HCC) [166]. The development of NAFLD involves the accumulation of TG in the liver [167]. Obesity-related insulin resistance compromises fat storage in adipose tissue, resulting in an accumulation of FFAs in the liver and the development of fatty liver [168]. Hypoxia induced by the fatty liver results in the apoptosis of adipocytes, an augmentation of M1-like macrophage infiltration, and the secretion of pro-inflammatory cytokines, including TNF-α, IL-6, and CCL2 [169]. These pro-inflammatory cytokines induce activation of inflammatory pathways such as NF-κB and JAK/STAT, thereby expediting the progression of liver injury and the disease [170].
In some pharmacotherapies, therapeutic agents that improve insulin sensitivity, including thiazolidinediones (TZDs) such as pioglitazone, have been applied to treat NAFLD [171]. Additionally, bile acid regulation drugs such as obeticholic acid (OCA) and farnesoid X receptor (FXR) agonists have been investigated together with glucagon-like peptide-1 (GLP-1) receptor agonists [172]. Although a variety of treatment options exist, efficacy endpoints may not always be appropriate for the treatment of this multisystem disease, which affects multiple extrahepatic organs. Also, despite the high global prevalence and detrimental effects of NAFLD on life expectancy, there is currently no licensed pharmacotherapy for this liver disease. Therefore, given the intricate and diverse nature of these diseases, the primary goal should be to develop a treatment that efficiently targets distinct aspects of the disease. We suggest that this can be achieved through the design of WATME-specific immunotherapies that specifically target the inflammatory cytokines, cells, or pathways associated with obese WATME, as discussed in this review.

Conclusion

Obesity, a prevalent condition in modern society, can be recognized as a chronic systemic inflammatory disease that may give rise to the development of NCDs. The increase in the prevalence of obesity has greatly increased vulnerability to NCDs around the world. Because inflammation of obese WATME is an important connection between obesity and NCDs, targeting it is a viable treatment approach to obesity-induced NCDs, thereby opening the door to the exploration of WATME-specific immunotherapy. Despite the remarkable progress achieved in WATME-specific immunotherapy, several areas still require additional investigation and development.

Ethics approval and consent to participate

Manuscripts reporting studies involving human participants, human data or human tissue must include a statement on ethics approval: Not applicable. Studies involving animals must include a statement on ethics approval: Not applicable.

Fig. 1. Illustrative representation of WATME-specific immunotherapy for obesity-induced non-communicable diseases. Obesity changes the inflammatory state of the white adipose tissue microenvironment (WATME), leading to a pro-inflammatory state by triggering inflammatory responses mediated by hypertrophic adipocytes. This obese WATME facilitates the development of type 2 diabetes (T2D), coronary heart disease (CHD), and breast cancer (BC) in obese patients. Correspondingly, through the mitigation of inflammation in obese WATME, WATME-specific immunotherapy has the potential to treat obesity-induced non-communicable diseases. Direct immunotherapy: targeting cells or intracellular signaling pathways; indirect immunotherapy: targeting secreted cytokines.
Fig. 2. Schematic illustration of obese WATME. Obesity induces cellular stress, leading to the secretion of pro-inflammatory factors by hypertrophic adipocytes. Prolonged cellular stress ultimately leads to the death of these cells, triggering the recruitment of immune cells surrounding the dead adipocytes and thereby resulting in the formation of crown-like structures (CLS). Moreover, the secretion of free fatty acids (FFA) and damage-associated molecular patterns (DAMPs) by these adipocytes initiates the activation of nuclear factor kappa B (NF-κB) signaling. The activation of the NF-κB downstream signaling pathway leads to the infiltration and stimulation of immune cells to secrete pro-inflammatory cytokines. These alterations in the cellular composition, secretory profile, and inflammatory state of WAT result in obese WATME.

Fig. 3. Illustrative representation of molecular mechanisms of obesity-induced non-communicable diseases. a, obesity-induced type 2 diabetes. (i) The insulin signaling pathway begins with phosphorylated IRS1/2, which activates the PI3K-Akt/PKB and MAPK signaling pathways that regulate metabolic processes and growth signaling. Pro-inflammatory factors secreted by obese WATME trigger the activation of (ii) the MAPK and (iii) the JAK/STAT signaling pathways, which subsequently inhibit insulin signaling, impair insulin sensitivity, and ultimately result in insulin resistance. b, obesity-induced coronary heart disease. Pro-inflammatory factors secreted by obese WATME exert atherogenic effects by generating RONS through the activation of (iv) the IKK/NF-κB, JAK/STAT/NF-κB, and (v) the Ang II signaling pathways. Additionally, the pro-inflammatory factors generate vasoconstrictors through activating (vi) the MAPK/ERK signaling pathway. c, obesity-induced breast cancer. Obese WATME facilitates the progression and metastasis of breast cancer by activating (vii) the MAPK/p38, the PI3K/PKB, and (viii) the JAK/STAT3 signaling pathways.

Fig. 4. Schematic diagram of WATME-specific immunotherapy for obesity-induced type 2 diabetes (T2D). Inflammation in obese WATME plays a crucial role in the development of insulin resistance, which leads to the development of obesity-induced T2D. These effects are mediated by inflammatory signaling pathways, enzymes, and senescent cells. Several WATME-specific immunotherapies targeting those have been discovered; some examples are sulforaphane, formononetin, SZ-A, and coffee silverskin and husk.

Fig. 5. Schematic diagram of WATME-specific immunotherapy for obesity-induced coronary heart disease (CHD). Obesity is associated with increased secretion of pro-inflammatory cytokines, especially IL-1β, TNF-α, and IL-6, by obese WATME. There is established evidence linking these cytokines to the occurrence of atherosclerosis. Several WATME-specific immunotherapies have been identified for the treatment of the atherosclerosis-mediated development of obesity-induced CHD. These WATME-specific immunotherapies target the NLRP3 inflammasome, TNF-α, and inflammatory signaling pathways. Some examples are bazedoxifene and etanercept.

Table 1. Summary of WATME-specific immunotherapy for obesity-induced NCDs.
Relinquishing the Practices of a Lifetime: Observations on ageing, caring and literacies

This paper draws on ethnographic and case study data from a variety of sources to explore the changing social practices of literacy across the lifespan. It explores the new literacy demands that people encounter with age when dealing with life events in a range of social domains. These include increased leisure; travel; changing family and peer relationships as a result of death and loss; issues of health and disability; and accessing new technologies. It reveals how literacy is implicated in people's changing sense of time, place and history, and how the older person's identity as a literate actor may be interrupted by both institutional and informal processes of caring and by their disengagement from spheres of activity that were previously central markers of their identity. Ageing thus involves both expansion of and retreat from familiar literacy practices.

Introduction

Older adults are one significant group who have been defined as being 'outside' of contemporary literacy policy because they are not seen to be relevant to goals of economic productivity. This is despite the fact that populations of post-industrial societies are becoming progressively older and that adults' literacy skills - when measured in cross-sectional surveys - appear to decline systematically with age (see, for example, Weinstein-Shr 1995, OECD 2000). What consequences do literacy changes across the lifespan have for individuals and societies? What are the implications for policies and programs serving older adults? In what ways do the measured differences in literacy skills correspond with changes in the literacy practices of adults as they become older?

This paper draws on ethnographic and case study data from a variety of sources to explore the changing social practices of literacy across the lifespan. It illustrates some of the new literacy demands that people encounter with age in the legal and financial domains, in dealing with life events linked with changing family and peer relationships; death and loss; increased leisure; travel; and new technologies. It reveals how literacy is implicated in people's changing sense of time, place and history, and how the older person's identity as a literate actor may be interrupted by the processes of caring and their disengagement from spheres of activity that were previously central markers of their identity. It suggests that ageing thus involves both expansion of and retreat from familiar literacy practices.

Some key issues in the field of literacy studies are thrown into new relief when examined through the lens of ageing. These include the role of literacy in relations of interdependency and mediation, especially the delicate balances of inter-generational support and control; negotiating the boundaries of public and private knowledge; and the importance of considering trust, fear and respect as factors in supporting literacy practices among older people that are acceptable to them.
Picturing literacy and ageing from above and below

Explanations in the survey literature of declining literacy performance with age are typically couched in terms of cognitive change in functioning or changing educational opportunity over the last century. Factors such as disuse of skills, or disinclination to learn in old age, are cited (see Weinstein-Shr 1995). Whilst these factors are undoubtedly part of the picture, my aim is to examine what additional insights into the relationship between literacy and seniority can be gained from applying a social practice perspective.

The features of the social practice approach to literacy studies have already been rehearsed in general terms in the introduction to this volume by Sondra Cuban. For the purposes of this paper it is important to note that this approach aims to present literacy and ageing from the perspective of those experiencing it directly, using a detailed, ethnographic method and focusing on the dynamics of individual subjects within a complex of contextual factors. Such studies differ from traditional surveys of need in focusing not just on the deficits of those with few formal educational achievements. Highly educated adults who have made their mark on life also experience change and have to re-negotiate the risks and positive benefits of literacy in older life. The ways in which they do so are just as important to explore if we are interested in understanding textually mediated social worlds.

A social practice approach can usefully document three identity-related aspects of literacy: 1) older people's subjective experiences of literacy, 2) the changing social networks and affiliations that are significant to older people, and 3) the ways in which older people are positioned by their literacy-mediated encounters with individuals and with social institutions. Such data can offer rich accounts of subjectivities and the social meanings of literacy.

This social practice approach forces us to take account of aspects of literacy experience other than the purely cognitive. A focus on ageing leads inevitably to consideration of embodied practices and the changing materialities of how a person, as a subject, engages with literacy. This is not just a result of the changing materialities of literacy technologies, a theme that is already well explored in the literature (see for example Snyder 1997, Kress 2003), but of the changing materiality of the subjects themselves, in sometimes rapid and extraordinary ways, due to changes in memory, sensory changes in sight and hearing, joint stiffness, lack of mobility, strength and energy. There are positives, too, in embodied experience: a different sense of time and pace; the availability of more, slower, time; and a breadth of emotional experience and understanding built over the lifespan and invested in particular literacy practices and artefacts. These positive aspects are often drawn on as resources by other people involved in the networks and organizations in which older people participate. As Janet Isserlis puts it: elderly people have made spaces in the world, have interacted with people and events that many of us, who are younger, may be familiar with or not but have not experienced in the way that someone who was alive before the advent of television, the internet, fruit leather or space exploration might. Older people know things that young people don't know and they know things differently (Isserlis 2003).
Data

This paper draws on existing ethnographic and interview case study data from a variety of sources to identify a set of themes suggested by the social practice perspective. It is a speculative first exploration of this topic, and the data has mostly been generated in studies that have aimed to document literacy practices more generally. Few of these studies have focused specifically on ageing, but they have picked up relevant material incidentally by looking at textually mediated lives in a range of settings and conditions. Obviously, the details and literacy practices of older people will differ considerably under different social, cultural, economic, political and geographic conditions. The studies I quote from do not represent an even geographical spread and there is an emphasis on the UK and my own locality in the North West of England. However, my assumption is that the conceptual framework of literacy practices, and the elements and processes identified through it, can usefully be applied to the experience of older people in other contexts. The studies I have drawn on for this paper include Local Literacies (Barton and Hamilton, 1998, based in Lancaster, England), Changing Faces (Hamilton and Hillier 2006, based on a national English sample), and practitioner research projects (e.g. Milioti 2000 and Isserlis 2003, both from North America). Some quotations are also drawn from a collaborative study of Changing Literacies and Changing Technologies across the Lifespan currently being carried out with a group of Senior Learners at Lancaster University, England (referred to in this paper as the 'Senior Learners Project'). This project and the present paper are preliminary steps toward more systematic study.

Literacy practices expand with age

This section discusses the expansions of literacy that take place in old age as a result of the accumulated experience and mastery of a lifetime. These expansions result from the sense of time and place that many people arrive at or strive for in old age, a sense of history that is at once both individual and collective and which draws on reservoirs of cultural and linguistic knowledge. This can be seen clearly in the example of family history. Documenting family trees and crafting stories related to family history that can be passed on to new generations is a common interest among older people. The status and identity that come from being the eldest in a family or community network is constantly refocused as friends and relatives die, people move into new roles and intergenerational dynamics change. There is never total closure or certainty; rather, a degree of flexibility is always present, and literacy practices and interests shift accordingly.

Harry (see Barton and Hamilton 1998:81) was 66 years of age, a younger elder, when we met him as part of the Local Literacies project. He was already a grandfather, retired from the fire service and interested in writing his life history, especially his memoirs from the war. He was a man who had learned his literacies in his adult life through his work, his networks and interests, and he held a respected place in the local community. Despite his lack of formal education, he was frequently asked for advice and to write references, he wrote letters to the local paper and used the library regularly. He was an officer in several local community groups.
Although we did not treat Harry particularly as an 'old man' in our analysis, there are several aspects of his literacy practices that seem, with hindsight, to be age related. He told stories of the limited educational opportunities that left him aware of the greater formal proficiencies of his own children and grandchildren. Such stories are typical of his generational cohort (see Antikainen et al. 1996, Field and Malcolm 2005), but there is also something more general to notice about the ways that he was using his writing and reading to make sense of a life that he can already look back on and draw lessons from, for himself and for others. Whilst his wife was still alive, Harry, like many other people, had already begun to document his family history, using written artefacts and a range of other media: collecting books, photos and family records; looking up names in church record books; and accompanying his wife on visits to cemeteries and libraries.

The advent of the internet has made visible the extent of such ancestor hunting activities. In the UK, the extent of interest in genealogy first became clear in 2002 when the data from the 1901 census was posted online. The website was overwhelmed, crashing under the weight of 30 million hits a day as people looked up their ancestors (Rudd 2008). This passion, utilised in reminiscence work in literacy programs and reflected in the popularity of local and family history adult education, may not be equally shared across social groups. I would suggest that it may be identified particularly with older adults and has special appeal to those who have been displaced from their familiar cultural context, not just by time, but through war, economic migration and so on. Younger people may also engage in family history activities in the context of intergenerational exchange, as found in one example drawn from the Senior Learners Project. Inspired by the 2006 Football World Cup, the grandson of 84-year-old Roy helped his grandfather research and assemble documents about his great grandfather, who was one of the first professional footballers and trainers in England. Between them, they collected a mixture of family photographs, old programs, newspaper cuttings and internet information.

Literacy in different domains of older life

The example of family history is one domain where literacy expands with age, and it raises a set of interesting questions about sense-making and identity that can then be asked of younger and other social groups. The following section looks at several more key domains in which both expansions of, and withdrawal from, literacy-related activities occur. It looks in turn at leisure, financial matters, use of the mass media and new technologies, and finally at the domain of caring, which leads into a consideration of institutional responses to older people.
Leisure

Old age often brings substantial continuities with earlier interests, activities and expertise, though some domains, such as the legal, financial and health domains, may become ever more salient and generative. For some people, community-based networking and local political activities such as lobbying, organizing or serving on committees become more central as time is freed up from other commitments. Other activities (such as those related to employment, participation in rock music festivals or extreme sports) may recede, although many people maintain spectator links with these via social networks and the mass media. In the Senior Learners Project, the importance of music - recording and listening to it - was one of the first domains to emerge in discussions about new technologies. In the Local Literacies project, Cliff Holt and his step-sister, Rose, searched out a variety of local entertainment and leisure activities within the constraints of their very limited budgets. About six months before we interviewed them, they had developed a common interest in horse racing and betting and explained in detail their attempts to get to grips with the practices associated with placing bets. These included looking at newspapers, tote books, betting slips, the 'tick-tack' signaling system used on the race course and information on television and computer monitors (Barton and Hamilton 1998:140).

Financial Matters

New literacy-related demands in the legal and financial domains are faced by many older people when they deal with events such as death, loss and managing property and inheritance. Older people act as sponsors and executors for others, as well as rearranging their own affairs with a view to dispersing rather than accumulating assets, an earlier life preoccupation. In Janet Isserlis's portrait of her aunt Lil (2003), despite failing faculties and energy, Lil remained, almost to the moment of her death, concerned about, and connected to, ensuring that her papers were in order. She remembered, for example, a cheque that she needed to write for a relative, maintaining well organized records and procedures for carrying this out. She adapted these procedures to incorporate an increasing degree of collaboration with others, reducing the burden of literacy that she needed to carry herself.

Early in the afternoon of my birthday, she had my mother write the date, my name, the figure and words: Lil then signed the cheque. I was struck by how dramatically Lil's signature had changed; the letters were scribbly; her usual characteristic writing had visibly changed. I no longer remember how much, if any, writing she'd done during her last month in hospital. I don't think she did much, beyond, maybe, circling items on a daily menu… I do know (or think I know) that during her last few weeks, she'd lost interest in reading. I think she found it exhausting after having been an avid reader for as long as I'd known her (Isserlis, 2003).
Mobility and Media

Old age may bring either expansion of, or withdrawal from, the domain of travel and mobility, depending on circumstances. Expansion may occur due to increased time, eligibility for discounted services and sometimes available money after retirement. For many, however, health and reduced financial resources place new restrictions on mobility that have to be accommodated. One result of this for many older people, especially those confined to the home or institutional care, is the importance of the mass media as an information source, for entertainment and social connection. A recent report (Office of Communications 2007) found that older people in the UK access familiar rather than newer media and that hours of television watching are highest in the oldest age groups and among those with disabilities. Literacy may increase in importance as a mediator of social communication as face-to-face contact with others becomes more difficult to maintain. A common motivation for learning to use e-mail among participants in the Senior Learners Project was to communicate with distant relatives, especially children, for whom this was the obvious way to keep in touch.

Living a life inevitably brings with it encounters with new technologies, and it is well worth considering how older people approach new technologies in distinctly different ways from, for example, children and young adults. The most obvious aspect of this is the overlaying of new competencies on old, whether it is the change in a system of measurement or a currency that renders people effectively bi-numerate, or the displacement of an old technology such as the typewriter or the postcard with a replacement for the same task. These changes bring both advantages and disadvantages for the user and render some of their existing skills obsolete. In some cases the encounter with a new technology is a sudden experience in old age, but in many other instances it is part of a lifelong adaptation to change, and we should not underestimate the resources people can bring to this process, the meta-level knowledge it generates and the flexibility with which new practices can be incorporated into daily routines. In this sense, the skilled literacy user is constantly relinquishing their established knowledge and practices in the pursuit of everyday goals. Pam recounts with humor learning to text from her daughter as, in part, having to learn to re-compose the wordy message she would normally write into something shorter that can be, and is expected to be, sent in an instant:

We went to H------- on Bank Holiday Monday and on the way back I got a text and I was trying to send one back and she said 'mother are you writing a four page letter'? Because it was taking me that long she said 'give it to me, what do you want to say?' (Hamilton and Hillier 2006:53)

There are some interesting studies emerging of older people's incorporation of health technologies into their domestic routines, for example the use of alarm pendants (see Domenech and Lopez 2007) or hearing aids.
A key dimension here is the identities that people build for themselves in relation to health risks. These determine how intimacy with a protective device develops, as the device continually questions a person's existing view of their self-efficacy. A fit, but at risk, older person may not wish to identify with the information and images in a brochure showing someone like themselves in a dangerous position, for example after a fall. Intimacy with a protective alarm may paradoxically make them feel more at risk, less in control, rather than safer, with the result that they 'lose' or reject the device. Where a device increases a sense of control and expertise, however, it may be actively incorporated into daily routines. An example of this from the Senior Learners Project was self-monitoring of blood pressure using a device that can be bought from the pharmacy. This enabled more frequent testing and a double check on information given by the doctor. These examples highlight how the use and understanding of information about health artefacts - including drugs - is much more complex than simply whether people can read the instructions given out by a health professional.

Caring

The domain of caring is one that is frequently highlighted in studies of older adults. On the one hand, many older people become increasingly involved as carers themselves, partly as a result of the ageing process itself, partly because, in withdrawing from the world of paid public work, they have time to give attention to younger generations. If they are physically able and close by, they may be closely involved as babysitters and in childcare more generally, as Pam, the informant quoted earlier, describes as part of her reciprocal family relationships (see Hamilton and Hillier 2006:53). Many grandparents are primary carers of children (see Mission Australia 2007, Suarez 2007). On the other hand, as time goes on, many older people become increasingly dependent on others for their own personal care.

Institutional positioning of older people

The examples given in the previous section have touched on issues of identity and literacy, and these become more important as we think about how the increasingly textualised world interacts with the experience of old age, benefiting some, marginalizing others. By moving from the world of employment into retirement, old age and pensions, many people find themselves, for the first time in many years, in contact with the state bureaucracies delivering caring and welfare services directly on their own account. Such agencies are, to use Deb Brandt's term, major sponsors of literacy in old age, and they intrude into the privacy of domestic life (Brandt 2001). In data from the Senior Learners Project, Roy smiles as he tells how the social worker came with an independent living assessment form with many questions to fill in. He submits to this because he feels entitled to some aids that are available free through the local council. 'She asked me if I had my own teeth and how often I clean them. I told her of course I had my own teeth, who else's would I have? But what did she need to know that for?' In fact, he has had dentures since the age of 23, but this piece of personal, intimate information is not given freely to someone he has never met before, sitting in his home with a clipboard.
Roy, like others of his generation, has also had to accommodate changes in collecting his weekly pension: from a system that required no writing (the post office cashier tore out the weekly voucher from a book, date stamping both the voucher and the remaining stub, and handed over the cash) to one that uses a SMART card and requires memorizing a PIN and punching this into a machine quickly and accurately, in a public place, in order to obtain the money from his bank account.

In her account of her grandmother's move into a care home due to her deteriorating eyesight, Deana Milioti (2000) describes how this move interrupted both her grandmother's social networks and her established literacy practices, which had enabled her to communicate with friends and family of all ages, to cook food, and to get news and information. In a noisy environment, with busy staff, surrounded by more needy patients than herself, the routines of the care home exacerbated her problems rather than mitigated them. Milioti (2000) observes the high reliance on oral communication with the patients, whilst the organizational aspects of the care home that co-ordinated the staff's actions (for example around medication and meal times) were largely written and not shared with patients. Care homes also tend to limit residents' access to communication technologies such as phones and computers.

The mediating functions of family and community networks

The institutional encounters with literacy described above often involve a three-way transaction between the older person, institutional staff and a family carer. The role of the family carer may vary from that of onlooker to active intermediary and advocate. Many of the procedures and paperwork designed by caring agencies have to take account of this: the autonomy of the older person must be respected, yet their need for support by trusted others is reflected in the form-filling procedures demanded by medical and other services. A common example is a scene in the hospital outpatient's waiting room where the patient discusses and interprets with relatives the form that the doctor has asked them to fill in and the implications of giving consent to a medical procedure.

Paperwork can become overwhelming, or its significance can go unrecognised. In one example, Robert's daughter discovers months of unpaid bills and correspondence in his study, left unopened. Though he has been a distinguished doctor and is a highly literate person, the paperwork entailed by these letters is too exhausting or uninteresting to engage him anymore. However, the example of a collaboratively written cheque in Janet Isserlis's case study of Lil illustrates the crucial power of the signature to the exercise of adult autonomy (as Mace 2002 argues). Giving power of attorney, even to a well-known friend or family member, is a most significant step for a person's sense of identity and control. The fear of fraud and deception can colour even apparently cordial family relations. Relinquishing the right to sign is symbolic of an identity shift from being an authoritative and autonomous member of the family and community, a provider and carer of others, a writer and decision-maker, to being cared for.
In the negotiation of these situations by elderly people and their friends, family and institutional care services, the necessary letting go of control reveals them to be complex and fraught with power relations. The site of old age thus highlights the emotional sensitivities of networks of interdependency, the importance of trust, fear and respect, and the large amount of work that must be done to achieve a balance between support and control. The metaphor of social capital captures little of the texture of the subtle exchanges that constitute acceptable behaviors between consenting adults; these have been better explored in anthropological studies of reciprocity (Hyde 2006).

Conclusions and implications

To date, studies of literacy and ageing have tended to be prompted by the changing demographic of age and concerns for social inclusion among a group who had fewer opportunities for schooling than the current generation. These older people may now experience increasing isolation from the networks of friends and family that have sustained them across their lifespan. Whilst I share these interests and concerns, an ethnographic eye on the literacies of old age proves to be productive in its own right. It highlights domains, relationships, new questions and issues that we can take back into literacy studies more widely. In particular, it focuses attention on the ways in which older people are repositioned by the institutional encounters that accompany changes of status and rites of passage such as retirement from paid work, becoming a grandparent, eligibility for services and pensions, or becoming formally defined as disabled. These encounters are often mediated by literacy. A fine-grained exploration of them underscores the complexity of exchanges within networks of support. Trust, intimacy, respect for autonomy; reciprocity and economic exchange; negotiation of the boundaries of public and private space - all these are significant dimensions of people's experience. The ethnographic eye also emphasises the dimensions of embodiment and materiality involved in resisting or managing changes in both the ageing human agent themselves and the technologies involved in communicating across the lifespan.

There are thus a number of lessons for policy that can be drawn from this paper. Firstly, a lifespan approach to literacies is important. There is a great deal of life after retirement and after the age of 50. Policy needs to differentiate more clearly between 'younger' and 'older' elderly, and there is a need for cross-disciplinary policy responses and conversations, for example between education, health and social services. Secondly, we need to pay attention to the potential of the mass media and communicative resources, as well as formal educational opportunities, to enhance literacy use, new learning and autonomy among older people. Paradoxically, with decreased mobility, older people have a higher reliance on the mass media and technological solutions to keep in touch with other people and the world at large. Thirdly, we need a better understanding of the complex mediating roles of family, peers and institutions (often all implicated at the same time) in negotiating and supporting changes in literacy practices over time. Intergenerational learning programs that span several generations, including grandparents as active participants, have great potential to promote age desegregation and understanding of these issues.
Economics of Fish Production at Chitwan District, Nepal

A study was conducted in 2016 to analyze the economics of fish production at Chitwan District of Nepal. Three study sites in the east, west and south parts of Chitwan were selected purposively. A total of 90 households, 30 from each study site, were selected randomly and were interviewed using a pre-tested semi-structured questionnaire. Secondary data needed for the study were obtained from DADO, MOAD, NARC and other related organizations working in the fisheries and aquaculture sector. Descriptive statistics and an extended Cobb-Douglas production function were used to accomplish the study objectives, for which MS-Excel and SPSS 16 were used. The B/C ratio is obtained by dividing the gross return by the total variable cost incurred. The total cost of production per ha of pond area was Rs. 743798 per year, with 79 and 21 percent variable and fixed cost components, respectively. Feed cost (28 %) was the largest cost item, followed by the costs for labour (25 %), fingerlings (10 %), maintenance (6 %), manure cum fertilizers (5 %), fuel cum energy (3 %) and limestone and others (2 %). The average gross return and net profit realized per ha were Rs. 1223934 and Rs. 480135, respectively. The cost, return and profit were calculated to be highest for east Chitwan, with the highest B/C ratio, followed by west Chitwan and south Chitwan. The B/C ratio for the district was found to be 1.63. The return to scale was found to be decreasing, with a value of 0.654, indicating that a 1 percent increment in all the inputs included in the function will increase income by 0.654 percent. Production function analysis, including five variables, showed significant effects of human labour, fingerlings and fuel cum energy costs, while feed and manure cum fertilizer costs were insignificant.

INTRODUCTION

Nepal has rich fresh water resources including snow-fed rivers, lakes, ponds and torrential hill streams. It is blessed with three major river systems - Koshi, Gandaki and Karnali. This vast water resource has been supporting several indigenous fish species which play a great role in the income generating activities of landless and marginal farmers (DoFD, 2013). Fisheries have been practiced in Nepal for a long time and have a strong tradition. Although fish farming is not a main agricultural activity, it is an important supplement to the daily diet in rural areas of Nepal, contributing about 2.47 percent to AGDP (Rai, Clausen, & Smith, 2008). Fish farming in Chitwan has been practiced since 2037/38 BS (DADO Chitwan, 2072). Nearly 1255 farmers from 58 farmer groups or cooperatives of this district were involved in fish farming. The total number of fish ponds was about 2073, with a total area of more than 854 ha and a water area of 539 ha. The annual fish production in the district was more than 264 mt, with a productivity of 4.2 mt per ha (Karki, 2016). The fish mission program has been implemented in the district since 2064/65 B.S. Major fish breeds being cultivated in the district are Silver carp, Big head carp, Rohu, Naini, Common carp, Grass carp, Bhakur, Pangas, etc. Along with the local market, fish produced in the district is currently marketed to Kathmandu, Pokhara, Dhanusha and Siraha districts of the country (DADO Chitwan, 2072). In Chitwan district, many farmer groups and cooperatives are involved in the production and marketing of fish. It ranks second among the ten highest fish producing districts of the country (Karki, 2016).
The demand for fresh fish is increasing day by day due to people's increased consciousness of their health and nutrition. Meeting the fish demand through capture fisheries and imports may not be sustainable; therefore, promotion and management of pond aquaculture could be the only alternative for the sustainability of this enterprise in Nepal. Findings from this study will guide producers and marketing institutions toward efficient utilization of resources with a proper production plan. The objectives of the study were to analyze the economics of fish production at Chitwan District of Nepal, estimate the cost and return of fish production in the study area, and analyze the profitability. The return to scale was calculated to assess the elasticity of production, for which the Cobb-Douglas production function was used.

METHODOLOGY

The study was conducted in Chitwan district. Three potential sites of Chitwan, namely Ratnanagar and Khairahani municipalities from east Chitwan, Chitrawan municipality from west Chitwan and Madi municipality from south Chitwan, were selected purposively based on fish production potential, in consultation with stakeholders involved in fish production and marketing. A total of 90 households, 30 from each site, were selected randomly and were interviewed using a pre-tested semi-structured questionnaire. Secondary data needed for the study were obtained from DADO, MOAD, NARC and other related organizations working in the fisheries and aquaculture sector. The information collected from the survey was coded, tabulated and analyzed using SPSS 16 and MS Excel 2007. The total cost of production was calculated by summing the total variable cost (TVC) and total fixed cost (TFC) incurred in the production process. The costs incurred for fingerlings, feed, energy cum fuel, manure cum fertilizers, labour (including hired and family labour), maintenance, lime and other items were considered variable costs, whereas the expenses on land rent, interest payments and depreciation of farm tools and machinery were included under fixed cost. The benefit-cost ratio (B/C ratio) was calculated by dividing the gross return by the total variable cost, i.e., B/C ratio = Gross return / Total variable cost. The return to scale was calculated from the Cobb-Douglas production function:

Y = A x1^β1 x2^β2 x3^β3 x4^β4 x5^β5

On taking logs on both sides:

ln Y = ln A + β1 ln x1 + β2 ln x2 + β3 ln x3 + β4 ln x4 + β5 ln x5

where Y = gross/total return (Rs./ha), A = constant or intercept of the function, x1 = labour cost (Rs./ha), x2 = feed cost (Rs./ha), x3 = fingerlings cost (Rs./ha), x4 = fertilizers and manure cost (Rs./ha), x5 = fuel and energy cost (Rs./ha), β1, β2, ..., β5 = coefficients of the respective variables, and ln = natural logarithm. The summation of all the production coefficients indicates the return to scale. Returns to scale reflect the degree to which a percent change in all inputs causes a change in output.

Socio-demographic characteristics of the respondents

The socio-demographic characteristics of the respondents include population and gender distribution, ethnicity, family size, economically active population, education, occupation, land holding size, and experience in fish farming. The mean age of the respondents in the study site was 41 years. 54 percent of the respondents were male whereas 46 percent were female. The average number of years of schooling was 9. The average experience of fish farming in the study site was 12 years. 36.7 percent of the respondents belonged to the Brahmin/Chhetri ethnicity.
58.9 percent of the respondents belonged to the Janajati and Dalit communities, whereas the remaining 4.4 percent belonged to other groups. The average areas of upland, lowland and pond were 2.9, 31.64 and 13 kattha, respectively (Table 1).

Cost of fish production per ha of pond area per year

Table 2 presents the cost of fish production per ha of pond area per year. The total cost (TC) of fish production per ha of pond area per year was Rs. 743798. The total variable cost was Rs. 585724.58, which was about 79 percent of the total cost. Variable cost in the production of fish comprises the costs of fingerlings, feed, labour, fertilizers and manure, limestone, fuel and energy, and miscellaneous items. Of the total cost, feed cost is found to be the largest item, accounting for about 28 percent of the total cost. Feed cost is followed by labour cost, which is 25 percent of the total cost. The cost of purchasing fingerlings was about 10 percent of the total cost. The costs of maintenance, manure cum fertilizers, fuel cum energy, limestone and others respectively occupy about 6, 5, 3, 1 and 1 percent of the total cost. The total fixed cost of fish production per ha of pond area is Rs. 158169.2, which is about 21 percent of the total cost. The major headings under fixed costs are the rental value of land, interest on long-term loans and depreciation of tools, which account for about 16, 3 and 2 percent of the total cost, respectively (Table 2).

Cost and return per ha of pond area

The total cost of fish production per ha of pond area in the study area was found to be Rs. 743798 (Table 3). It varied between Rs. 252562 and Rs. 2791600. The total return (TR) and net profit realized per ha were Rs. 1223934 and Rs. 480135, respectively. The maximum levels of TR and net profit realized per ha were Rs. 3880000 and Rs. 1759140, respectively. The minimum TR per ha was found to be Rs. 131250. The results showed that some farms attained negative profit, i.e., some farms were in loss, and the maximum loss realized per ha was Rs. -496290 (Table 3). Cost, return and profit also differed across the three study sites (Table 4). In some areas of south Chitwan, especially in the buffer zone where farmers are supported by the TAAL project for pond digging, the main purpose of pond digging was to keep out wildlife rather than to farm fish. These farmers used a very small amount of inputs in the pond, and the fish harvest was correspondingly low. Also, the commercial fish farmers of this study site harvest fish once a year, while the farmers of east and west Chitwan harvest fish once every 8 months. For these reasons, the cost, return and profit were found to be lowest for south Chitwan compared with east and west Chitwan.

Benefit-cost ratio

The B/C ratio gives an idea of how far the cost incurred during the production process is recovered by the return obtained from the sale of the product. The B/C ratio was found to be 1.63 for fish farming in Chitwan district. The respondents were sampled randomly from three different parts, i.e., east, west and south Chitwan. Among them, the B/C ratio was found to be highest for east Chitwan, i.e., 1.74, followed by west and south Chitwan, i.e., 1.72 and 1.42, respectively (Table 16). The B/C ratio was found to be greater than unity. Thus, we can conclude that fish production in Chitwan district is profitable.

Production function analysis

Many inputs are required for fish production. Each input has a certain degree of effect on the quantity of fish produced; in other words, the quantity of fish produced is the result of the effects of the inputs used.
To estimate the effects of these inputs, an extended Cobb-Douglas production function was applied in this study, and the result obtained is presented in Table 6. Five variables were included to capture their effects on fish production: human labour cost, feed cost, fingerlings cost, manure and fertilizer cost, and fuel cum energy cost. Of the five variables, three (human labour cost, fingerlings cost and fuel cum energy cost) were significant at the 1 percent level, while the other two, feed cost and manure cum fertilizer cost, were not significant. The sum of the coefficients of the different inputs was calculated to be 0.654 for fish production. This indicates that the production function exhibits a decreasing return to scale and implies that a 1 percent increment in all the inputs included in the function will increase income by 0.654 percent. The coefficient of multiple determination, R², of the model was 0.615 for fish production, indicating that about 61.5 percent of the variation in gross return was explained by the explanatory variables included in the model. The adjusted R² value was 0.592, indicating that, after taking into account the degrees of freedom (df), 59.2 percent of the variation in the dependent variable was explained by the independent variables included in the model. The F-value was found to be 26.79, which is highly significant (i.e., significant at 1 %), indicating that all the inputs included in the model were important for explaining the variation in the total revenue of fish production in the study area.

DISCUSSIONS

The total cost of production was found to be Rs. 743798, while that estimated by DOFD was Rs. 792740 (Byabasayik Matsya Palan Prabidhi, 2072). This implies that the production cost in the study area was consistent with the cost estimated by DOFD. Variable cost constituted about 79 percent of the total cost of production. Within the variable cost, feed cost was the largest item, with a 28 percent contribution to the total cost of production. According to Oluwasola and Damilola (2013), in Nigeria variable cost accounted for 78 percent of the total cost of production. A similar result was found by Akinyele John (2011) in Nigeria, where variable cost accounted for 74 percent of the total cost of production and feed cost accounted for about 24.72 percent. Similarly, Olasunkanmi (2012), from Nigeria, found that variable cost accounted for about 87 percent of the total cost of production and feed cost for 34 percent. This result is also consistent with the research done by Penda et al. (2013) in Benue State, Nigeria. Awoyemi and Ajiboye (2011) also reported feed cost as the largest cost item, with a 17.7 percent contribution to the total cost of production. Among the variable costs, labour cost (25 %) was the second largest item, followed by fingerlings (10 %), maintenance cost (6 %), manure and fertilizers (5 %), fuel cum energy (3 %), limestone (1 %) and miscellaneous costs (1 %), respectively. The expense on fingerlings was 10 percent of the total cost; a similar result, i.e., 12.4 percent of expenditure on fingerlings, was found by Awoyemi and Ajiboye (2011) in Nigeria. Therefore, in the study area farmers spend most on feed, labour and fingerlings. Fish production in Chitwan district appears to be a profitable business, as indicated by the B/C ratio of 1.63.
The B/C ratio calculated by Oluwasola and Damilola (2013) in Nigeria was 1.5. Similarly, the B/C ratio calculated by Olaoye (2013) in Nigeria was 1.69, and the B/C ratio for fish production reported by Olasunkanmi in the Osun State of Nigeria was 1.65. The effects of labour, fingerlings and fuel cum energy on gross revenue were statistically significant at 1 percent. The sum of the elasticities of the variables included in the model was found to be 0.654, indicating diminishing returns. Akinyele John (2011) in Nigeria found a similar result, with a production coefficient of 0.781, which implies that production occurs in the second stage of the production function. This finding is also consistent with that of Olagunju et al. (2007) in their study on the economic viability of catfish production in Oyo State, Nigeria. The research done by Penda et al. also showed decreasing returns to scale, with the sum of the production coefficients equal to 0.591.

CONCLUSION

As a country with sufficient water resources, diverse agro-climatic zones and species diversity, Nepal has a great opportunity to grow different fish species from the Terai to the hilly region. The study was conducted among 90 fish farmers who were randomly selected from three different sites, i.e., east, west and south Chitwan, 30 from each study site. With an annual production cost of Rs. 743798, the total return and net profit realized per ha per year were Rs. 1223934 and Rs. 480135, respectively. Of the total cost, about 79.00 percent was variable cost and the remaining 21.00 percent was fixed cost. Feed cost, the largest component, accounted for about 28.00 percent, followed by the costs of labour, fingerlings, maintenance, manure cum fertilizers, fuel cum energy, and limestone and others, respectively occupying 25.00, 10.00, 6.00, 5.00, 3.00 and 2.00 percent of the total cost. The fixed cost components, i.e., the rental value of land, interest on long-term loans and depreciation of fixed assets, occupied about 16.00, 3.00 and 2.00 percent of the total cost, respectively. Differences in cost, return and profit were found between sites. The production cost was found to be highest for east Chitwan (Rs. 978652.1), followed by west Chitwan and south Chitwan with Rs. 630382.6 and Rs. 622360, respectively. The total return for east, west and south Chitwan was found to be Rs. 1700307, Rs. 1087222 and Rs. 884272, respectively. Despite the higher production cost, the return and profit were found to be higher for east Chitwan. Similarly, the B/C ratio was found to be highest (1.74) for east Chitwan, followed by west (1.72) and south Chitwan (1.42). The fish enterprise was found to be profitable in the study area, as indicated by the B/C ratio of 1.63. The return to scale was found to be 0.654, i.e., a decreasing return to scale. Among the five variables considered, three (human labour cost, fingerlings cost and fuel cum energy cost) were significant at the 1 percent level, while the other two, feed cost and manure and fertilizer cost, were insignificant even at the 10 percent level of significance.
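As a compact numerical illustration of the estimation described above, the following Python sketch fits the log-linear Cobb-Douglas form by ordinary least squares and reports the returns to scale. The data are synthetic (the study's 90-farm raw data are not reproduced here), and the chosen elasticities and cost levels are invented purely for the demonstration.

```python
# Hypothetical sketch: OLS estimation of the log-linear Cobb-Douglas form.
# All numbers below are illustrative, not the study's raw data.
import numpy as np

rng = np.random.default_rng(1)
n_farms = 90                                       # sample size matching the study
# Synthetic per-hectare input costs (Rs.), log-normal around plausible means
X = np.column_stack([
    rng.lognormal(np.log(185000), 0.3, n_farms),   # labour
    rng.lognormal(np.log(210000), 0.3, n_farms),   # feed
    rng.lognormal(np.log(75000),  0.3, n_farms),   # fingerlings
    rng.lognormal(np.log(38000),  0.3, n_farms),   # manure and fertilizer
    rng.lognormal(np.log(22000),  0.3, n_farms),   # fuel and energy
])
beta_true = np.array([0.25, 0.05, 0.20, 0.03, 0.12])   # invented; sums to 0.65
ln_y = 2.0 + np.log(X) @ beta_true + rng.normal(0.0, 0.2, n_farms)

# ln Y = ln A + sum_k beta_k ln x_k, estimated by ordinary least squares
design = np.column_stack([np.ones(n_farms), np.log(X)])
coef, *_ = np.linalg.lstsq(design, ln_y, rcond=None)
print("estimated elasticities:", np.round(coef[1:], 3))
print("returns to scale:", round(coef[1:].sum(), 3))   # < 1 => decreasing
```

On data of this kind, a sum of estimated elasticities below one reproduces the decreasing-returns-to-scale diagnosis reported in the paper.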
This work analyses the application of heuristic algorithms to the transmission network expansion planning problem using the Hybrid Linear Model (HLM). The HLM is a relaxed model which has not yet been fully explored. The work presents a detailed study of a constructive heuristic algorithm for the HLM and proposes an extension of the model and a solution technique for multi-stage planning. The quality of the solutions found by the HLM and the possibilities of applying the model to transmission system planning are also discussed. Finally, tests with the best-known systems from the specialized literature are presented.

INTRODUCTION

The main objective of the electric system transmission expansion plan is to specify the transmission lines and/or transformers that should be built in order for the system to operate in an adequate form over a specified planning horizon. Problem data are: the base year topology, candidate circuits, generation and load data in the specified planning horizon, investment restrictions, etc. The planning solution specifies the location, the moment and the quantity of new equipment. When static planning is under consideration there is a single planning horizon. Generalization of these concepts leads to multi-stage planning, with the splitting of the planning horizon into several stages. This work analyses two types of planning approaches with models from network synthesis applied to long-term expansion planning. Usually, topologies found in long-term planning are further analyzed using techniques of short-term planning such as AC power flow and stability analysis, among others; our work, however, addresses only issues related to long-term planning.

The long-term transmission system expansion problem is usually represented by a mathematical model called the DC model. The DC model is a non-linear mixed integer problem which is difficult to solve for large-scale systems. Other models that have been used are relaxed models such as the transportation model and hybrid models. This work analyzes the hybrid linear model and proposes the use of constructive heuristics to solve it. More details of the mathematical modeling can be found in Romero et alii (2002).

When the power grid is represented by the DC power flow model, the mathematical model of the transmission system expansion static planning problem is formulated as follows:

min v = Σ_(i,j)∈Ω c_ij n_ij                                  (1)
subject to
S f + g = d                                                  (2)
f_ij − γ_ij (n0_ij + n_ij)(θ_i − θ_j) = 0                    (3)
|f_ij| ≤ (n0_ij + n_ij) f̄_ij                                 (4)
0 ≤ g ≤ ḡ,  0 ≤ n_ij ≤ n̄_ij,  n_ij integer

where c_ij, γ_ij, n_ij, n0_ij, f_ij and f̄_ij represent the cost of a circuit that can be added to right-of-way i-j, the susceptance of that circuit, the number of circuits added in right-of-way i-j, the number of circuits in the base case, the power flow and the corresponding maximum power flow in right-of-way i-j, respectively; v is the investment; S is the transpose branch-node incidence matrix of the system; f is a vector with elements f_ij; g is a vector with elements g_k (generation in bus k) whose maximum value is ḡ; d is the demand vector; n̄_ij is the maximum number of circuits that can be added in right-of-way i-j; and Ω is the set of all right-of-ways.

Constraint (2) represents the conservation of power in each node, i.e., Kirchhoff's Current Law (KCL), in the equivalent DC network. Constraint (3) represents Kirchhoff's Voltage Law (KVL). Moreover, these constraints are nonlinear, since they contain the product of the investment variables n_ij and the nodal angle variables θ.
In the specialized literature many solution approaches for the transmission planning problem have been proposed. They can be grouped into three categories: (1) constructive heuristic algorithms, (2) classic optimization algorithms such as Benders decomposition and branch-and-bound, and (3) metaheuristics such as simulated annealing (SA), genetic algorithms (GA), tabu search (TS), GRASP, etc. This work is part of a revision process of the mathematical modeling used in long-term transmission system planning, as well as a revision of the constructive heuristics used to solve the resulting models. We propose a detailed analysis of such algorithms, especially those that use the solution given by relaxed models after relaxing the integrality constraint of the integer investment variables. That kind of strategy is more promising and can be easily extended to multi-stage planning. This conclusion became clear when the best known constructive algorithm, Garver's algorithm for the transportation model (Garver, 1970), was intensively analysed and improved in Romero et alii (2003). Therefore, the proposal represents a natural extension of the one presented in Romero et alii (2003) for the transportation model. In this case, the optimization technique is developed for the hybrid model, which is an intermediate model between the transportation model and the DC model.

In the next section the hybrid linear model is introduced and one type of constructive heuristic algorithm is presented. The solution technique is also extended to multi-stage planning. Another aspect considered in the paper is the relevance of relaxed models and constructive heuristic algorithms in the present context of the transmission network expansion problem. Finally, tests are presented with the systems referred to in the specialized literature.

When the constraints (3) that represent Kirchhoff's second law are relaxed in the DC model, the transportation model, which takes into account only Kirchhoff's first law and circuit operation constraints, is obtained. Any intermediary model between the DC model and the transportation model is called hybrid. It must be noticed that the transportation model is linear and the DC model is non-linear. Therefore, any model that considers only part of the constraints (3) represents a hybrid model. In this context, it is possible to formulate a hybrid linear model or a hybrid nonlinear model. This work considers the linear approach, which is formulated as follows:

min v = Σ_(i,j)∈Ω c_ij n_ij                                  (5)
subject to
S0 f0 + S f + g = d                                          (6)
f0_ij − γ_ij n0_ij (θ_i − θ_j) = 0,   (i,j) ∈ Ω0             (7)
|f0_ij| ≤ n0_ij f̄_ij,   (i,j) ∈ Ω0
|f_ij| ≤ n_ij f̄_ij,   (i,j) ∈ Ω
0 ≤ g ≤ ḡ,  0 ≤ n_ij ≤ n̄_ij,  n_ij integer

In this formulation, S0 is the transpose node-branch incidence matrix formed by the circuits and buses of the base topology; f0 is the vector of power flows through the circuits of the base topology, with elements f0_ij; S is the transpose incidence matrix of the entire system; and f is the vector of flows through added circuits, with elements f_ij. The symbol Ω0 represents the set of base case circuit indices and Ω the set of indices of all circuits.

In the hybrid model, the flows through circuits which belong to the base case are represented separately from the flows of added circuits. For example, consider that there is one circuit connecting path i-j in the base case and the optimization process adds one circuit in parallel. The flow in the old circuit is represented by f0_ij and that in the new circuit by f_ij, which means that the two values can differ. In this model, only the circuits of the base topology must follow the KVL, a requirement represented by constraint (7). When this condition is imposed, the hybrid model is linear.
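To make the structure of (5)-(7) concrete, the following minimal sketch solves the relaxed HLM (n ≥ 0) on a two-bus toy system with scipy's LP solver. All data (susceptance, capacity, cost, demand) are invented for illustration; the point is that the base-case flow f0 is tied to the nodal angles by KVL, while the flow in added circuits is constrained by KCL and capacity only.

```python
# Minimal sketch of the relaxed hybrid linear model on two buses.
# Bus 1 holds the (implicit) generator, bus 2 the load; illustrative data.
import numpy as np
from scipy.optimize import linprog

gamma, fbar, cost, demand = 10.0, 100.0, 5.0, 200.0
# variables x = [n, f_new, f0, theta2], with theta1 = 0 at the slack bus
c = np.array([cost, 0.0, 0.0, 0.0])        # minimise investment, as in (5)
A_eq = np.array([
    [0.0, 1.0, 1.0, 0.0],                  # KCL at bus 2: f0 + f_new = demand
    [0.0, 0.0, 1.0, gamma],                # KVL on the base circuit only, as
])                                         # in (7): f0 = gamma*(theta1-theta2)
b_eq = np.array([demand, 0.0])
A_ub = np.array([
    [-fbar,  1.0, 0.0, 0.0],               #  f_new <= fbar * n
    [-fbar, -1.0, 0.0, 0.0],               # -f_new <= fbar * n
])
b_ub = np.zeros(2)
bounds = [(0, 3),                          # 0 <= n <= nbar
          (None, None),                    # f_new obeys KCL and capacity only
          (-fbar, fbar),                   # |f0| <= fbar (existing circuit)
          (None, None)]                    # theta2 free
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print("n =", res.x[0], " f_new =", res.x[1], " f0 =", res.x[2])
# Here the relaxed optimum is already integer: n = 1, f_new = f0 = 100.
```

The balance equation at bus 1 is omitted because it is redundant once the bus 2 balance holds and a single generator supplies the remainder. On larger networks the relaxed n_ij are generally fractional, which is precisely what the constructive heuristic discussed below exploits.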
The proposed HLM is a mixed integer linear problem, and it is possible to find solutions for systems of small and medium complexity using a branch-and-bound algorithm, as presented in Haffner et alii (2001). However, for large-scale problems with high complexity, branch-and-bound algorithms demand prohibitive computation times. The advantage of using the HLM is that it is easier to solve than the DC model, although its solution can be far from the DC model's optimal solution. Still, the infeasibilities tend to be smaller than those found with the transportation model.

The HLM can be extended to multi-stage planning by applying the same strategy introduced in Romero et alii (2003). In this case, an expansion investment plan should be determined for the referred base year. Considering an annual discount rate I, the present value of the investment and operation costs for the base year t_0, with a horizon of T years, is the following:

v = Σ_{t∈T} δ^inv_t Σ_{(i,j)∈Ω} c_ij n^t_ij                (10)

where

δ^inv_t = (1 + I)^{−(t_t − t_0)}                           (11)

and t_t is the year in which the investments of stage t are made. Using the relation above, multi-stage planning for the hybrid model takes the formulation (12) shown below:

min v = Σ_{t∈T} δ^inv_t Σ_{(i,j)∈Ω} c_ij n^t_ij            (12)
s.t.  S^0 f^{0t} + S f^t + g^t = d^t
      f^{0t}_ij − γ_ij n^0_ij (θ^t_i − θ^t_j) = 0,  (i,j) ∈ Ω^0
      |f^{0t}_ij| ≤ n^0_ij f̄_ij,  (i,j) ∈ Ω^0
      |f^t_ij| ≤ (Σ_{m≤t} n^m_ij) f̄_ij,  (i,j) ∈ Ω         (13)
      Σ_{t∈T} n^t_ij ≤ n̄_ij                                (14)
      0 ≤ g^t ≤ ḡ^t,  n^t_ij ≥ 0 and integer,  t = 1, ..., T

In this formulation, v is the present value of the expansion cost of the system, and δ^inv_t is the discount factor that determines the present value of an investment made in stage t. The other variables are the same as in static planning, with the addition of the index t, which indicates the planning stage. It must be observed that constraints (13) and (14) are the only constraints that relate different planning stages: (13) lets circuits added in earlier stages carry flow in later ones, and (14) limits the total number of additions per right-of-way. These constraints do not allow the decoupling of the multi-stage problem into T independent problems.

HEURISTIC ALGORITHM FOR THE HLM

A constructive heuristic algorithm (CHA) is an iterative procedure for finding a good-quality solution of a complex problem through a step-by-step process. For the transmission expansion problem, a new circuit, which can be a transmission line or a transformer, is added at each step. The circuit is chosen using a sensitivity indicator specified by the CHA; this indicator is the main distinguishing feature of a CHA. The iterative process stops when a feasible solution is found, i.e., when no further circuit additions are needed, and the solution is usually of good quality. A CHA is robust and usually converges rapidly even for large-scale and complex systems, although such algorithms guarantee only good-quality, not necessarily optimal, solutions.

The sensitivity indicator is defined based on the optimal solution of the hybrid linear model. It must be observed that if the integrality constraints on the investment variables are relaxed, i.e., n_ij ≥ 0, the system (5) becomes a linear programming (LP) problem. The LP optimum is the optimal solution of the relaxed problem, i.e., of the case in which fractional circuits may be added. Furthermore, the LP solution can be used to identify the best circuit to be included in the system. The sensitivity indicator is the power flow through the circuits with n_ij ≠ 0 in the LP solution. The circuit to be added is identified by the following sensitivity indicator:

IS = max { |f_ij| : n_ij ≠ 0, (i,j) ∈ Ω }                  (15)

where n_ij and f_ij are given by the LP solution after relaxing the HLM's integrality constraint. The topology is updated at each step of the CHA. The current topology is formed by the circuits of the base topology and the circuits added during the iterative process.
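A sketch of this selection rule is shown below, under the assumption (made here for illustration) that the LP returns the values n_ij and f_ij as dictionaries keyed by path; the example numbers are invented.

```python
# Sketch of the circuit-selection rule (15): among candidate paths with a
# nonzero (fractional) n_ij in the relaxed LP solution, pick the one whose
# added circuits carry the most power flow.
TOL = 1e-6

def most_attractive_circuit(n_lp, f_lp):
    """n_lp, f_lp: dicts mapping a path (i, j) to the LP values of n_ij, f_ij."""
    candidates = {path: abs(f_lp[path]) for path, n in n_lp.items() if n > TOL}
    if not candidates:
        return None        # the LP adds nothing: the current topology suffices
    return max(candidates, key=candidates.get)

# Example with made-up LP output:
n_lp = {(0, 2): 0.8, (1, 2): 0.2, (0, 1): 0.0}
f_lp = {(0, 2): 0.4, (1, 2): 0.1, (0, 1): 0.0}
print(most_attractive_circuit(n_lp, f_lp))   # -> (0, 2)
```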
One of the most favorable characteristics of a CHA that relaxes the integrality of the investment variables is that the algorithm aims at finding the most important circuit in terms of investment and operation constraints. Thus, whenever the LP finds an integer solution for the n_ij, the algorithm has found the global optimal solution. The major drawback of this type of algorithm appears at the end of the iterative process, when almost all of the n_ij given by the LP solution present low fractional values. In this case, the algorithm becomes inefficient, because decisions taken by comparing small values of n_ij can produce serious deviations when, in a practical situation, an integer number of circuits is introduced. This deviation is partially dealt with by incorporating a procedure that removes irrelevant circuits after the addition phase is finished. Another option is to change the sensitivity indicator.

We can use Garver's fundamental idea, originally applied to the transportation model, to propose a novel constructive heuristic algorithm for the static transmission expansion planning problem using the hybrid linear model. The Base Constructive Heuristic (BCH) Algorithm assumes the following form:

1. Assume the base topology as the current topology and use the HLM.
2. Solve an LP for the HLM using the current topology. If the LP indicates that the system operates adequately with the added circuits, go to step 4 (a solution was found for the HLM). Otherwise, go to step 3.
3. Use the sensitivity indicator (15) to identify the most attractive circuit to be added to the system. Update the current topology with the chosen circuit and go to step 2.
4. List the added circuits in descending order of cost. Using an LP, verify at each iteration whether removing a circuit keeps the system at an adequate operating point. If the system still operates adequately, remove the circuit; otherwise, keep it. Repeat the process, simulating each circuit removal, until all added circuits have been analyzed. The remaining added circuits represent the CHA's solution (this removal phase is sketched in code below).

In the proposed CHA there is a fundamental aspect related to the behavior of the circuits added to the base topology. Depending on the variant, the circuits added during the iterative process must follow KCL only, or both KCL and KVL. If the objective is to find a solution fitted to the HLM presented in (5), the added circuits must follow KCL only. However, if one intends to determine a final solution feasible for the DC model, all added circuits must follow both of Kirchhoff's laws. There is an intermediary case in which circuits added in parallel with circuits existing in the base topology must follow both of Kirchhoff's laws, while those added in new paths must follow the first law only. Therefore, there are three ways of implementing the CHA, which result in different final topologies and different algorithm performance. Whatever the case, an LP is solved at each iteration. This work analyzes only one of these algorithms; the first and third options are analyzed in another paper.
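A sketch of the removal phase of step 4 follows. The function `operates_adequately`, which in the paper corresponds to solving an LP for the trial topology, is replaced here by a toy stand-in so the example runs; the circuits and costs are invented.

```python
# Sketch of the removal phase (step 4 of the BCH algorithm): try to drop
# added circuits in descending order of cost, keeping a removal only if
# the system still operates adequately.

def removal_phase(added, cost, operates_adequately):
    """added: paths added in the addition phase; cost: dict path -> cost."""
    kept = list(added)
    for path in sorted(added, key=lambda p: cost[p], reverse=True):
        trial = [p for p in kept if p != path]
        if operates_adequately(trial):   # one LP per tested removal
            kept = trial                 # circuit is irrelevant: remove it
    return kept

# Toy stand-in: "adequate" iff at least two added circuits remain.
added = [(0, 2), (1, 2), (0, 1)]
cost = {(0, 2): 3.0, (1, 2): 2.0, (0, 1): 1.0}
print(removal_phase(added, cost, lambda t: len(t) >= 2))  # -> [(1, 2), (0, 1)]
```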
Modified Algorithm for the Hybrid Linear Model

The objective of the modified algorithm is to find a topology for a model more restrictive than the HLM. As a consequence, the topologies found by the algorithm should be more expensive than the HLM optimal solution. Circuits added in parallel to existing ones during the iterative process must follow both of Kirchhoff's laws; circuits added in new paths follow KCL only. The advantage of this optimization strategy relies on the fact that the final solution presents fewer infeasibilities when tested in the DC model. This strategy partially preserves the performance and consistency of the algorithm. The circuits that follow only KCL show excellent performance. However, a circuit whose selection was indicated on the basis of KCL alone may not perform well when added in parallel to an existing circuit, because in that situation both laws must be followed.

In this algorithm, the LP has the mathematical formulation of (16). The circuits added during the iterative process are stored in different sets. Consequently, if the selected circuit corresponds to a path (i,j) ∈ Ω^0, then an element of n^0_ij is updated; otherwise, an element of n^1_ij is updated. For this reason, Ω^1 represents the set of new paths created by the addition of new circuits during the iterative process.

Another theme related to the algorithm is the possibility of exchanging the sensitivity indicator. The sensitivity indicator can be substituted or modified using other quantities given by the corresponding LP solution. Two alternatives are: (1) after solving the first corresponding LP, add the integer part of n_ij for all circuits with n_ij ≥ 1 and continue the process as usual; (2) at each step, add the circuit with the largest n_ij. These modifications lead to slightly different algorithms, and for large and complex systems they will provide different solutions.

HEURISTIC ALGORITHM FOR THE MULTI-STAGE PLANNING

The constructive heuristic algorithm for static planning can be extended to multi-stage planning using the same approach presented for the transportation model in Romero et alii (2003). In multi-stage planning, the planning problems of the different stages must be solved in an integrated way. In this section, an extension of the previous CHA to multi-stage planning is presented. The critical point of a CHA for multi-stage planning is the choice of the sensitivity indicator. The proposed CHA for multi-stage planning using the HLM has the following form (its control flow is sketched in code after the steps):

1. Consider the base topology as the current topology and use the hybrid linear model. Make the current stage k = 1.
2. Solve the LP corresponding to problem (12) for the current topology. If n^k_ij = 0 ∀ (i,j) ∈ Ω, then the addition phase in stage k is over; if a local search procedure is to be implemented, go to step 4. Otherwise, go to step 3.
3. Use a sensitivity indicator to find the most attractive circuit in stage k. Update the current topology with the chosen circuit and go to step 2.
4. Execute step 4 of the BCH algorithm for stage k. The circuits which are not removed represent the solution for stage k. Go to step 5.
5. If k is the last stage, stop; otherwise, make k = k + 1 and go to step 2.

In step 2, the algorithm solves an LP for the current topology.
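The stage-by-stage control flow of steps 1-5 can be sketched as follows. The LP of problem (12) is injected as a function and replaced here by a toy stub so the example runs; the sensitivity indicator in step 3 accumulates flows over stages t ≥ k, as described in the next section. The discount factor of (11) is included for completeness: with I = 10% and a four-year offset it reproduces the factor 0.683 used in the tests below.

```python
# Sketch of the stage-by-stage CHA (steps 1-5) with an injected LP solver.

def delta_inv(I, years):
    """Discount factor (11): present value of one unit invested `years`
    after the base year, at annual discount rate I."""
    return (1.0 + I) ** -years

def multistage_cha(n_stages, solve_lp):
    topology = []                              # list of (path, stage) additions
    for k in range(1, n_stages + 1):           # step 1: start at stage k = 1
        while True:
            n_lp, f_lp = solve_lp(topology, k)            # step 2
            fractional = [p for p in n_lp[k] if n_lp[k][p] > 1e-6]
            if not fractional:
                break                          # addition phase of stage k over
            # Step 3: most attractive circuit = largest flow accumulated
            # over stages t >= k.
            best = max(fractional,
                       key=lambda p: sum(abs(f_lp[t].get(p, 0.0))
                                         for t in range(k, n_stages + 1)))
            topology.append((best, k))
        # Step 4 (removal phase for stage k) is omitted in this sketch.
    return topology                            # step 5: next stage until done

# Toy stub for a two-stage problem: the "plan" needs k circuits in
# path (5, 6) by stage k; a real implementation would solve the LP (12).
def solve_lp(topology, k):
    built = sum(1 for _, s in topology if s <= k)
    need = max(0, k - built)
    n = {t: {} for t in (1, 2)}
    f = {t: {} for t in (1, 2)}
    if need:
        n[k][(5, 6)] = 0.5
        f[k][(5, 6)] = float(need)
    return n, f

print(multistage_cha(2, solve_lp))        # -> [((5, 6), 1), ((5, 6), 2)]
print(round(delta_inv(0.10, 4), 3))       # 0.683, the factor used in the tests
```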
The LP is slightly different from the one in problem (12), because the added circuits must be taken into account separately. If the modified algorithm for the hybrid model is used in step 2, the LP becomes the multi-stage version of (16), in which n^1t_ij and n^0t_ij represent the circuits inserted in stage t in new paths (paths not belonging to the base topology) and the circuits inserted in parallel with paths of the base topology, respectively. Ω^1t represents the set of new paths in which new circuits have been added in stage t.

Step 2 has to be analyzed in detail considering two basic aspects: (1) the addition logic, and (2) the sensitivity indicator used. The first problem to be solved is the transmission capability of the initial stages. In other words, the algorithm performs all the circuit additions needed to provide the first stage with adequate operating conditions, then goes to stage 2, and so on, until the last stage is reached. This methodology is the most logical one and was employed in Romero et alii (2003), since circuits added in the early stages remain in operation in the subsequent stages and therefore reduce the need for further investments in those stages.

The proposed sensitivity indicator identifies the most attractive circuit in stage k as the circuit in a path (i,j) with n^k_ij ≠ 0 that carries the largest amount of power flow, considering all circuits added by the LP subroutine for all stages t ≥ k. Consequently, for each such path the indicator accumulates the flows f^t_ij for t ≥ k, where f^{k+1}_ij, for instance, is the power flow in path (i,j) in stage k+1. All of these values become available through the LP solution.

Other alternative formulations of the CHA can be implemented for multi-stage planning. One option consists in performing the local optimization (step 4) only after finishing the addition process in all stages; it must be observed that the circuit removal phase in the proposed CHA comes at the end of each planning stage. Another option is to change the sensitivity index: for example, in stage k, the most attractive path could be a path with n^k_ij ≠ 0 and the largest value of n^k_ij. There are other CHAs employed in transmission system planning that present good performance; some of them were presented in Villasana et alii, 1985; Monticelli et alii, 1982; Pereira and Pinto, 1985; Levi and Calovic, 1991; Dechamps and Jamoulle, 1980. These algorithms can also be extended to multi-stage planning.

TESTS WITH THE PROPOSED ALGORITHM

The proposed algorithm was implemented in the FORTRAN programming language, and MINOS 5.3 was employed as the LP subroutine. Two systems known from the specialized literature were tested, and partial results are presented in detail.

Brazilian 46-Bus Southern System

The southern system has 46 buses, 79 circuits and 6880 MW of demand. The data correspond to static planning (Romero et alii, 2003). For this system, planning is considered with generation redispatch only.

The proposed algorithm finds a solution with investment v = US$ 87,617,000. Table 1 presents the solution process through the iterations; v_LP is the investment given by the LP, and v is the partial investment resulting from the additions performed by the program. The sequence of circuit additions performed by the modified hybrid algorithm is shown in Table 1. It should be noticed that the optimal solution for the analyzed system is v = 72,780,000 for the DC model and v = 63,163,000 for the hybrid model [1]. The optimal solution for the hybrid model is formed by the circuits determined by the algorithm (Table 1) except circuit n_19-21.
Observing the results obtained by the modified hybrid algorithm, the following aspects should be mentioned: (1) the modified algorithm cannot find the optimal solution of the hybrid model, because circuits added in parallel to existing ones must follow Kirchhoff's second law; (2) the algorithm performs optimal additions until iteration number 8, when only circuit n_19-21 maintains the value 0.016; (3) the additions executed until iteration 8 are optimal for the hybrid linear model and for the DC model; although an optimal solution for the hybrid linear model has already been found at that point, the algorithm indicates the need for a new circuit because of the KVL, which must be followed; (4) in the last step there is an erroneous choice of a circuit indicated by the LP, which has an insignificant value; and (5) the final topology is feasible for the DC model.

Brazilian North-Northeastern System

The system has 87 buses, 183 circuits and 29748 MW of load for the entire planning horizon. The system data are in Ruben et alii (2002); the available data allow planning without generation rescheduling and a multi-stage planning with two stages. The system is very complex and the optimal solution is unknown. 1998 is considered the base year, with Plan P1 in 2002 and Plan P2 in 2008. Consequently, the circuits needed for 2002 are costed at their 1998 (original) values, while the circuits needed in 2008 are considered as being built in 2002 and have their costs referred back to the base year. In the tests the discount rate used is I = 10%; therefore, the costs of a transmission line added in P2 are multiplied by 0.683.

Considering Plan P1 and Plan P2 separately, the modified hybrid algorithm finds the corresponding topologies. In the multi-stage planning, the algorithm converges after solving 175 LPs, removing one circuit in phase 2 of the first stage and one circuit in phase 2 of the second stage. The total investment cost is v = US$ 2,405,256,000.

ANALYSIS OF THE RESULTS

The proposed constructive heuristic algorithm for the hybrid linear model finds good-quality topologies for the tested systems. For small or medium-sized systems, the proposed CHA finds quasi-optimal solutions. However, the CHA's performance becomes less efficient as system complexity increases. It must be noticed that the critical phase occurs at the end of the process, when all the nonzero n_ij given by the LP solution are small. In this condition, the sensitivity criterion used becomes inefficient, because a small nonzero n_ij can introduce an expensive circuit when the addition in path i-j is made with n_ij = 1. This problem was already observed for the CHA presented in Ruben et alii (2003). Getting around this problem in an adequate manner would represent a significant contribution to CHAs that use the solution of a relaxed model as a sensitivity indicator.

The topologies found by the CHA present many attractions in the present context of transmission system planning: (1) the proposed solutions are closer to the DC model's optimal or sub-optimal solutions and, consequently, present less load shedding than the HLM's optimal solution; (2) these solutions can be used as bounds in branch-and-bound algorithms to find the HLM's optimal solution; and (3) the topologies can be used to generate high-quality initial topologies for meta-heuristic methods such as genetic algorithms, or elite topologies for tabu search algorithms (Da Silva et alii, 2000 and 2001; Gallego et alii, 2000). Additionally, a detailed bibliography on the planning problem can be found in Latorre et alii (2003).
CONCLUSIONS

This paper presented a version of a constructive heuristic algorithm (CHA) for the hybrid linear model (HLM). The proposed algorithm was extended to multi-stage planning, where excellent results were obtained. The tests showed the efficient performance of the algorithm and also the pitfalls that can occur with this kind of algorithm. The version presented proved to be fast and robust. Finally, it is important to observe that, besides its conventional use in transmission network synthesis, the CHA is especially suited to generating good-quality initial topologies for evolutionary algorithms, or elite topologies for a tabu meta-heuristic search. Another application is the generation of excellent bounds for branch-and-bound algorithms.
Caring animals and care ethics

Are there nonhuman animals who behave morally? In this paper I answer this question in the affirmative by applying the framework of care ethics to the animal morality debate. According to care ethics, empathic care is the wellspring of morality in humans. While there have been several suggestive analyses of nonhuman animals as empathic, much of the literature within the animal morality debate has marginalized analyses from the perspective of care ethics. In this paper I examine care ethics to extract its core commitments to what is required for moral care: emotional motivation that enables the intentional meeting of another's needs, and forward-looking responsibility in particular relationships. What is not required, I argue, are metarepresentational capacities or the ability to scrutinize one's reasons for action, and thus being retrospectively accountable. This minimal account of moral care is illustrated by moral practices of parental care seen in many nonhuman animal species. In response to the worry that parental care in nonhuman animals lacks all evaluation and is therefore nonmoral, I point to cultural differences in human parenting and to normativity in nonhuman animals.

Introduction

Are there nonhuman animals who can be credited with morality? The empirical strand of the animal morality debate points to the existence of (proto-)moral capacities such as empathy, altruism, and inequity aversion, and prosocial behaviors like helping, consolation, fair play, and cooperation in a range of nonhuman animal species (for discussions see de Waal 1996; Bekoff & Pierce 2009; Vincent et al. 2019; Andrews 2020b, ch. 9). The philosophical strand of the debate on morality in nonhuman animals, however, struggles to affirm these observations as evidence of morality, because morality is traditionally conceived of in anthropocentric and highly intellectualistic terms. Much of the Western ethical tradition has taken morality to be based on the ability to understand and scrutinize moral rules, principles and duties, which limits it to humans. Reflection on one's moral motivation, which involves metacognition, is thought to be a crucial feature of morality. This leaves the debate on nonhuman animal morality at an impasse: on the one hand, we have what some take as compelling empirical evidence of behavior that deserves the label 'moral' in nonhuman animals. A core argument here is to evaluate human and nonhuman behavior by the same standards. On the other hand, our philosophical concepts of morality fail to account for morality in nonhuman animals, because they assume human superiority. Some headway has been made towards acknowledging morality in nonhuman animals by scientists and philosophers who argue for the de-intellectualization of these traditional Western notions of morality (e.g. Waller 1997; Bekoff & Pierce 2009; Rowlands 2012; Monsó 2015; Monsó et al. 2018; Rutledge-Prior 2019; Behdadi 2021; Monsó & Wrage 2021). They argue for minimal accounts of moral agency that do not require moral judgement as metarepresentational scrutiny of reasons for action. Some accounts instead acknowledge emotionally motivated behavior as potentially moral and suggest we turn our focus to empathy as a moral motivation (Waller 1997; Rowlands 2012; Andrews 2013; Andrews & Gruen 2014; Monsó 2015).
In this paper, I propose that instead of making traditional moral theories bend to this de-intellectualized concept of morality to include nonhuman animals, there is a more direct way to move the animal morality debate forward: care ethics, as a sentimentalist moral theory, acknowledges moral emotions and situates morality in particular relationships instead of in the realm of abstract and impartial reason. It may thus be more open to the inclusion of nonhuman animals as moral agents. Care ethics is an ethical framework that claims that care, or the intentional meeting of another's needs, is central to morality. In this framework, emotions such as empathy and sympathy play the main role in moral motivation; paradigmatically, we provide care for others because we care about them emotionally (e.g. Noddings 1984/2013; Tronto 1993, 102f; Held 2006, 30). Moreover, care ethicists argue that care is not just one among many moral practices, but that caring relationships are the biological root of any moral concern (e.g. Baier 1987/2002). This argument regarding the biological origins of morality provides a straightforward opportunity to talk about morality in nonhuman animals.

However, care ethics cannot be applied to nonhuman animals 'off the shelf,' because it is situated in a larger anthropocentric framework. Thus, I will examine care ethics and extract what I take to be its core requirements for moral care: emotional motivation that enables the intentional meeting of another's needs, and forward-looking responsibility in particular relationships. What is not required, I argue, are metarepresentational capacities or the ability to scrutinize one's reasons for action, and thus being retrospectively accountable, which would exclude (most) nonhuman animals. I illustrate this minimal account of moral care by considering parental care in many nonhuman animal species.

The structure of the paper is as follows: in Sect. 2, I review how care ethicists understand moral care. I identify the claim that moral care requires second-order reflection as an obstacle to the inclusion of caring animals, and point out potential pitfalls of this standard. I propose to widen this account to make it more inclusive of human as well as nonhuman animal practices of care that are emotionally motivated in Sect. 3. Moral care in a care ethics framework, I propose, minimally consists of the empathically motivated meeting of another's needs and being prospectively responsible in relationships. I use the terms 'emotional care' or 'empathic care' to refer to this nonreflective form of care. Section 4 reviews the empirical data on this care in nonhuman animals, focusing on parental care. I conclude that many nonhuman animals can be credited with moral care thus understood, and give an outlook on possible ethical implications in Sect. 5. On this note, before I begin, I want to acknowledge that some of the research I cite in this paper to illustrate nonhuman animals' caring capacities is ethically problematic, because it lacks respect for caring animals as such and for their relationships. I refer to this research with regret and in hopes that my argument inspires care ethical criticism of this lack of care for caring animals.

Moral care

In this section, I give a brief introduction to care ethics and summarize how care ethics understands moral care, or 'what it takes' to participate in care as a moral practice.
I then propose to widen this account to make it more inclusive of human practices of care as well as care in nonhuman animals. I mainly refer to foundational texts of care ethics, as these texts established what it means to be a moral caregiver. These standards are the relevant frame of reference in arguing for moral care in nonhuman animals, although they were developed with humans in mind, sometimes even in explicit distinction from other animals. Moreover, as I write this paper as a contribution to the interdisciplinary debate on nonhuman animal morality, I limit myself to a few main positions in care ethics to give an overview. I view the care ethical literature focused on nonhuman animals as supplementary here. This is because it mostly focuses on the moral standing of nonhuman animals as recipients of human caring, not on nonhuman animals' own agency in the practice of care (see Donovan & Adams 1996; 2007; these authors develop the idea that humans owe care to nonhuman animals, and that this can be the basis for an animal ethics beyond individual rights). An exception is Gruen (2015), who points out the value of caring relationships that do not involve humans in chimpanzees, and thus hints at nonhuman animals' own 'entanglement' in their intraspecific relationships of care. Indeed, an important upshot of my argument for moral care in nonhuman animals is the respect that humans can be argued to owe to caring animals and their relationships. However, considering these ethical implications is, unfortunately, beyond the scope of this paper.

Moral care according to care ethics

Care ethics is a relatively young moral theory that emerged as a critical response to traditional moral philosophy in the 1980s. Feminist moral philosophers began to criticize the traditional focus on the entitlements of autonomous agents and argued for a more relational approach to morality that revolves around care and responsibility. Care ethicists view care as a moral practice that is fundamental to morality. Care is a practice in that paradigmatic care is not just an internal state or attitude, caring about, but entails behavior that is motivated by this internal state, caring for (Noddings 1984/2013; Tronto 1993, 102f; Held 2006, 30). This practice is an integral part of human social lives, and parental care has a special place in this practice, as it is crucial for children's social development (e.g. Feldman 2011). We need good parental care to become capable of being invested in others' wellbeing, and thus become moral beings. Therefore, care is fundamental to a moral society in two ways: as a continuous practice that responds to others' needs, and as a precondition for being able to engage in this practice.

In its conception of care as a moral practice, care ethics decidedly centers on real life and aims to capture everyday care, not the exceptional. Within this everyday care, care ethicists acknowledge emotion as a valid (part of) moral motivation (e.g. Baier 1987/2002; Held 1993, 52). In fact, familial relationships and the entailed emotional attachments are taken to be the wellspring of moral concern and the paradigm of moral care by care ethicists: these particular relationships, Baier (1987/2002) explains, "[give] rise to moral obligations more self-evident than any obligation to keep contracts." The moral obligations that emerge in close relationships are so self-evident, so compelling, precisely because they stem from sentiment instead of pure reason. Accordingly, Held (1993, 52) writes, care ethics "will embrace emotion as providing at least a partial basis for morality itself."
With this acknowledgment, care ethics stands in the sentimentalist tradition. Sentimentalists such as Hume, Baier points out, "[endorse] the emotional response to a fully realized situation as moral reflection at its best, not as one of its underdeveloped stages" (Baier 1987/2002). The emotions care ethicists usually refer to as adequate motivation for care are sympathy (Noddings 1984/2013; Baier 1987/2002; Collins 2015, 23ff) or empathy (Held 2006; Slote 2007). Although each author gives reasons for preferring one over the other, and there is a larger debate on the distinctions between them, for space reasons I will focus on the common ground: the emotion motivating care is crucially a 'feeling with the other' that centers their needs. For the sake of simplicity, and because I want to bring empirical data on care in nonhuman animals into this debate, I will henceforth use the term 'empathy' to refer to the emotional motivation that is crucial in care ethics.

Empathy is generally a heterogeneous concept that can be understood in highly intellectualistic terms to involve Theory of Mind (e.g. considering another's mental states in comparison to one's own), projection (putting oneself in the other's shoes) or other forms of second-order reflection (Decety et al. 2016, 2). However, it can also be understood in less intellectualistic terms. For instance, emotional contagion, which is widely regarded as a minimal form of empathy (de Waal 2008; Decety et al. 2016; Bartal et al. 2011), merely involves the involuntary 'catching' of another's emotion, and does not involve any cognitively complex capacities like perspective-taking (Meyza & Knapska 2018). Care ethicists tend to occupy a middle ground between these two poles in their understanding of empathy, and emphasize direct perception (e.g. Noddings 1984/2013), attentiveness (e.g. Tronto 1993, 127f), and responsiveness (e.g. Tronto 1993, 134f; Held 2006, 15; ibid., 24) over projection as central features of empathic care. In fact, some care ethicists argue that projective perspective-taking without actual 'feeling with the other' bears the risk of overshadowing the other's actual needs and is thus less desirable (Noddings 1984/2013; Gruen 2015, 56f). Crucially, empathy must be about the other, not about oneself, and this mark can be missed with a cognitively 'too primitive' as well as a 'too sophisticated' form of empathy. Capturing this desired blend of emotional involvement and other-directedness, Noddings (1984/2013) calls the state that caring ideally induces in the caregiver "engrossment" or "motivational displacement." This is a notion of empathy that may not necessarily require metarepresentation (more on this in subsection 3.1); however, care ethicists hold that care must involve some degree of reflection to count as moral. While emotion is argued to be at the root of morality, rational reflection refines this emotion, as it enables one to question the adequacy of care. Held (2006, 10) holds, for instance, that "raw emotion" cannot be a guide to morality, "feelings need to be reflected on and educated." Moreover, care needs to be "subjected to moral scrutiny and evaluated" (ibid., 11, emphasis in original).
This 'moral judgment' is intended to help avoid misguided or misplaced care, since purely emotional care is at risk of becoming excessive or controlling (ibid., 11), and is unreliable as a moral compass. Baier (1987/2002), referencing Hume again, calls the core moral capacity "corrected sympathy," i.e. an emotional motivation that has been reflected on, and understands mere emotional motivation as a proto-moral capacity (ibid., 246-7). Ruddick (1980, 347) calls the crucial blend of emotion and reflection in what she takes to be paradigmatic moral care "maternal thinking," which is informed by "the intellectual capacities [one] develops, the judgments [one] makes, the metaphysical attitudes [one] assumes, the values [one] affirms." In her introduction, Noddings (1984/2013, 3) even promises not to "bog down in sentiment." Thus, while emotion is at the root of morality according to care ethics, it is not enough for fully moral behavior but needs to be reflected on.

From this rough summary we can discern that there is ground for hoping that empathic care in nonhuman animals could be acknowledged as moral by care ethics, but there are also hindrances to this proposition. On the one hand, promisingly, the evolutionary account of morality that is implied in arguments regarding the origin of morality in affective familial relationships is open to arguments for morality in other social species. Many nonhuman animals have, as I will show, strong familial bonds and provide extensive parental care. If this setting is where morality emerges in humans, this is also where we could find morality in nonhuman animals. Moreover, care ethicists see emotionally motivated care as the paradigmatic form of care. Care ethics' relatively modest definition of empathy, and its situatedness in particular relationships instead of universal abstractions, suggest a concept of moral care that is in principle attainable for nonhuman animals. On the other hand, however, care ethicists do insist on varying degrees of reflection, sometimes second-order thought, as a corrective to emotionally motivated care, which sets the cognitive bar too high for nonhuman animals. However, I think this intellectualization already causes tension in human-centered care ethics, because it seems to backpedal on the recognition of emotion as moral motivation that care ethicists themselves base their entire case on. In the next section I will therefore consider the cost of this intellectualization of care that ties morality to metacognition.

Pitfalls of making moral care reflective

While its fundamental reliance on emotions makes care ethics potentially inclusive of caring animals, its later recourse to second-order reflective thinking excludes (most of) them. Granted, reflective care may make for salient examples of moral care. However, this narrow account of moral care risks reducing morality to its most rarefied form even in humans and ignoring the morality of everyday social interactions. Andrews and Gruen (2014, 209) write:

Once we are able to look past the most salient examples of human morality, we find that moral behavior and thought is a thread that runs through our daily activities, from the micro-ethics involved in coordinating daily behaviors like driving a car down a crowded street (Morton 2003), to the sharing of someone's joy in getting a new job or a paper published.
If we ignore these sorts of moral actions, we are overintellectualizing human morality…

Care ethics criticizes this reduction of morality to the extraordinary when it points out that the abstract moral realm concerned with justice and autonomous agents is impossible without the ubiquitous moral groundwork of care that produces moral beings in the first place (Held 1993, 55). However, it risks reproducing this overly narrow view of morality if it ties the morality of care to a capacity that neurotypical adult humans generally possess, but may not typically utilize in their caring. If our moral behaviors do not typically involve metacognition, it is unclear why these atypical instances should be hailed as the benchmark of morality, instead of as merely one moral mode among others. Reflection can help make up for a lack of experience, or when we consciously want to change how we care, but the goal seems to be to become, in fact, someone who can provide care spontaneously. It would thus be undesirable for care ethics to limit moral care to reflective care. In many cases, we simply do not need second-order reflection to provide adequate care. This is also because the adequacy of care not only hinges on the wider, oftentimes abstract context, but is something the care recipient can give direct feedback on, which is why empathic capacities that make someone sensitive to others' emotional states are so crucial for care, unlike reflection. In fact, in some cases, "the moral lustre is tarnished if deliberation intervenes," as Waller (1997, 344) puts it. Having "one thought too many" (Williams 1981, cited after Wolf 2012) is worse than having one thought too little when it comes to care, because the latter, despite being erroneous, still speaks to a good moral character.

Minimal moral care

If we give up reflection as the criterion for morality, what are the minimum capacities required for care so that we can still call it moral care in a care ethics framework? There are two authors outside the care ethical tradition I am aware of who have made some suggestions. First, Rowlands (2012, 38, endnote 14) mentions that his de-intellectualized notion of morality as the right kind of nonreflective emotional motivation without moral responsibility is "consistent" with "some versions of care ethics." However, Rowlands does not elaborate on this. Second, Waller (1997) sees care ethics as a framework that points to the virtuousness of 'nonreflective' intentional behaviors and thus to nonhuman animal morality. Yet he does not identify obstacles within care ethics to this view, as his focus is on the exclusion of nonreflective moral behaviors from traditional accounts of morality. Moreover, neither of the two authors connects this to parental care in nonhuman animals or considers possible ethical implications. I hope to add to these accounts here. Based on Rowlands and Waller, I propose that the question of moral care hinges on whether a creature intentionally engages in care based on the right motives. In addition, unlike Rowlands and Waller, I think we need to ask whether the care ethical notion of responsibility can be applied to these caring creatures in a meaningful way. I take this as the minimal standard for what it means to fully participate in the moral practice of care. In the following subsections I thus argue for nonreflective empathy as 'the right kind of motivation' for care (3.1), and show that nonreflective caring animals can still be responsible in a sense that is relevant for care ethics (3.2).
Taken together, this provides a de-intellectualized account of moral care that is consistent with care ethics despite being nonreflective. I use the term 'nonreflective' as a shorthand for 'not involving metacognition,' not to mean that there is no cognition involved, as I do understand emotion to involve cognition. I also address two objections from care ethicists against the continuity of moral caring capacities in nonhuman animals (3.3).

Nonreflective empathy

Empathy is noted as a core emotional motivation for care by care ethicists. In this subsection I specify the concept of empathy that is sufficient as a moral motivation for minimal moral care. This is a concept of empathy that does not require metacognition and is in line with the care ethical understanding of empathy as a 'feeling with the other' that is other-directed instead of self-directed (see Sect. 2). Moreover, as I have noted above, care ethicists view the direct perception of the other's utterances and behavior as crucial for the caregiver's empathic process. Monsó (2017) makes the case for a "minimal moral empathy (MME)" (ibid., 215) as a moral emotion that matches these ideas, and which I will therefore apply here. Care ethics already refers to moral emotions as part of moral motivation, but I think they can serve as moral motivation on their own. I understand moral emotions the following way: moral emotions have evaluative content regarding a morally relevant feature of the world, and are experienced as the result of a reliable normative sensitivity, a "sensitivity to the good- and bad-making features of situations," such as others' happiness or suffering (Rowlands 2012, 230). Examples of (positive and negative) moral emotions besides empathy would be gratitude, jealousy, schadenfreude, or cruelty. Crucially, these emotions have a moral content whether or not the being experiencing them can intellectually entertain a relevant moral proposition such as e.g. 'Your suffering is a good thing' (schadenfreude), because moral emotions "track" moral propositions (ibid., 58ff). Tracking here denotes an asymmetric truth-preserving relation between moral propositions and moral emotions (ibid.). Following Rowlands (ibid.), a moral emotion tracks a moral proposition if the truth of said proposition guarantees the truth of said moral emotion "in virtue of the fact that there is a reliable asymmetric connection between the concepts expressed by [said proposition] and the concept expressed by [said moral emotion]." For instance, schadenfreude as 'joy elicited by another's suffering' tracks the moral proposition "Your suffering is a good thing." A being need not entertain the latter proposition to experience schadenfreude and thus to behave morally motivated by it. They only need the capacity to recognize that another is suffering, and to revel in it. Thus, whenever someone acts motivated by such nonreflective moral emotions, they act morally, although they are not intellectually entertaining any relevant moral proposition themselves.
Against the background of this definition by Rowlands (2012), Monsó (2017, 350) defines MME the following way:

Creature C possesses minimal moral empathy (MME) if: (1) C has an ability to detect distress behaviour in others, and (2) due to the action of a reliable mechanism, the detection of distress behaviour in others results in a process of emotional contagion that (3) generates a form of distress with the other's distress behaviour as its intentional object, built into which is (4) an urge to engage in other-directed affiliative behaviour.

Conditions 1 and 2 constitute the normative sensitivity to a bad-making feature of a situation; conditions 3 and 4 are the features of MME that grant that it 'tracks' the moral proposition "This creature's distress is bad" (ibid., 351). MME is intentional, albeit in a nonreflective sense (condition 3): a being with MME is not just distressed when another is distressed, but distressed that another is distressed (Monsó 2017, 348). This aboutness of the distress experienced via emotional contagion is evident in the motivation to do something to alleviate it in the other, which leads to other-oriented care behavior (e.g. consolation or helping behavior), instead of e.g. removing oneself from the situation to alleviate self-directed distress (ibid., 351). Thus, the other's distress becomes a reason for care behavior (ibid.). This minimal notion of empathy is highly compatible with the focus care ethicists have put on attentive behavior-reading over projective mindreading in their definition of empathy, and with the paradigmatic understanding of care as involving 'caring about' and 'caring for,' as MME entails the urge to care. Moreover, MME can be based on mere emotional contagion, the spontaneous catching of another's affect (Meyza & Knapska 2018), as a reliable mechanism to be affected by another's distress. Although emotional contagion is widely regarded as merely a basis for or the simplest form of empathy, the other conditions of MME ensure that we are speaking of a form of empathy that is actually intentional and other-directed.

Nonreflective responsibility

I have argued that we can renounce reflection on one's motives, and instead rely on empathy to morally motivate care. Thus, empathic animals without the capacity for second-order reflection on their motives can care morally. Without this capacity for second-order reflection one is not morally responsible in the traditional sense, namely, one cannot be held retrospectively accountable, i.e. be praised or blamed. However, responsibility is a core maxim of care ethics. How do we reconcile this? In the animal morality debate it has been suggested that the question whether a creature can behave morally is independent of the question whether they can be morally responsible (e.g. Waller 1997; Rowlands 2012). This could be applied to the morality of care, but I suspect this approach will be unappealing for care ethicists. It is precisely the achievement and merit of care ethics to shine a light on our moral obligations beyond those that are traditionally centered, namely our 'special obligations' (Walker 2007, 83). Responsibility for particular others as a result of and response to dependence is its overarching maxim (Collins 2015). However, we shouldn't prematurely assume that this notion of responsibility necessarily relies on second-order reflection and is thus beyond the reach of nonhuman animals.
Responsibility according to care ethics means being responsive to others' needs and continuously tending to one's relationships; it is a caring response to factual dependence (e.g. Tronto 1993, 79ff; Held 2006, 10ff; see also Collins [2015, 88ff] for an overview of notions of responsibility in care ethics). This responsibility is primarily prospective and towards particular others, instead of retrospective and universal/abstract, and has also been understood as responsiveness, attentiveness, or caring perception (Tronto 1993, 127ff; Gruen 2015, 3; ibid., 34). Responsibility thus understood is an inherent feature of caring relationships, because by building and maintaining relationships of care, one is behaving responsibly. Caring animals are, thus, also responsible animals.

Two objections: empathic care is automatic and lacks evaluation

There are two instances I am aware of where care ethicists have made an explicit effort to exclude caring animals, which I want to respond to here. If caring affect in familial relationships is the biological root of moral concern (see especially Noddings 1984/2013; 2010; Baier 1987/2002; Waller 1997), a crucial implication is the issue of phylogenetic continuity, i.e. that this moral capacity may be found in other species. Noddings and Held do acknowledge this continuity between the caring capacities of human and nonhuman animals, but argue that something still sets human caring apart: Noddings (1984/2013) distinguishes between nonmoral "natural care," i.e. empathic care, and "ethical care," which combines the emotional motivation seen in natural care with a reflective affirmation of that motivation. For Noddings, reflection is not primarily a corrective, but linked to a recognition of duty, which she admits to be a Kantian notion of morality (ibid., 80). To her, 'ethical caring' is meant as a failsafe for those situations where we do not care naturally, but should (ibid., 81f). Since nonhuman animals lack the capacity to reflectively motivate their care in the absence of an emotional motivation, she concludes that their care is nonmoral (ibid., 79). However, by introducing this distinction we do not yet get an argument why 'natural care' would be nonmoral; it just contrasts it with a form of moral care that is salient when framed in a traditional understanding of morality. Held, in turn, argues that human caring overcomes its 'naturalness' and sets humans apart from caring animals, because human parents (consciously) educate their children morally:

Human mothering is a far different activity from the mothering engaged in by other animals …. Human mothering shapes language and culture, it forms human social personhood, it develops morality. Animal behavior can be highly complex, but it does not have built into it any of the consciously chosen aims of morality. In creating human social persons, human mothering is different in kind from merely propagating a species. (Held 1993, 55, my emphasis)

I see two interrelated potential problems with this response: it overemphasizes similarities across human cultural practices and underemphasizes similarities between humans and nonhuman animals. Held gives a rather narrow depiction of human parenting that best matches the fairly intellectualized parenting that has only relatively recently become necessary in increasingly individualistic cultures, and that, in these contexts, some have privileged access to.
The singular mother appears as the sole source of parental care, including moral education. Parents in more traditional communitarian cultures and societal classes with closer community support, however, may trust and rely on the knowledge of more experienced parents. Hrdy (2009, ch. 3) argues this to be the (pre-)historically older and more widespread version of parenting, because our hunter-gatherer ancestors could not have survived without relying on alloparenting, on a community of many parents, to raise infants. In such a context, where parenting does not constantly have to be 're-invented,' the commitment to any "aims of morality" may be less explicit than Held (1993, 55) makes it sound. It may rather consist of a (nonreflective) continuation and practical endorsement of social group norms, some of them moral norms. Thus, the individualistic parenting that could perhaps distinguish human parenting from nonhuman animal parenting is a relatively young cultural aberration. Since communitarian parenting is not uniquely human (Hrdy 2009, ch. 6), excluding nonhuman animals from morality would mean biting the bullet of excluding humans to a highly implausible extent. Moreover, there are examples of nonhuman animals educating their young morally, albeit nonreflectively. Nonhuman animal children need to learn group norms, how to behave properly towards others depending on their status, how to play fairly, and so on (Andrews 2020a). Care behavior itself is partially learned, including parental skills (e.g. Champagne & Meaney 2001) and consolation behavior (Clay & de Waal 2013). Thus, the difference between the human and nonhuman animal capacity for the moral education of their young is gradual. Importantly, while the capacity for reflective commitment to morality may, indeed, set some practices of human caring apart from care in other animals, this does not prove that this second-order reflection is what distinguishes moral from nonmoral care. I suspect that this assumption rests on a conflation of the capacity for moral behavior with the capacity to consciously adhere to an ethical framework. However, we do not have to be care ethicists to care morally. To borrow an analogy from Andrews et al. (2018, 90), this would be akin to setting poetry as the benchmark for language capacities, and it would backfire in the human case as well. Ultimately, both Held's and Noddings' arguments point to features that may distinguish some forms of human caring from care in other animals, but they fail to prove that this distinction implies that only the former is moral.

Caring animals

I have made the case that nonreflective care can be acknowledged as moral in a care ethics framework when it is motivated by a minimal form of empathy. This kind of care entails responsibility understood as prospective responsiveness. In this section I will take a look at the potential for and prevalence of moral care thus understood in nonhuman animals. Because care ethicists explicitly accept that moral concern emerges in the context of kin relationships, I will turn to our current scientific knowledge about parent-offspring relationships in nonhuman animals, and show that at least some of these cases fit the criteria for moral care. I also briefly outline how parental care in nonhuman animals shapes the capacity for forms of moral care outside this primary relationship, which likewise coincides with what care ethicists value about human parental care, i.e. that it produces moral beings.
Lastly, I illustrate the notion of responsibility in nonhuman animals with some examples.

Parental care as moral care in nonhuman animals

Parental care for offspring is widespread in nature. Across many species, infants need some level of care to survive their first days, months, or even years (Pianka 1970). Decety et al. (2016) argue that extensive parent-offspring relationships are one of the primary contexts for empathy to have evolved, because increased sensitivity to offspring needs boosts the quality and effectiveness of parental care and thus leads to increased fitness. This is supported by increasing evidence that empathy is "phylogenetically ancient, probably as old as mammals and birds" (de Waal 2008, 279). Likewise, Decety et al. (2016, 1) conclude that empathy is "common to humans and many animals," with empathy understood as "the natural ability to perceive and be sensitive to the emotional states of others, coupled with a motivation to care for their well-being." This is compatible with our concept of empathic care motivated by MME. Indeed, emotional contagion as a basic form of or precursor to more complex forms of empathy has been found in many nonhuman animals, for example in pigs (Reimert et al. 2015; Goumon & Špinka 2016), chimpanzees (Parr 2001), geese (Wascher et al. 2008), dogs (Huber et al. 2017; Quervel-Chaumette et al. 2016; van Bourg et al. 2020), mice (Langford et al. 2006; Jeon et al. 2010), rats (Knapska et al. 2006; Atsak et al. 2011), prairie voles (Burkett et al. 2016), and chickens (Edgar et al. 2011). Moreover, some nonhuman animals have been speculated to possess more complex forms of empathy that involve a degree of perspective-taking, e.g. cows (Ede et al. 2020, 7), cetaceans, some primates (see Pérez-Manrique & Gomila [2018] for a review), and elephants (Bates et al. 2008). Thus, many nonhuman animals possess the empathic capacities necessary for moral care in the minimal sense that I have defended.

For the case of social mammals, we know that parental care also has a strong effect on social development, and the embodied nature of parental care is key for this (Harlow 1958; Harlow et al. 1965). In fact, the effects of parental touch on development seem to generalize across mammal species in two major regards, as Monsó and Wrage (2021) point out: firstly, parental touch has an influence on emotional self-regulation. It has an immediate soothing effect as well as a long-term effect on stress response, i.e. parental touch helps the infant regulate their arousal and predicts their capacity to self-regulate their stress response as an adult (Hertenstein et al. 2006). Secondly, parental touch has an influence on the capacity to form attachments. It is the crucial element that creates and maintains the bond between infant and primary caregiver, which, in turn, sets the stage for the capacity to form attachments more generally, and thus informs the occurrence and quality of future relationships (Feldman 2011; Hertenstein et al. 2006). Parental care thus shapes two capacities that are highly relevant for an individual's social life (see Monsó & Wrage 2021). This influence of parental care on emotional self-regulation and on the capacity to form attachments causally connects parental care to capacities of empathic care in social animals. To be able to care for another in these terms, one needs to be able to overcome a purely egoistical perspective, which requires the capacity to self-regulate (Monsó & Wrage 2021).
Being overwhelmed by distress is counterproductive to empathic caring, because it prevents the individual from being able to pay attention to others and empathize with them, as demonstrated in rats by Ben-Ami Bartal and colleagues, and in dogs by Sanford et al. (2018). Attention, in turn, can be further motivated by attachment. For example, paying attention to one's child or partner and the readiness to care for them comes more naturally than paying attention to and caring for a neutral stranger (van Berlo et al. 2020). Indeed, prairie voles, for example, will console their partners but not strangers (Burkett et al. 2016), rats are more likely to help familiar conspecifics than unfamiliar ones (Ben-Ami Bartal et al. 2014), and familiarity modulates emotional responses to distress calls in cockatiels (Liévin-Bazin et al. 2018). Taken together, a being with low emotional self-regulation and no or unstable attachments will be less likely to be able and motivated to care. This is also evident in the care behavior of parentally deprived nonhuman animals: orphaned bonobos, for instance, are less likely than mother-reared bonobos to console others in distress (Clay & de Waal 2013), and parentally deprived nonhuman animals show inhibited parental care towards their own offspring (in rhesus monkeys: Arling & Harlow 1967; in rodents: Champagne & Meaney 2001; Gonzalez et al. 2001; Kikusui et al. 2005). Taken together, empathic parental care in nonhuman animals is thus itself a moral behavior, but it is also a precondition for caring animals to develop as such. This corresponds to the care ethical idea that parental care produces moral beings who draw from their own experienced care to care for others.

Empathic care beyond the parent-infant relationship: helping and consolation

To put the effect of parental care on nonhuman animals' caring capacities into context, I want to briefly mention two forms of emotional care beyond the parent-infant relationship: helping and consolation. Both these behaviors may plausibly be shaped by experienced parental care (see above). First, helping is defined as intentional behavior to benefit another regardless of personal gain (Cronin 2012). While cognitively demanding considerations of fairness may sometimes play a motivating role in human helping, the literature on helping across species suggests that it has its origin in more basic emotional processes, namely empathy (Marsh et al. 2014). Empathic helping has been observed in bonobos (Melis 2018), chimpanzees (Yamamoto et al. 2012), dogs (towards humans: Sanford et al. 2018; van Bourg et al. 2020), dolphins (Park et al. 2013), elephants (Bates et al. 2008), humpback whales (Pitman et al. 2017), mice (Ueno et al. 2019), and rats (Ben-Ami Bartal et al. 2011; Carvalheiro et al. 2019). Second, besides helping, another relatively well studied form of emotional care in nonhuman animals is consolation behavior, which is defined as an increase in affiliative contact towards a conspecific in distress (Burkett et al. 2016). It is usually studied as a post-conflict behavior, where a bystander approaches the loser of a fight and affiliates with them (e.g. de Waal & van Roosmalen 1979), or, in the lab, as a behavior in response to a conspecific who experienced an aversive stimulus (e.g. Burkett et al. 2016). Consolation is likewise a form of emotional care, as it is thought to be motivated by empathic processes and seems to have a calming effect on the recipient, thus meeting their needs by helping them cope with a stressful situation.
Consolation is found in a range of nonhuman animals, including dogs (Quervel-Chaumette et al. 2016), dolphins (Yamamoto et al. 2015), corvids (Seed et al. 2007; Fraser & Bugnyar 2010), elephants (Plotnik & de Waal 2014), primates (de Waal & van Roosmalen 1979; Palagi et al. 2004; Cordoni et al. 2006; McFarland & Majolo 2012), and voles (Burkett et al. 2016). The available data on helping and consolation as well as underlying empathic capacities in a range of nonhuman animals from rodents to great apes to some birds is substantial. This should make clear that I am not basing my case for the acknowledgment of caring animals on a rare occurrence in a few select, maybe especially human-like species. 8 Indeed, it is not my goal to argue that some nonhuman animals are exceptionally similar to us and thus worthy of recognition by care ethics. Instead, I propose that the most compelling claims of care ethics regarding the phylogenetic and ontogenetic origins of morality naturally include many nonhuman animals. It is empirically untenable to anthropo-monopolize care, a capacity so fundamental to human and nonhuman animal social life.

Responsible animals

I have argued that care ethics emphasizes a notion of responsibility that is an inherent feature of empathic care. Thus, all examples of empathic care in nonhuman animals are also examples of responsibility in nonhuman animals. I still want to highlight some that I find especially compelling. Foremost, responsibility is evident in the ways in which parents adapt their behavior to the abilities of their young, or generally in the special consideration of adult group members towards infants, from appropriate play behavior to adjusting one's traveling speed to assuming responsibility for orphans. Furthermore, the policing of group conflicts, holding watch over others while they rest, alerting others of or teaching them about dangers, sharing food, and supporting ill, injured, or disabled group members are all instances of nonhuman animals taking responsibility. One striking example of responsibility in the context of allo-parenting can be found in cooperatively breeding marmoset monkeys. These marmosets form groups in which related or non-related individuals support a breeding pair in raising their offspring. Brügger et al. (2018) investigated the hypothesis that helpers "help more if group members can witness their interactions with the immatures," which would imply a motivation that has to do with social prestige or the "pay-to-stay" model, i.e. making visible contributions to the group to be granted continued membership (ibid., 1). They found that helper marmosets in fact increase their care behavior towards immatures when there are no other group members present. Brügger et al. interpret this behavior of the helpers "to reflect a genuine concern for the immatures' well-being, which seems particularly strong when solely responsible for the immatures" (ibid., my emphasis).
The affidavits submitted by leading primatologists in support of the Nonhuman Rights Project around Steven Wise, contending for the personhood of the chimpanzees Tommy, Kiko, Hercules, and Leo, explicitly support this framing of care behavior as responsible behavior: Goodall (2015) describes the long-term responsibility of chimp mothers for their young, or the responsibility that is often assumed by adult male chimps for orphans (ibid., 6ff); Savage-Rumbaugh (2015) describes chimpanzees' duties towards the group in the context of food sharing; she further writes: "In the case of chimpanzees and bonobos […], duties and responsibilities (and the moral imperatives they entail) are simply a part of everyday life." (ibid., 6). Christophe Boesch speaks of chimpanzees' "social obligations" in the context of defending territory, rescuing conspecifics, hunting, helping behavior, and alerting others of danger (Boesch 2015, 6ff). The care behavior, and thus responsibility, of other hypersocial animals like cetaceans, elephants, and corvids has a similar richness. To name just a few examples, dolphins have been observed to adopt infants of different species (Carzon et al. 2019), and to help a dying conspecific stay afloat by propping them up (Park et al. 2013). Elephants have been observed to make concerted efforts to help infants across difficult terrain or out of a ditch (Bates et al. 2008, 216f), and they form alliances to rescue infants who have been kidnapped (ibid., 215f). All these behaviors consist of nonhuman animals actively taking on the task of answering to others' dependence, and are thus instances of responsibility.

Conclusion and outlook: what care do we owe to caring animals?

I have argued that we can acknowledge nonhuman caring animals and their intraspecific relationships of care as moral in a care ethics framework, and that this already includes the parental relationships of many species. My argument is based on empirical data on empathic care in nonhuman animals, and on a de-intellectualized notion of morality, centered on emotional motivation, that has been put forth in the animal morality debate. This non-traditional understanding of morality is not only highly compatible with a sentimentalist theory such as care ethics, but care ethics becomes more plausible if it renounces its intellectualization of care. By adopting a de-intellectualized notion of morality inclusive of caring animals, care ethics avoids intellectualistic and anthropocentric bias, accounts more adequately for paradigmatic forms of moral care, such as spontaneous acts of care, and gains a more robust standing in relation to traditional moral theory. The animal morality debate, too, stands to gain further from this connection to care ethics that I make. An account of morality that situates it in particular relationships instead of in the realm of abstract moral deliberation is more amenable to the idea of nonhuman animal morality from the start. Moreover, this opens the door for ethical reflection: Finding or assuming moral capacities in nonhuman animals should mean something for our treatment of these moral animals, which can be argued with care ethics. Interferences with nonhuman animals' relationships and individual caring capacities are ubiquitous across all contexts of human-nonhuman animal interaction, especially in systems of use (see also Monsó et al. 2018; Cooke 2021).
Nonhuman animals in labs, zoos, on farms, and in our homes are routinely deprived of stable social bonds and/or autonomy in navigating their social lives, two basic conditions for care. Moreover, human activity routinely disrupts wild nonhuman animal communities. If care is a value, the relationships of caring animals as I have described them possess this value. Thus, the prevention, disruption, manipulation, or instrumentalization of caring relationships likely constitutes more than a subjective welfare harm, and may need to be addressed as an objective wrong. In turn, meddling with individual nonhuman animals' caring capacities may not even involve experiential harm, but it may still deprive them of leading a full moral life, of access to other values like trust and friendship that are (only) accessible through care, or of the meaning that self-determined caring creates. This concerns nonhuman animals whose capacity to care is diminished, e.g. on purpose in animal experimentation, or as a by-product of parental deprivation on farms, in zoos, in the pet industry, as a routine procedure in labs, and so on; but it also concerns nonhuman animals who are instrumentalized as caregivers and reduced to this capacity, for instance in dairy farming. Care ethics compellingly shows that morality is neither rare nor exceptional, but that it is, in the form of care, a basic thread that runs through our lives. We should embrace the idea that this is true for many other animals as well, and that, hence, the world is more caring than we currently recognize, but also more vulnerable when we fail to recognize this.
2022-05-10T06:23:19.530Z
2022-05-26T00:00:00.000
{ "year": 2022, "sha1": "c78f5b099a40e5b855eec0deb39b194ddcc32fdb", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10539-022-09857-y.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "479b65285b7c1e4b8f5971f727ac895b4d909593", "s2fieldsofstudy": [ "Philosophy", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
258888012
pes2o/s2orc
v3-fos-license
Transverse Momentum Distributions of Heavy Hadrons and Polarized Heavy Quarks

We initiate the study of transverse momentum-dependent (TMD) fragmentation functions for heavy quarks, demonstrate their factorization in terms of novel nonperturbative matrix elements in heavy-quark effective theory (HQET), and prove new TMD sum rules that arise from heavy-quark spin symmetry. We discuss the phenomenology of heavy-quark TMD FFs at $B$ factories and find that the Collins effect, in contrast to claims in the literature, is not parametrically suppressed by the heavy-quark mass. We further calculate all TMD parton distribution functions for the production of heavy quarks from polarized gluons within the nucleon and use our results to demonstrate the potential of the future EIC to resolve TMD heavy-quark fragmentation in semi-inclusive DIS, complementing the planned EIC program to use heavy quarks as probes of gluon distributions.

Introduction

Hadronization - the nonperturbative mechanism that confines quarks and gluons produced in high-energy collisions into the experimentally observed color-singlet mesons and baryons - is a key aspect of virtually any process involving Quantum Chromodynamics (QCD), but its fundamental description from first principles remains elusive [1]. In the quest for this fundamental understanding, the fragmentation of bottom and charm quarks to heavy mesons can play a vital role because the mass of the heavy quark imprints as a perturbative scale on the otherwise nonperturbative dynamics of hadronization. The unique properties of heavy quarks as color-charged, but perturbatively accessible objects make them ideally suited as probes of the hadronization cascade, effectively serving as a static color source coupling to the light degrees of freedom. An improved field-theoretic understanding of heavy-quark fragmentation will also benefit the description of heavy flavor in Monte-Carlo generators for the LHC [2], where many key searches and Higgs coupling measurements involve final-state charm or bottom quarks. A rigorous field-theoretic framework in which hadronization can be studied in detail is that of transverse momentum-dependent (TMD) fragmentation functions (FFs), for which all-order factorization theorems have been established [3]. Like collinear fragmentation functions, TMD FFs depend on the longitudinal momentum fraction $z_H$ that the hadron retains from its parent quark. In addition, they describe the transverse momentum that the hadron picks up by recoiling against other fragmentation products, including the full quantum correlations with the quark polarization, which provides a three-dimensional picture of the fragmentation cascade. For processes with initial-state hadrons, TMD FFs are complemented by TMD parton distribution functions (PDFs) describing the three-dimensional motion of quarks and gluons inside the nucleon. The TMD dynamics of light quarks and gluons are a well-established field of experimental study [4-14], phenomenological analysis (see e.g. refs. [15-18]), and progress towards first-principles calculations using lattice field theory [19-25]; for a recent comprehensive overview, see ref. [26]. Precision TMD measurements are a key physics target of the future Electron-Ion Collider (EIC) [27]. In this paper we study, for the first time, the TMD FFs of heavy quarks to heavy hadrons.
Our theoretical tool to analyze the fragmentation of heavy quarks is (boosted) Heavy-Quark Effective Theory (bHQET) [28-36], which has previously been applied to the well-understood collinear (or longitudinal) heavy-quark FFs [37-40]. We demonstrate that applying bHQET to TMD FFs gives rise to novel, universal matrix elements describing the nonperturbative transverse dynamics of light QCD degrees of freedom in the presence of a heavy quark (i.e., a static color source). While a large part of this work is devoted to developing this new theoretical formalism, we will also consider the phenomenology of heavy-quark TMD FFs in two distinct processes, $e^+e^-$ collisions and semi-inclusive deep inelastic scattering (SIDIS), which are illustrated in figure 1:

Figure 1: (a) Heavy-quark pair production in the back-to-back limit in $e^+e^-$ collisions, which gives access to heavy-quark TMD FFs. (Note that arrows indicate momentum flow, not fermion flow.) (b) Semi-inclusive deep inelastic scattering with an identified heavy hadron in the final state. This process gives access to individual heavy-quark TMD FFs convolved with a heavy-quark TMD PDF that can be perturbatively computed in terms of the collinear gluon PDF. (c) Heavy-quark pair production in electron-nucleon collisions, which we do not consider in this work. This process can probe the gluon TMD PDF, but is only sensitive to longitudinal fragmentation dynamics.

(a) Heavy quarks are copiously produced in $e^+e^-$ collisions. We are interested in the case where the quarks are produced relativistically, as is the case for charm quarks at existing B factories, such that their fragmentation processes are independent. The production rate is largest in the back-to-back region, where the cross section differential in the small hadron transverse momenta factorizes in terms of two unpolarized TMD FFs $D_{1\,H/Q}(z_H, k_T)$. Furthermore, the heavy-quark pair is produced in an entangled transverse polarization state in central events. This entanglement imprints on the distribution of final-state hadrons as an azimuthal modulation known as the Collins effect, whose strength is schematically given by eq. (1.1), where $H^{\perp}_{1\,H/i}$ is called the Collins function. The light-quark Collins effect has been measured in detail in pion and kaon samples by the Belle [41-43] and BaBar collaborations [44,45], but despite a proposal in ref. [44], a measurement (or search) of the heavy-quark Collins effect has not yet been performed. We will find that the heavy-quark Collins function encodes intricate nonperturbative physics, which motivates a dedicated measurement. 1

(b) A previously overlooked aspect of heavy-quark phenomenology in electron-nucleon collisions at the future EIC is that heavy quarks can be pair-produced in initial-state gluon splittings at small transverse momentum, with one quark e.g. going down the beam pipe and the other undergoing hard scattering and subsequent fragmentation into a heavy hadron, which in turn is reconstructed in a semi-inclusive measurement, as illustrated in figure 1 (b). Crucially, this process can be described by the standard TMD factorization for semi-inclusive deep inelastic scattering (SIDIS) when both the transverse momentum and the mass of the quark are small compared to the hard scattering energy Q. It thus retains the full sensitivity to the initial and final-state transverse momentum distribution (encoded in heavy-quark TMD PDFs and FFs) with respect to the photon direction.
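Referring back to the Collins modulation in item (a): since the discussion below notes that eq. (1.1), unlike the SIDIS asymmetry, involves two Collins functions, its structure can be sketched as follows (a schematic illustration only, assuming the standard $e^+e^- \to H_1 H_2 X$ back-to-back analysis; normalizations and the transverse-momentum convolutions are suppressed, and $\phi_0$ denotes the azimuthal decorrelation angle):

$$\frac{d\sigma}{dz_{H_1}\, dz_{H_2}\, d\phi_0} \;\propto\; D_{1\,H_1/Q}\, D_{1\,H_2/\bar Q} \;+\; \cos(2\phi_0)\; H^{\perp}_{1\,H_1/Q}\, H^{\perp}_{1\,H_2/\bar Q}\,.$$

In this form the asymmetry is quadratic in the Collins function, so a measurement of this type alone cannot determine its sign.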
To make phenomenological predictions for heavy-quark SIDIS, we fill a gap in the literature and compute all polarized heavy-quark TMD PDFs by perturbatively matching them onto collinear twist-2 nucleon PDFs, extending the analysis of the unpolarized heavy-quark TMD PDF in refs. [46,47]. 2 Interestingly, while many polarized TMD PDFs are strongly suppressed for heavy quarks, we find a nonzero leading result for the so-called worm-gear L function $h^{\perp}_{1L}$, encoding the production of transversely polarized (heavy) quarks from linearly polarized light quarks and gluons, which is possible because the quark mass breaks chiral symmetry. Indeed, the expectation that transverse quark polarization effects are suppressed by quark masses goes back to the early days of QCD [49], and we find that heavy-quark TMDs provide an arena to make this statement precise within twist-2 collinear factorization. For future phenomenology at the EIC, this nonzero conversion rate provides an exciting avenue to observe the heavy-quark Collins function, because the factorized SIDIS cross section contains an azimuthal spin asymmetry. Importantly, this asymmetry, unlike eq. (1.1), involves only a single Collins function multiplying the perturbatively predicted worm-gear L function, making it possible to extract its sign. We point out that an extensive heavy-flavor physics program is already being planned for the Electron Ion Collider (EIC), which will leverage heavy-quark pair production as a hard probe of gluon TMDs [50-55] and of cold nuclear matter [56,57], as illustrated in figure 1 (c). We stress that this is not the case we consider in this paper: In case (c), the transverse momentum imbalance, production rate, and distribution of the heavy-quark pair are sensitive to the initial-state gluon TMD PDF (or the nuclear collinear gluon PDFs), but on the fragmentation side are at most sensitive to the well-understood longitudinal momentum ($z_H$) distribution at leading power [53]. In contrast to this, the TMD processes in figure 1 (a) and (b) are directly sensitive to the nonperturbative transverse dynamics of heavy-quark fragmentation. 3 We note that the TMD fragmentation of light quarks to quarkonia has been studied in ref. [60], in that case by matching onto nonrelativistic QCD, and similarly for light-quark TMD dynamics in hard quarkonium production and decay in refs. [61,62].

1 ... mass. We contrast these claims with our findings in section 4.1.1.

2 Ref. [47] also computed all secondary quark mass effects on light-quark distributions, including mass effects in the Collins-Soper kernel, which drive the renormalization of many of the objects we introduce here beyond leading-logarithmic order. The much more involved $O(\alpha_s^2)$ secondary quark mass effects in the gluon TMD PDF were recently calculated in ref. [48].

The remainder of this paper is structured as follows: In Section 2, we analyze heavy-quark TMD FFs and identify the new bHQET matrix elements and perturbative matching coefficients that characterize the fragmentation dynamics. In Section 3, we discuss the all-order structure of matching polarized heavy-quark TMD PDFs onto collinear PDFs and explicitly compute the $O(\alpha_s)$ matching onto gluon PDFs. In Section 4, we use our results from the previous two sections to outline the prospects for heavy-quark TMD phenomenology at $e^+e^-$ colliders and the future EIC.
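Returning to the sign-sensitive SIDIS observable described above, its schematic structure (an illustrative sketch assuming the standard SIDIS azimuthal decomposition for a longitudinally polarized nucleon; convolutions and kinematic prefactors are suppressed) is

$$A_{UL}^{\sin 2\phi_h} \;\propto\; h^{\perp}_{1L} \,\otimes\, H^{\perp}_{1\,H/Q}\,,$$

where $\phi_h$ is the azimuthal angle of the detected heavy hadron. Because the (perturbatively predicted) worm-gear L function multiplies a single Collins function, both the magnitude and the sign of $H^{\perp}_{1\,H/Q}$ are in principle accessible.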
Important note on conventions: In an attempt to upset everybody equally, we will use QCD fields for writing down TMD correlators in this paper (thus hopefully making it accessible without background knowledge of soft-collinear effective theory), but will consistently make use of lightcone coordinate conventions as more commonly used in the SCET community (and also, for example, in bHQET). Specifically, we decompose four-vectors $p^\mu$ in terms of lightlike vectors $n^\mu$, $\bar n^\mu$ with $n^2 = \bar n^2 = 0$ and $n \cdot \bar n = 2$, such that e.g. $p^2 = p^- p^+ + p_\perp^2$. As another part of this convention, collinear momenta near the mass shell will typically have large components $p^- \gg p_\perp \gg p^+$. We always take transverse vectors with subscript $\perp$ to be Minkowskian, $p_\perp^2 \equiv p_\perp \cdot p_\perp < 0$, and denote their magnitude by $p_T = \sqrt{-p_\perp^2}$. Finally, we define the metric and antisymmetric tensor in transverse space as

$$g_\perp^{\mu\nu} = g^{\mu\nu} - \frac{n^\mu \bar n^\nu + \bar n^\mu n^\nu}{2}\,, \qquad \epsilon_\perp^{\mu\nu} = \frac{1}{2}\,\epsilon^{\mu\nu\rho\sigma}\,\bar n_\rho n_\sigma\,.$$

Our convention for the antisymmetric tensor is $\epsilon^{0123} = +1$.

3 A very interesting middle ground is occupied by ref. [58], which analyzed the transverse momentum spectrum of heavy hadrons inside groomed jets, also using bHQET. The resulting bHQET matrix elements in ref. [58] are predominantly longitudinal: Depending on the precise parametric regime, the dominant contribution to the transverse momentum either comes from perturbative collinear-soft modes stopping the soft-drop grooming algorithm [59], which factorize from the heavy-quark dynamics, or from bHQET modes subject to a primary soft-drop criterion and a secondary measurement on perturbative transverse momenta. (Compared to our results in this work for inclusive heavy-quark TMD FFs sensitive to the full transverse structure of the fragmentation cascade, this effect of the grooming is also responsible for the striking absence of Collins-Soper scaling reported in ref. [58].) Despite these differences in the experimental observable and theoretical structure, we find that it should be possible to establish a powerful connection between our results here and those of ref. [58]. Specifically, if $O(\Lambda_{\rm QCD}^2)$ power corrections from intrinsic transverse hadronic dynamics can be resolved even within groomed jets, as outlined around their eq. (5.9), this would access a second moment of the unpolarized bHQET TMD fragmentation factor we will introduce in section 2.3. If this connection can be made precise, it would suggest that the bHQET fragmentation factors we consider in section 2.3 can appear also in scenarios where the mass has been integrated out together with additional observables (i.e., the soft-drop criterion in this case).

2 TMDs for heavy quark fragmentation into a heavy hadron

Calculational setup and parametric regimes

We consider the fragmentation of a (possibly polarized) heavy quark Q into a hadron H that contains the heavy quark and carries momentum $P_H^\mu$. For this paper, we assume that the heavy hadron polarization is not experimentally reconstructed. We work in QCD with $n_f = n_\ell + 1$ flavors, where the $n_\ell$ massless quark flavors are denoted by q and the heavy quark Q has a pole mass $m \equiv m_c, m_b \gg \Lambda_{\rm QCD}$. We decompose $P_H^\mu$ in terms of lightcone momenta, boosted in the frame of the hard scattering; by definition $P_{H,\perp} = 0$, coinciding with the "hadron frame" for fragmentation [3].
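As a quick worked example of these conventions (added for illustration), any four-vector can be decomposed as

$$p^\mu = p^-\,\frac{n^\mu}{2} + p^+\,\frac{\bar n^\mu}{2} + p_\perp^\mu\,, \qquad p^- = \bar n \cdot p\,, \quad p^+ = n \cdot p\,,$$

and squaring, using $n^2 = \bar n^2 = 0$, $n \cdot \bar n = 2$, and $n \cdot p_\perp = \bar n \cdot p_\perp = 0$, the only surviving cross term is $2 \times \frac{p^-}{2}\frac{p^+}{2}\,(n \cdot \bar n) = p^- p^+$, which reproduces $p^2 = p^- p^+ + p_\perp^2$ as stated above.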
We are interested in the dependence of the fragmentation process on the total transverse momentum of additional hadronic radiation X into the final state, which is equal to the initial quark transverse momentum $k_\perp$ by momentum conservation, and Fourier conjugate to the transverse spacetime separation $b_\perp$ between quark fields. In position space, the TMD quark-quark correlator describing this fragmentation process is defined as in eq. (2.2), where $z_H$ is the fraction of the quark's lightcone momentum retained by H, $\beta$, $\beta'$ are the open spin indices of the quark fields, Tr denotes a trace over fundamental color indices, and $b \equiv (0, b^+, b_\perp)$. We have kept a sum over the possible hadron helicities $h_H$, which are not experimentally resolved, implicit in the constrained sum over states. The Wilson line $W(x)$ is defined as an anti-path-ordered exponential of gauge fields extending to positive infinity along the lightcone direction $\bar n^\mu$. For simplicity, we have suppressed the rapidity regulator, the soft factor, and transverse gauge links at infinity in eq. (2.2). The Wilson lines only depend on the direction of $\bar n^\mu$ and are thus invariant under $\bar n^\mu \to e^{\alpha} \bar n^\mu$. Taking $P_H^\mu$ and $\bar n^\mu$ to define $n^\mu$ and tracking the $\alpha$ dependence through the definition of $\Delta_{H/Q}(z, b_\perp)$ implies that the "good components" of $\Delta_{H/Q}$ [26,63], by which we mean the components of the fermion fields that are picked out by the projector $\slashed{n}\slashed{\bar n}/4$ acting on $\psi_Q$ and that appear in leading-power factorization theorems, transform as $\Delta_{H/Q} \to e^{-\alpha} \Delta_{H/Q}$ under this relabeling. 4 In terms of the correlator in eq. (2.2), the bare unpolarized ($D_{1\,H/Q}$) and Collins fragmentation function ($H^{\perp(1)}_{1\,H/Q}$) from the introduction are defined in position space via the Dirac traces in eq. (2.5), 5 where tr denotes a trace over spin indices. The unpolarized TMD FF encodes the total rate for producing an unpolarized hadron from an unpolarized quark, while the Collins TMD FF describes the strength of the correlation between the quark's transverse polarization and the direction of the hadron transverse momentum. The leading TMD fragmentation functions have been proven to be universal between processes [68], i.e., they are independent of whether the Wilson line points to the future ($e^+e^- \to$ hadrons) or the past (SIDIS). Note that these scalar projections of the TMD fragmentation correlator are invariant under $\bar n^\mu \to e^{\alpha} \bar n^\mu$ by construction. Since $\Lambda_{\rm QCD} \ll m$, the nonperturbative dynamics in the fragmentation process are constrained by heavy-quark symmetry in all cases, but differences arise depending on the hierarchy between these two parametric scales and the magnitude $k_T$ of the transverse momentum or, equivalently, the inverse of the transverse distance $1/b_T \sim k_T$. Broadly speaking, we will consider the two cases illustrated in figure 2. In case (a), which we analyze in section 2.3, $k_T \sim \Lambda_{\rm QCD}$ is generated during the nonperturbative fragmentation process itself, while perturbative emissions at the scale $m \gg k_T$ are suppressed. In this case, the heavy hadron carries almost all the longitudinal momentum provided by the initial heavy quark, while the $k_T$ dependence is carried by universal nonperturbative functions describing how the "brown muck" separates from other light hadronic final states. In this regime, "disfavored" fragmentation functions where the valence content of the identified heavy hadron does not match the initial heavy quark, e.g. $Q \to \bar H$, $Q \to h$, or $q, g \to H$, are forbidden by heavy-quark symmetry.
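As an aside for readers without the projections of eq. (2.5) at hand: in widely used momentum-space conventions (cf. ref. [66]), the two functions are obtained from Dirac traces of the correlator, schematically

$$\Delta_{H/Q}^{[\gamma^-]}(z_H, k_\perp) \;=\; D_{1\,H/Q}(z_H, k_T^2)\,, \qquad \Delta_{H/Q}^{[i\sigma^{i-}\gamma_5]}(z_H, k_\perp) \;=\; \frac{\epsilon_\perp^{ij}\, k_{\perp j}}{M_H}\, H^{\perp}_{1\,H/Q}(z_H, k_T^2)\,,$$

with $\Delta^{[\Gamma]} \propto \mathrm{tr}[\Delta\,\Gamma]$ and $M_H$ the hadron mass entering the conventional normalization mentioned in footnote 5. This is only an orientation sketch: the paper itself works in $b_T$ space, where the factor $\slashed k_\perp$ is integrated by parts into the $b_T$ derivative indicated by the superscript $(1)$ on the Collins function.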
To simplify the analysis, we will count

$$1 - z_H \sim 1\,, \qquad (2.6)$$

i.e., we will assume that the $z_H$ measurement does not probe the precise longitudinal momentum distribution near the endpoint, which is the case e.g. if $z_H$ is integrated over a sufficiently large bin or when taking Mellin moments of the $z_H$ distribution. While relaxing this modification on the fragmentation side would be straightforward and leads to interesting fully-differential bHQET fragmentation shape functions, this would also require consistently modifying the description of the opposite collinear sector by reinstating the transverse momentum dependence in the formalism of ref. [40] for $e^+e^-$ collisions, and by crossing the jet function in that reference into the initial state using the methods of ref. [69] for SIDIS, all of which we leave to future work.

Figure 2: Parametric regimes for the fragmentation of a heavy quark Q (green) into a single heavy hadron H with a measurement on transverse momentum. In case (a), the hadron picks up transverse momentum relative to its parent quark as the "brown muck" (brown) coalesces around the quark and splits from other soft hadronic radiation into the final state (orange). In case (b), the transverse and longitudinal momentum distributions are dominated by perturbative emissions at the scale of the heavy quark mass (teal). In this regime, the nonperturbative hadronization process is encoded in a normalization factor, while its effect on the shape of the momentum distribution is subleading.

4 In SCET this symmetry under the simultaneous relabeling $n^\mu \to e^{-\alpha} n^\mu$ and $\bar n^\mu \to e^{\alpha} \bar n^\mu$ is known as type-III reparameterization invariance [64,65] and is a manifest symmetry of the entire correlator because the bad components $(\slashed{\bar n}\slashed{n}/4)\,\psi_Q$ have been integrated out.

5 We use the same symbol for transverse momentum distributions in $k_T$ space and their Fourier transforms in $b_T$ space throughout this paper, as the meaning will always be clear from the context. Our conventions for Fourier transforms and the spin decomposition of TMD correlators follow ref. [66]. Note the superscript (1) on the $b_T$-space Collins function indicating a $b_T$ derivative that arises from integrating a term $\slashed k_\perp$ in the momentum-space correlator by parts, and that is specifically required due to the conventional normalization to the hadron mass [67]. For reference, the stated definition of the Collins function in position space is equivalent to a term $\mathrm{tr}[\cdots\gamma^\nu]$ appearing in the spin decomposition of the correlator.

In case (b), which we consider in section 2.4, the distributions in transverse and longitudinal momentum are determined by perturbative dynamics at the scale $k_T \sim m \gg \Lambda_{\rm QCD}$, while the dynamics of the nonperturbative bound state only contribute a normalization factor. In the case of the unpolarized TMD FF, this normalization factor admits an interpretation as the total probability for Q to fragment into H, as is well known for inclusive heavy hadron production cross sections [37,38,70]. In this case, the disfavored fragmentation functions for $Q \to \bar H$ or $q \to H$, and $Q \to h$ or $g \to H$, are perturbatively suppressed by $O(\alpha_s^2)$ and $O(\alpha_s)$ at the scale $\mu \sim m$, respectively. We stress that we continue to assume eq. (2.6) also in this regime, so longitudinal momentum distributions remain perturbative.
Review of Boosted Heavy-Quark Effective Theory The appropriate theory that describes the dynamics at the scale Λ QCD in either case and makes the heavy-quark symmetry manifest is boosted Heavy-Quark Effective Theory (bHQET) [35,36], i.e., the application of HQET [71] to heavy quarks produced in an energetic collision. The effective theory is constructed by integrating out the off-shell fluctuations of the heavy quark field at the scale m; these in particular include its antiquark component with energy gap 2m. The dynamic degrees of freedom are heavy-quark fields h v (x) that are labeled by the timelike direction v µ , which we choose to be the velocity of the heavy hadron, The tree-level matching of the massive QCD quark field onto h v at µ ∼ m reads: The h v (x) are implemented as Dirac spinors satisfying the projection relations For external states, the matching reads |H, h H ; X⟩ = √ m |H v , h H ; X⟩, and we use a nonrelativistic normalization convention for the bHQET states. In addition, the effective theory contains light-quark and gluon degrees of freedom that have isotropic momentum p µ ∼ Λ QCD in the rest frame of the heavy hadron. The tree-level matching for these is trivial; in particular, the Wilson line W (x) takes the same form in the effective theory, but consists of gluon fields that only have support on a restricted set of modes. For reference, the leading HQET Lagrangian is given by [71] where L light is a copy of the QCD Lagrangian with n ℓ massless quark flavors. The spin degrees of freedom of the heavy-quark can be explicitly decoupled from the light dynamics at leading power in 1/m by performing a field redefinition involving static Wilson lines Y v (x) [72,73], In this way, the heavy-quark Lagrangian becomes that of a free theory, in all external operators, acting as a static source of soft gluons. Specifically, the action of h v (x) on a product state in the decoupled theory is given by where s Q = 1 2 and h Q = ± 1 2 are the spin and helicity of the heavy quark, u(v, h Q ) = u(mv, h Q )/ √ m is an HQET spinor, and s ℓ , h ℓ , and f ℓ are the total angular momentum, helicity, and flavor content of the light degrees of freedom inside the hadron. (We will specify a helicity axis in the following section.) Note that the interpolating field for the light state on the right-hand side formally contains a future-pointing Wilson line to form an overall color singlet, which we suppress. Physical hadron states of definite angular momentum s H and helicity h H also have definite s ℓ , which is a good quantum number in the heavy-quark limit. In general, they involve a coherent sum over the helicity eigenstates in eq. (2.12), where we suppressed the common f ℓ and X, and ⟨s Q , h Q ; s ℓ , h ℓ |s H , h H ⟩ is a Clebsch-Gordan coefficient. (We take the coefficient to vanish for h Q + h ℓ ̸ = h H , i.e., one sum is always eliminated in practice by helicity conservation.) For the case of inclusive fragmentation, it has been known for a long time [38,71] that the factorized form of eq. (2.12) together with parity and eq. (2.13) implies relations between the fragmentation probabilities to different hadron states within the same heavy-quark spin symmetry multiplet, i.e., with the same s ℓ = 1 2 , 1, 3 2 , . . . . As an example, at the strict leading order in 1/m, an unpolarized charm quark is exactly three times as likely to fragment into an excited spin-1 vector meson (D * ) than into the corresponding pseudoscalar state (D). 
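To see the factor of three explicitly, here is the Clebsch-Gordan counting spelled out (a worked example added for illustration). Combining $s_Q = \tfrac12$ with $s_\ell = \tfrac12$ gives the ground-state multiplet $\{D, D^*\}$:

$$|0,0\rangle = \tfrac{1}{\sqrt{2}}\big(|{+}\,{-}\rangle - |{-}\,{+}\rangle\big)\,, \qquad |1,0\rangle = \tfrac{1}{\sqrt{2}}\big(|{+}\,{-}\rangle + |{-}\,{+}\rangle\big)\,, \qquad |1,{\pm}1\rangle = |{\pm}\,{\pm}\rangle\,,$$

with entries $|h_Q, h_\ell\rangle$. For an unpolarized quark each $h_Q = \pm\tfrac12$ occurs with weight $\tfrac12$, and at leading power the light degrees of freedom populate $h_\ell = \pm\tfrac12$ independently of $h_Q$. Each of the four product states overlaps with the pseudoscalar with probability $|\langle 0,0|h_Q, h_\ell\rangle|^2 \in \{0, \tfrac12\}$, averaging to $\tfrac14$, while the remaining $\tfrac34$ goes to the vector, hence $P(D^*)/P(D) = 3$.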
The physical reason for this is that the light dynamics do not see the heavy-quark spin at leading power, and thus the same nonperturbative matrix elements with given s ℓ , h ℓ appear in several cases. This analysis is simplified by the fact that for unpolarized or linearly polarized heavy quarks, light amplitudes for different helicities cannot interfere. One key goal of the next section will be to work out the consequences of heavy-quark spin symmetry for transverse momentumdependent fragmentation functions, where transverse quark polarization will let us access this interference for the first time. Tree-level matching and discrete symmetries Using the tree-level matching onto bHQET in eq. (2.8) at the leading order in Λ QCD /m ∼ k T /m, the correlator in eq. (2.2) evaluates to where F H is a bHQET bispinor defined by Note that we have evaluated the matrix element at b + = 0, which is justified at leading order in 1/m. This is easiest to see by using momentum conservation on the first matrix element to translate the fields back to b + = 0, where k − is the total residual minus momentum of the final state, and then expanding away k − ∼ v − Λ QCD compared to the leading O(P − H ) term P − H /z H − mv − in the Fourier phase in eq. (2.14). 6 Using The leftover boost factor 1/(n · v) = 1/v − ∼ m db + is a consequence of the nonrelativistic normalization of the external state, and ensures that projections of the above result onto good components are indeed invariant under the rescaling transformationn → e αnµ discussed below eq. (2.4). To analyze the spin structure of F H (b ⊥ ), it is convenient to define the auxiliary vector which defines a unit z axis oriented along the spatial component ofn in the rest frame. Written out explicitly, F H (v, z, b ⊥ ) depends on the three four vectors v µ , i.e., the label direction in bHQET, corresponding to P µ H in the full theory, the spacelike vector z µ parameterizing the Wilson line directionn µ relative to v µ , and the spatial separation b µ ⊥ of the fields (with direction . As these three are orthogonal, they define a unique fourth unit direction y ρ = ϵ µνρσ v µ x ν z σ with y 2 = −1. There are three applicable symmetries constraining the form of F H . First, the correlator populates only the particle-particle components of the bispinor, Second, from its definition, the correlator transforms under hermitian conjugation as =h v simplifies for the particle components. On the second identity we translated both matrix elements by −b ⊥ , exploiting that the resulting phase cancels between the states. Third, since parity is conserved in QCD (and thus in bHQET), the correlator satisfies where P denotes the unitary operator implementing parity in the rest frame of the heavy meson. Note that time reversal is broken by the presence of the out states, and thus is not a good symmetry of fragmentation functions [74]. in terms of two real-valued scalar coefficient functions χ 1,H (b T ) and χ ⊥ 1,H (b T ) that can only depend on v 2 = −z 2 = 1 and b 2 ⊥ = b 2 T . By performing the traces in eq. (2.5), we can identify these two functions with the unpolarized and Collins TMD FF, respectively, These results hold at the leading order in the heavy-quark expansion 7 and up to perturbative corrections at the scale µ ∼ m (which we reinstate in section 2.3.3), but capture the exact nonperturbative dependence on k T ∼ Λ QCD within the χ 1,H and χ ⊥ 1,H . For reference, we can also take suitable traces of eq. 
(2.22) to obtain explicit definitions of χ 1,H and χ ⊥ 1,H in terms of bHQET matrix elements, which we dub heavy-quark TMD fragmentation factors. Note that finding a nonzero result for the Collins fragmentation function during this step crucially relies on the presence of the Wilson line, which distinguished a nontrivial reference direction z µ in the rest frame. From our analysis so far, we can conclude that both leading-power TMD fragmentation functions for unpolarized hadrons are allowed by the discrete symmetries of QCD in the heavy-quark limit at leading order in 1/m. Written as in eq. (2.24), they are also manifestly independent of the flavor and mass of the heavy quark. However, the physical interpretation of χ 1,H (b T ) and χ ⊥ 1,H (b T ) is still fairly unclear at this point, and in fact we have not yet made use of heavy-quark spin symmetry. We address these questions in the next section, where we will derive an intuitive physical picture of the TMD fragmentation factors in terms of the individual constituents of the heavy hadron, and will derive powerful relations within spin symmetry multiplets. Heavy-quark spin symmetry We now return to the full correlator F H (b ⊥ ) defined in eq. (2.15) and analyze its heavyquark spin symmetry properties, which are particularly transparent when working with sterile fields. To do so, we first decompose the out states as in eq. (2.13), For definiteness, we take the magnetization axis defining the helicity eigenvalues to be the spatial component z µ of the Wilson line direction in eq. (2.18), which points back to the hard collision. Crucially, different helicities of the quark and the light degrees of freedom in the amplitude (h Q , h ℓ ) and the complex conjugate amplitude (h ′ Q , h ′ ℓ ) can interfere with each other, as only their sum is constrained to be equal to a common h H by helicity conservation. Note that we cannot use a completeness relation for the Clebsch-Gordan coefficients because they are only summed over the hadron helicity h H , but are not summed over the total hadron angular momentum s H because we assume that the experimental measurement can e.g. tell apart D and D * mesons. Acting on these out states with sterile heavy-quark fields as in eq. (2.12) yields On the last line we defined a generalized spin-density matrix ρ H,h Q h ′ Q (b ⊥ ) for the heavyquark helicities. Its entries are determined by the soft dynamics, the experimentally reconstructed values of s H , s ℓ (i.e., H), and angular momentum conservation. To proceed, it is useful to express the outer product of spinors as where Σ µ Q is the heavy-quark spin operator acting on the nonrelativistic spin Hilbert space Evaluating the traces in eq. (2.24) then yields where y µ with y 2 = −1 is orthogonal to both b µ ⊥ and the Wilson line direction z µ , see the discussion below eq. (2.18). As expected from its relation to the unpolarized TMD FF, the Fourier transform of χ 1,H is simply the total conditional probability to produce H at rest given an initial quark momentum k T transverse to the direction of the static color source provided by the Wilson line. The bHQET analogue χ ⊥ 1,H of the Collins function on the other hand can be interpreted as a conditional density of quark spin Σ Q with respect to a magnetization axis defined by the static color source and the final-state transverse momentum. These physical interpretations roughly correspond to those of the associated relativistic fragmentation functions, which are often written in a form similar to eq. (2.28). 
Crucially, however, the meaning of the spin space on which the density matrix is defined is different in the heavy-quark case: For light quarks, whose spin after hadronization is an ill-defined concept, it only refers to the initial spin state in which the light quark is prepared. Heavy quarks at leading power in 1/m, by contrast, retain their initial spin state throughout the fragmentation process, and thus their spin density matrix together with the hadron spin measurement instead probes the angular momentum distribution of the final-state light constituents of the heavy hadron. 8 To make this fully explicit, let us introduce a shorthand for the spin-density matrix ρ ℓ of the light degrees of freedom ℓ ≡ {s ℓ , f ℓ }, The fact that the same light spin density matrix ρ ℓ appears for all hadrons within the same spin symmetry multiplet (same s ℓ and f ℓ , but different s H ) leads to relations between their TMD FFs in the heavy-quark limit. While it is interesting to ask how many independent nonperturbative functions the constraints on ρ ℓ from parity and hermiticity in eqs. (2.20) and (2.21) leave in principle, and how many of them are observable when reconstructing the hadron spin in addition, we now push on towards the combinations that are relevant for an unpolarized hadron and that contribute to the two fragmentation factors at hand. Unpolarized TMD FF: We begin with the unpolarized quark case and perform the trace in eq. (2.28), which sets h Q = h ′ Q and thus h ℓ = h ′ ℓ , To illustrate this, consider the pseudoscalar case, where and we have written helicities as ± ≡ ± 1 2 for short. We see that the unpolarized TMD FF encodes information about the magnitude of the amplitude for producing a given light 8 A related difference is that relativistic fragmentation functions have to be interpreted as number densities rather than probabilities due to the semi-inclusive measurement acting also on hadronic states at higher multiplicity. By contrast, the unpolarized bHQET TMD fragmentation factor upon Fourier transform, and up to renormalization effects [75], has an interpretation as a probability density because additional pair production of heavy quarks is power suppressed by kT ≪ m. helicity state. Summing over all hadrons H within the same spin symmetry multiplet M ℓ (i.e., all hadrons with identical light spin and flavor state ℓ), we further define where we used the completeness relation of the Clebsch-Gordan coefficients. Note that in the same way, this sum reduces the quark spin density matrix and the correlator to By evaluating the partial sums in eq. (2.30), it is easy to see that in terms of this baseline, the individual unpolarized TMD fragmentation factors are given by where for the purpose of this equation we used H as a shorthand for D (B) when Q = c (b), and similarly for the excited and higher-spin states. These relations are textbook knowledge in the inclusive fragmentation case (b T = 0) [71]. Our analysis shows, for the first time, that they hold without modification and point by point in the distribution when resolving the hadron transverse momentum. We anticipate that a transverse momentumdependent version of the Falk-Peskin parameter [38] appears when resolving individual (linear) hadron polarizations h H of hadrons with s ℓ = 3/2. Generalizations of the above to higher multiplets are trivial. 
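To make the multiplet relations concrete in the simplest case, write $\chi_{1,M_\ell}(b_T)$ for the multiplet-summed baseline defined above (the symbol is ours). Assuming the Clebsch-Gordan counting from the worked example above carries over pointwise in $b_T$, as the text states, the $s_\ell = \tfrac12$ instance reads

$$\chi_{1,D}(b_T) = \tfrac14\, \chi_{1,M_\ell}(b_T)\,, \qquad \chi_{1,D^*}(b_T) = \tfrac34\, \chi_{1,M_\ell}(b_T)\,,$$

and identically with $D \to B$ for $Q = b$. In particular, the ratio $\chi_{1,D^*}(b_T)/\chi_{1,D}(b_T) = 3$ holds for every value of $b_T$, not only for the integrated fragmentation probabilities.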
Given that the sum over fragmentation factors within a multiplet encodes the full information on each individual one, it is interesting to ask what form the total fragmentation factor takes. Performing the complete sum over states, we have We see that the total unpolarized heavy-quark TMD FF reduces to a vacuum matrix element of two staple-shaped Wilson-line configurations along the lightlike and timelike direction, respectively. (Recall that we have suppressed transverse gauge links at infinity; the Wilson lines in the interpolating fields for the out states cancel.) This is reminiscent of the heavy quark-antiquark form factor proposed in ref. [20] to extract the TMD soft function on the lattice, but in contrast to that proposal directly relates to a physical observable, i.e., the total TMD cross section for producing heavy hadrons in e + e − collisions. We contend that this makes eq. (2.35) the theoretically simplest TMD observable possible in QCD, since it is entirely given in terms of vacuum matrix elements of Wilson loops. Collins TMD FF: A naive expectation from heavy-quark spin symmetry might be that the Collins FF should be suppressed by 1/m because it encodes a correlation between the initial quark transverse polarization vector and the transverse momentum of hadronic final-state radiation. In the case of light quarks, this correlation arises directly from the nonperturbative dynamics of the QCD Lagrangian, as illustrated in figure 3 (a), but in the heavy-quark case it naively seems to require a suppressed magnetic interaction with the heavy-quark spin. We will now see that this is not the case. As illustrated in figure 3 (b), the angle between the final-state heavy-quark and light transverse polarization vectors (i.e, the relative phase between their helicity states) determines which hadron in the spin symmetry multiplet is produced, even without a dynamical heavy-quark spin interaction taking place. Reconstructing this information experimentally thus induces a correlation between the heavy-quark and the light spin state. Crucially, spin symmetry ensures that the final-state heavy-quark spin state is identical to the one it was prepared in. The light spin state in turn is in general correlated with the transverse momentum k ⊥ of hadronic final-state radiation, since they both arise from the same nonperturbative dynamics of the light degrees of freedom, leading to a Collins effect at the leading order in 1/m. To illustrate this, it is again instructive to consider the case of the pseudoscalar meson, As expected, the Collins FF in the heavy-quark limit contains information about the strength of the interference, and hence the relative nonperturbative phases, of amplitudes for different light helicities. As a corollary, we conclude that the Collins FF must vanish at leading order in 1/m when summing over all the hadrons in the spin symmetry multiplet, This is immediate to see from the diagonal form of the summed quark spin density matrix or the full correlator in eq. (2.33). Concretely, this means that the Collins FF vanishes altogether for s ℓ = 0 baryons, χ 1,Λ Q = 0. For the next few multiplets and using the same notation as in eq. (2.34), the explicit relations are Discussion: The spin symmetry relations in eqs. (2.34) and (2.38) are the main results of this section. They hold for all values of b T , which means that they also hold point by point in k T upon Fourier transform. Furthermore, they are unaffected by renormalization, as we discuss in the next section. 
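For concreteness, the $s_\ell = \tfrac12$ instance of the vanishing multiplet sum can be sketched as (an illustration consistent with the statement of eq. (2.37); weights for higher multiplets follow from the same Clebsch-Gordan algebra):

$$\chi^{\perp}_{1,D}(b_T) + \chi^{\perp}_{1,D^*}(b_T) = 0 \qquad \Longrightarrow \qquad \chi^{\perp}_{1,D^*}(b_T) = -\,\chi^{\perp}_{1,D}(b_T)\,,$$

so in this limit the $D$ and $D^*$ Collins functions are predicted to be equal in magnitude and opposite in sign, point by point in $b_T$ (or $k_T$).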
This makes them substantially stronger than the known sum rules for relativistic TMD fragmentation functions. For the light-quark Collins function in particular, the Schäfer-Teryaev sum rule [76] has only been rigorously proven [77] in the bare case, and requires a sum over all possible hadrons, an integral over z h , and a weighted integral over k T . This is in contrast to the novel sum rule in eq. (2.37) that we have derived for the heavy-quark limit, which only requires summing the Collins function over a subset of hadrons and holds at any value of k T and z H . (We will explicitly extend these spin symmetry relations to k T ∼ m in section 2.4.) It thus implies the Schäfer-Teryaev sum rule for heavy quarks as a corollary. We caution that as in the inclusive case, the validity of the spin symmetry relations rests on the assumption that spin symmetry violation is negligible during the entire fragmentation process. This assumptions breaks down e.g. for those H, H * produced from the decays of H 1 , H * 2 whose spin symmetry-violating mass splitting is comparable to their widths [38]. All-order matching and renormalization Using known results [78,79] for the perturbative matching of SCET onto bHQET, it is in fact straightforward to generalize eq. (2.23) to all orders in perturbation theory, Here the matching coefficient C m = 1+O(α s ) arises from separately matching the collinear ("unsubtracted") and soft contributions to the TMD FFs onto bHQET and QCD with n ℓ light flavors, respectively. Importantly, the matching is diagonal in spin and forces z H = 1 because real radiation is parametrically forbidden at the scale µ ∼ m due to k T ≪ m, and thus we can separately match the collinear fields in the two matrix elements in eq. (2.2) onto bHQET. Starting at two loops, the matching coefficient features rapidity logarithms of the Collins-Soper scale ζ over the mass as a consequence of the large boost separating the heavy hadron rest frame and the frame where the soft radiation is isotropic. The appearance of the Collins-Soper scale can most transparently be understood by organizing the matching of the collinear sector onto bHQET in terms of gauge-invariant building blocks W † ψ Q with definite large lightcone momentum ω = √ ζ, as commonly done in SCET. The soft matching is nontrivial because starting at O(α 2 s ), vacuum polarization diagrams involving the heavy quark contribute to the expectation values of soft Wilson line operators in the n ℓ +1 theory. The two-loop result for C m was obtained in ref. [78], and our notation relates to theirs as Here the dependence on the rapidity scale ν cancels between the individual matching coefficients on the right-hand side, leaving behind the dependence on ζ/m 2 . The renormalization properties of χ 1,H and χ ⊥ 1,H follow from eq. (2.39) by consistency with the bHQET matching. Making the renormalization explicit requires introducing a rapidity regulator into the bHQET matrix element definitions, e.g. by modifying the lightlike Wilson lines in a standard fashion, and canceling rapidity divergences using the known TMD soft factor in a theory with n ℓ light quarks, which leaves behind an anomalous dependence on the boost factor √ ζ/m =n · v governed by the Collins-Soper kernel. (In the following we find it convenient to use the shorthand ρ =n · v for the third argument of the TMD fragmentation factors, which physically is given by the exponential of the hadron's rapidity in the frame of the hard collision.) 
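Putting the pieces of this subsection together, the all-order matching relation referred to as eq. (2.39) can be sketched schematically as follows (a hedged reconstruction from the surrounding text, which states that the matching is diagonal in spin, forces $z_H = 1$, and is governed by a single coefficient $C_m$; the precise arguments of $C_m$ and convention factors may differ in the original):

$$D_{1\,H/Q}(z_H, b_T, \mu, \zeta) \;=\; C_m\big(m, \mu, \zeta/m^2\big)\; \chi_{1,H}(b_T, \rho)\; \delta(1 - z_H)\, \Big[1 + \mathcal{O}\Big(\tfrac{\Lambda_{\rm QCD}}{m}, \tfrac{k_T}{m}\Big)\Big]\,,$$

with the same coefficient multiplying $\chi^{\perp}_{1,H}$ in the Collins case, and $\rho = \bar n \cdot v = \sqrt{\zeta}/m$ the boost factor introduced above. Making the right-hand side well defined is where the rapidity regulator and soft subtractions enter.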
This proceeds in close analogy to the standard relativistic case, so we leave an explicit check to a future perturbative calculation. Note that the Wilson lines in eq. (2.15) are in fact still lightlike up to possible off-lightcone regulators (and thus feature standard rapidity divergences) despite the presence of the mass because the opposite collinear sector is boosted close to the lightcone from the point of view of the bHQET rest frame. Conversely, the bHQET dynamics inside either collinear sector are also boosted from the point of view of the central soft modes. Since TMD renormalization is multiplicative in b T and independent of the hadronic state, it acts in the same way on all terms in the spin symmetry relations in eqs. (2.34) and (2.38), which implies that they also hold for the renormalized objects point by point in b T (or k T ). Relation to bHQET fragmentation probabilities for Λ QCD ≪ k T An important property of the TMD fragmentation factors we defined above is their limiting behavior as k T ≫ Λ QCD or, equivalently, b T → 0. In this limit, the unpolarized TMD fragmentation factor χ 1,H is related to the total probability χ H for the quark to fragment into H, which has previously been analyzed in HQET [37][38][39], where the matrix-element definition of χ H [40] is equal to χ 1,H (b T = 0) at the bare level, Because χ H is not renormalized [70], we generally expect a perturbative Wilson coefficient T at leading order in the strong coupling, 9 and is analogous to the behavior of the usual TMD PDF as b T → 0 where it approaches its total momentum-space integral given by the collinear PDF, up to renormalization effects and radiative corrections [75]. Here we also assumed without detailed proof that corrections to this relation are quadratic in b T based on the azimuthal symmetry of χ 1,H (b T ). From H χ H = 1, it follows that the total TMD fragmentation factor χ 1 (b T ) defined in eq. (2.35) is purely perturbative in this regime, In contrast to eq. (2.41), the Collins TMD fragmentation factor χ ⊥ 1,H must vanish at least linearly as b T → 0 because there is no leading bHQET matrix element it could match onto in this limit. This is easiest to see by repeating the symmetry analysis of the bHQET correlator F H in section 2.3.1 at b T = 0, which only admits F H (v, z, b ⊥ = 0) = χ H (1 + / v)/2. As we will see by comparing to the limit Λ QCD ≪ k T ≲ m in section 2.5, the expansion indeed starts at the linear order, and the matrix-element definition of the relevant nonperturbative parameter at O(Λ QCD b T ), as well as its tree-level Wilson coefficient, can all be inferred from consistency. Matching TMD FFs onto bHQET for Λ QCD ≪ m ≲ k T We next consider case (b) in figure 2. In this regime, the transverse and longitudinal momentum distributions are determined by dynamics at the scale µ ∼ m ∼ k T and are fully perturbative. The nonperturbative dynamics in this case are encoded in bHQET matrix elements that involve additional gluon fields or derivatives and that can be nonlocal along the lightcone, but in contrast to the previous section are local in the transverse direction. Similar to a standard twist expansion, these bHQET matrix elements are categorized by their mass dimension, which determines their scaling as Λ QCD ≪ m, k T , i.e., their mass dimension ∼ Λ n QCD is compensated by powers of b T or 1/m in the Wilson coefficient. This story plays out differently for the unpolarized vs. 
the Collins TMD FF, which scale as $O(1)$ and $O(\Lambda_{\rm QCD}\, b_T)$, respectively, so we will go through the two cases separately in the following. We note that the expansion of TMD FFs in terms of bHQET operators differs from a standard twist expansion insofar as the HQET field $h_v$ encoding the interactions with the heavy valence quark remains present in all low-energy matrix elements.

Unpolarized TMD FF

For the unpolarized TMD FF, the unique bHQET matrix element that can arise in the infrared at the leading order in 1/m is the total fragmentation probability $\chi_H$ as defined in eq. (2.42), which follows from symmetry arguments similar to those below eq. (2.43). Importantly, we have again made use of the assumption in eq. (2.6) that we are sufficiently far away from (or have fully integrated over) the endpoint regime $z_H \to 1$, as otherwise there would be a nontrivial bHQET shape function on the right-hand side [39,40]. The unique matching coefficient of $\chi_H$, which we dub the partonic heavy-quark TMD FF $d_{1\,Q/Q}(z, b_T, \mu, \zeta)$, is a new object that, to our knowledge, appears in our analysis for the first time. 10 It is independent of the precise hadronic final state, carries the exact dependence on $b_T m \sim 1$, and can be calculated perturbatively by evaluating eqs. (2.2) and (2.5) for partonic final states including at least one heavy quark. Its rapidity renormalization is governed by the Collins-Soper kernel of a theory with $n_\ell$ massless and one massive quark degree of freedom [47]. We leave a dedicated NLO calculation of $d_{1\,Q/Q}(z, b_T, \mu, \zeta)$ to future work. Since the dependence on the hadronic final state is purely encoded in $\chi_H$, which satisfies the same spin symmetry relations as in eq. (2.34), we conclude that the unpolarized heavy-quark TMD FF satisfies the same relations for all values of $b_T$ (or $k_T$), including $1/b_T \gtrsim m$, up to corrections of $O(\Lambda_{\rm QCD}/m)$. Eq. (2.44) continues to be valid for $k_T \gg m$, but features large perturbative logarithms of $b_T m \ll 1$ in this limit. Their resummation is enabled by further factorizing the physics at those two scales. To do so, we can first match the heavy-quark TMD FF onto twist-2 heavy-quark collinear FFs at the scale $\mu \sim m$ [46], see eq. (2.47), where the sum runs over $i = q, \bar q, g$. This matching takes the same form as the standard matching of light-quark TMD FFs onto twist-2 FFs at $\mu \sim \Lambda_{\rm QCD}$, except that the highest IR scale here is given by m.

9 It is well known that the formal OPE of relativistic fragmentation functions is ambiguous due to an unconstrained choice of boundary condition at lightcone infinity [80,81]. While this fundamental issue remains present here, it is interesting to ask whether the case of bHQET TMD fragmentation factors, which are Wilson loops, can provide additional insight into this issue.

10 Curiously, the perturbative transverse dynamics of heavy-quark fragmentation $b \to B_c$ have previously been evaluated in refs. [82,83]. The complete tree-level result given in the first reference, which starts at $O(\alpha_s^2)$, can be considered a very specific subset of the NNLO corrections to the TMD FF we define here if we sum over final states. If we tag on the charm instead, their result corresponds to a different perturbative TMD FF $d_{1\,bc/b} \sim \alpha_s^2$ whose renormalization, by our analysis, is governed by standard (massive) TMD evolution.
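Schematically, the two matching steps just described can be sketched as follows (a hedged reconstruction of the structure of eqs. (2.44) and (2.47) from the surrounding text; the collinear FF symbol $d_{H/i}$, the normalization, and the power-correction labels are ours):

$$D_{1\,H/Q}(z_H, b_T, \mu, \zeta) \;=\; d_{1\,Q/Q}(z_H, b_T, \mu, \zeta)\; \chi_H\, \Big[1 + \mathcal{O}\Big(\tfrac{\Lambda_{\rm QCD}}{m}\Big)\Big]\,,$$

$$D_{1\,H/Q}(z_H, b_T, \mu, \zeta) \;=\; \sum_{i = q, \bar q, g} \int_{z_H}^{1} \frac{dz}{z}\; J_{i/Q}(z, b_T, \mu, \zeta)\; d_{H/i}\Big(\frac{z_H}{z}, \mu\Big) \;+\; \mathcal{O}(b_T m)\,,$$

where the first line separates the hadronic normalization $\chi_H$ from the perturbative $b_T m$ dependence in regime (b), and the second refactorizes the $k_T \gg m$ limit by matching onto collinear FFs at the scale $\mu \sim m$.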
The Wilson coefficients J i/q (z, b T , µ, ζ) encode the perturbative process q → i in a theory with n ℓ + 1 massless flavors at the scale µ ∼ k T , with the quark retaining a fraction z of the parent's lightcone momentum, and are known to N 3 LO [84,85]. In a second step, we perform the well-known [37][38][39][40] matching of the collinear FF of a massive quark onto bHQET to separate Λ QCD ≪ m, This refactorization condition for d 1 Q/Q can serve as a cross check on future perturbative calculations, and in addition enables resumming logarithms of k T /m ≫ 1. Collins TMD FF To identify the low-energy bHQET matrix element that the Collins TMD FF matches onto in the limit Λ QCD ≪ m ∼ k T , we use a two-step matching procedure formally valid for the hierarchy Λ QCD ≪ m ≪ k T . (We will later show that the result is correct for either hierarchy.) As for the unpolarized TMD FF above, this lets us make use of well-known results for the matching of light-quark TMD FFs onto collinear FFs, which we can then further match onto bHQET. We start from the diagrammatic small-b T expansion of the bare Collins TMD FF for light quarks, which is valid for Λ QCD ≪ k T and given by [74,88,89] whereĤ h/q is a twist-3 collinear fragmentation matrix element at the scale µ ∼ Λ QCD , 51) 11 In the literature, this relation is more commonly given as a tree-level equality betweenĤ h/q and a weighted kT integral over the bare momentum-space Collins FF. Using eq. (2.65) and integrating by parts, it is easy to see that this reduces to the derivative of the bT -space Collins FF at bT = 0. The O(αs) corrections to eq. (2.50) were evaluated at finite kT > 0 in ref. [89] and involve twist-3 matrix elements that depend on two independent momentum fractions and reduce toĤ h/q in certain limits by use of the equation of motion. We anticipate that matching these more general matrix elements onto bHQET will reduce the number of independent (residual) momenta to one because the heavy-quark momentum is fixed. where σ µ− = i 2 [γ µ , / n] and we have defined as a shorthand for the insertion of a gluon field strength tensor G µν anywhere along the lightcone, with W (x, y) a straight Wilson-line segment connecting x and y. We now consider the heavy-quark Collins FF and at first assume the hierarchy Λ QCD ≪ m ≪ k T . For the matching at the scale µ ∼ k T , the mass is an infrared scale, and thus the twist expansion in eq. (2.50) immediately carries over. The collinear matrix element H H/Q takes the same form as eq. (2.51), but is now defined at the scale µ ∼ m. To implement the separation of scales Λ QCD ≪ m, we matchĤ H/Q onto bHQET. At tree level, this amounts to a replacement of the quark fields as in eq. (2.8), and after expanding the momentum-conserving phase results in where χ H,G ∼ Λ QCD is a novel subleading bHQET matrix element defined by Similar to the total fragmentation probability χ H defined in eq. (2.42), χ H,G no longer depends on b ⊥ , but is simply a constant that depends on the identified hadron H. Note that a nonzero value of χ H,G is compatible with all the symmetries of bHQET: Its defining spin correlator X(v, z) (dropping the spin trace) is hermitian by construction, satisfies P + X = XP + = X, and under parity transforms as PX(v, z)P = X(v, −z). Repeating the analysis in section 2.3.1, it is therefore proportional to 1 + / v, which has nonzero trace. In the last step, we combine eqs. 
(2.50) and (2.53) to arrive at our final result for the tree-level matching of the heavy-quark Collins TMD FF onto bHQET, eq. (2.55). Because this derivation assumed Λ QCD ≪ m ≪ k T , eq. (2.55) a priori is only valid up to power corrections in mb T . However, since we found a nonzero result at our tree-level working order and power corrections in mb T can only arise from real radiation in the calculation of the Wilson coefficient, eq. (2.55) as written also holds when integrating out both scales simultaneously. We note that additional low-energy matrix elements will in general be generated when performing the matching at higher orders in α s , but leave a dedicated construction of the basis of bHQET operators at this order in Λ QCD to future work. We point out that an observation of the heavy-quark Collins function in this regime would provide interesting insight into novel gluon correlations in the heavy-quark fragmentation process that are encoded in χ H,G . More specifically, χ H,G encodes a correlation between the gluon field polarization and the transverse polarization of the light constituents of the heavy hadron in the final state, which as in section 2.3.2 is indirectly resolved by reconstructing the total hadron spin, e.g. by distinguishing D and D * mesons. Conversely, χ H,G must vanish when summing over all hadrons in the spin symmetry multiplet M ℓ , eq. (2.56). This result is straightforward to prove along the lines of section 2.3.2 by decoupling the heavy quark fields in eq. (2.54) and exploiting the completeness relation of the Clebsch-Gordan coefficients, which leaves a trace of the form tr σ βα (1 + / v)/2 = 0. Combining these results at large k T ∼ m with those in eq. (2.38), we conclude that the Collins TMD FF satisfies the relations of eq. (2.57) for all values of b T (or k T ), which we have proven here up to corrections of O(Λ QCD /m) and up to radiative corrections at the scale µ ∼ k T ∼ m for large k T . We conjecture that the additional bHQET matrix elements generated by the matching at higher orders in α s will involve the same Dirac structure as eq. (2.54), i.e., an additional insertion of γ µ ⊥ , and thus will also satisfy eq. (2.56), but leave a detailed all-order analysis in this regime to future work. (We recall that for k T ≪ m eq. (2.57) holds to all orders in the strong coupling, see section 2.3.2.)

Consistency between regimes for Λ QCD ≪ k T ≪ m

Our results in the previous two sections share a common domain of validity when the transverse dynamics are already perturbative, Λ QCD ≪ k T , but still subject to heavy-quark symmetry, k T ≪ m. In this section we analyze the consistency relations that arise from this overlap and relate the perturbative bHQET fragmentation factors to the partonic heavy-quark TMD FFs. We start with the unpolarized case. Comparing eqs. (2.39) and (2.41), which are valid for Λ QCD ≲ k T , to eq. (2.44), valid for k T ≲ m, we find the all-order refactorization relation for the partonic heavy-quark TMD FF in the limit k T ≪ m given in eq. (2.58). Here we have canceled off the common nonperturbative factors of χ H . To interpret the z dependence, eq. (2.58) says that counting 1 − z ∼ 1, d 1 Q/Q must approach δ(1 − z) up to an overall factor at the distributional level for b T m → ∞, i.e., all Mellin moments of d 1 Q/Q must become equal in this limit. We expect that eq. (2.58) will provide a powerful consistency check of future perturbative calculations of d 1 Q/Q .
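The vanishing trace tr σ βα (1 + /v)/2 = 0 used in the sum-rule argument above is easy to verify with explicit matrices. The sketch below is our illustration, not part of the paper: the Dirac representation and the numerical value of v are arbitrary choices. It builds γ^µ, forms σ^{µν} = (i/2)[γ^µ, γ^ν] and /v, and confirms that the trace vanishes for all µ, ν:

```python
import numpy as np

# Dirac representation of the gamma matrices (a conventional choice; metric +---)
I2 = np.eye(2); Z2 = np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
gs = [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
gamma = [g0] + gs                        # gamma^0 .. gamma^3
eta = np.diag([1.0, -1.0, -1.0, -1.0])   # mostly-minus metric

v = np.array([1.3, 0.4, -0.2, 0.5])      # arbitrary timelike four-velocity-like vector
vslash = sum(eta[m, m] * v[m] * gamma[m] for m in range(4))  # v_mu gamma^mu

for mu in range(4):
    for nu in range(4):
        sigma = 0.5j * (gamma[mu] @ gamma[nu] - gamma[nu] @ gamma[mu])
        tr = np.trace(sigma @ (np.eye(4) + vslash) / 2)
        assert abs(tr) < 1e-12           # tr[sigma^{mu nu} (1 + vslash)/2] = 0
print("trace vanishes for all mu, nu, as used in the spin-sum argument")
```

The assertion holds because tr σ^{µν} = 0 and the trace of three gamma matrices vanishes, which is exactly the content of the argument in the text.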
Eq. (2.58) also enables the resummation of large perturbative logarithms of k T /m ≪ 1, complementing the factorized result in eq. (2.49) for the opposite limit. For the Collins TMD FF we compare eq. (2.39) to eq. (2.55) and use C m = 1 + O(α s ). Canceling off the z dependence, which is trivial at tree level, this yields eq. (2.59), which can be interpreted as the leading linear term in a small-b T expansion of χ ⊥ 1,H , as anticipated in section 2.3.4. As for the Collins function at k T ∼ m, we leave a dedicated higher-order matching calculation to future work, which will involve nontrivial Wilson coefficients integrated against at least one additional O(Λ QCD ) bHQET matrix element.

Model functions and numerical results

For our numerical results we will assume a simple Gaussian model for the unpolarized TMD fragmentation factor, eq. (2.60), where κ H ∼ Λ QCD has units of GeV. Eq. (2.60) is valid at initial scales µ 0 ∼ √ ζ 0 ∼ 1/b T of the TMD evolution and satisfies eq. (2.41) up to corrections in α s (µ 0 ). To be specific, we apply a µ * prescription [90,91] (also known as a "local" b * prescription) starting at O(b 4 T ) to ensure that µ 0 stays perturbative without polluting nonperturbative corrections at O(Λ 2 QCD b 2 T ) [75], see eq. (2.61), where b 0 = 2e −γ E ≈ 1.12292 and we take µ min = 1 GeV. We take ζ 0 to always be equal to its canonical value, ζ 0 = (b 0 /b T ) 2 . We then use leading-logarithmic (LL) perturbative TMD evolution U q (µ 0 , ζ 0 , µ, ζ) to evolve eq. (2.60) to the overall scales µ ∼ √ ζ ∼ Q, with Q the hard scattering energy. This order is sufficient for the exploratory phenomenology we have in mind, and in particular lets us use TMD evolution and β functions in QCD with n f = 5 massless flavors at all scales since the quark decoupling only induces next-to-leading logarithms of b T m. Specifically, we ignore the decoupling relations and NNLL power-like secondary quark mass corrections to the Collins-Soper kernel γ q ζ (b T , µ) that were determined in ref. [47]. We also ignore nonperturbative contributions to the Collins-Soper kernel, since they are orthogonal to the effects we are interested in here. Overall, this results in the expression for the evolved unpolarized heavy-quark TMD FF given in eq. (2.62), where for definiteness we considered the integral over z cut ≤ z H ≤ 1. To our working order, the right-hand side of eq. (2.62) is independent of z cut as long as 1 − z cut ∼ 1 in order to satisfy eq. (2.6), and also holds for any truncated z H moment of the TMD FF. Note that the single-parameter model in eq. (2.62) is also accurate at large k T ≳ m, cf. eq. (2.44), where it reduces to χ H and thus is correct up to radiative corrections. We assume a similar model for the Collins TMD fragmentation factor, but have to account for the suppression at small b T by modifying the Gaussian, see eq. (2.59); here we find it convenient to express the overall effect strength in terms of λ H⊥ = χ H,G /χ H ∼ Λ QCD , i.e., relative to the total fragmentation probability χ H . The parameter κ H⊥ ∼ Λ QCD controls the relative impact of higher power corrections and is in general distinct from κ H in eq. (2.60).
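As a concrete illustration of this model setup, the following sketch (ours, not the paper's code) combines an illustrative Gaussian for χ 1,H with one possible smooth µ* ("local b*") prescription whose deviation from the canonical µ 0 = b 0 /b T starts at O(b 4 T ) and which freezes out at µ min = 1 GeV at large b T . The paper's exact eq. (2.60) and eq. (2.61) are not reproduced in this extraction, so the functional forms below are assumptions:

```python
import numpy as np

b0 = 2.0 * np.exp(-np.euler_gamma)   # ≈ 1.12292, as in the text
mu_min = 1.0                          # GeV, as in the text

def mu_star(bT):
    """One possible mu* prescription (our guess at a concrete form):
    mu0 -> b0/bT at small bT with corrections starting at O(bT^4),
    and mu0 -> mu_min at large bT, keeping mu0 perturbative."""
    return mu_min * (1.0 + (b0 / (bT * mu_min))**4) ** 0.25

def chi1_gaussian(bT, chiH=1.0, kappaH=0.3):
    """Illustrative Gaussian model for the TMD fragmentation factor.
    kappaH ~ Lambda_QCD in GeV; normalized so chi1 -> chiH at bT -> 0,
    mimicking eq. (2.41). Not the paper's exact eq. (2.60)."""
    return chiH * np.exp(-(kappaH * bT) ** 2)

for bT in (0.1, 1.0, 5.0):            # GeV^-1
    print(f"bT={bT:4.1f}  mu0={mu_star(bT):6.3f} GeV  chi1={chi1_gaussian(bT):6.3f}")
```

At b T = 0.1 GeV⁻¹ this gives µ 0 ≈ b 0 /b T ≈ 11.2 GeV, while at b T = 5 GeV⁻¹ it saturates just above µ min , which is the qualitative behavior the text describes.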
Combining this model with NLL n f = 5 TMD evolution as above, we find the evolved heavy-quark Collins function in position space; taking appropriate Bessel integrals [67], we finally transition to momentum space. To evaluate the TMD evolution and the Bessel integrals, we use the numerical implementation of TMD anomalous dimensions, QCD renormalization-group solutions, and double-exponential oscillatory integration in SCETlib [92]. Our results for the z H -integrated heavy-quark TMD FFs are shown as a function of k T for different values of the model parameters in figure 4. We use α s (m Z ) = 0.118 as the input value for the strong coupling. We note that due to heavy quark flavor symmetry, the charm and bottom-quark TMD FFs are exactly equal at small k T ≪ m. In other words, they only depend on the universal Gaussian parameters κ H (for the unpolarized TMD FF), κ H⊥ (for the Collins TMD FF), and the Collins effect strength λ H⊥ . At large k T ∼ m, the TMD FFs remain independent of the heavy quark mass up to radiative corrections of O(α s ), which we ignore at our LL working order. These plots are thus identical for both flavors we consider. We point out that the Collins function can in general take any sign, as indicated by the yellow band scanning various values of the effect strength λ H⊥ . The effect of varying the size of higher-power corrections (κ H , κ H⊥ ) decreases as k T increases for both TMD FFs, as expected.

Calculational setup

In this section we consider the production of a heavy quark Q with pole mass m = m c , m b ≫ Λ QCD from light partons within a polarized nucleon N . The nucleon has momentum P N with P − N ≫ P + N = M 2 N /P − N in the rest frame of the hard scattering that the heavy quark participates in. Note that we again take the large component of the hadron (nucleon) momentum to be along the n µ direction to make this section self-contained, but the case of an n̄-collinear incoming hadron (as would be consistent with the n-collinear outgoing hadron we considered in section 2) follows from n µ ↔ n̄ µ . This time, we are interested in the transverse momentum k ⊥ of the heavy quark with respect to the nucleon beam axis, which is again Fourier conjugate to the transverse spacetime separation b ⊥ between quark fields. The bare TMD quark-quark correlator between forward nucleon states that describes this process is defined with b ≡ (0, b + , b ⊥ ) and W as given in eq. (2.4); here x is the lightcone momentum fraction carried by the heavy quark, and we have suppressed the rapidity regulator, the soft factor, and transverse gauge links at infinity for simplicity. For the explicit perturbative calculations in this section, it will also be useful to define the momentum-space version of the above correlator. The spin decomposition of eq. (3.3) in terms of scalar TMD PDFs is well known [66,93]: here M N is the nucleon mass, S L is the longitudinal nucleon polarization in the Trento frame [94], and in our convention Φ Q/N (x < 0, k ⊥ ) decomposes in the same way in terms of the antiquark TMD PDFs f 1Q̄/N (|x|, k T ), etc. We have suppressed terms ("bad components") that do not contribute to leading-power TMD factorization theorems. As we will see in the next section, the terms proportional to the transverse nucleon polarization S ⊥ vanish for heavy quarks to all orders in the strong coupling when matched onto the leading (twist-2) collinear PDFs. We will also find that the twist-2 matching for the Boer-Mulders function h ⊥ 1 vanishes at O(α s ).
The remaining TMD PDFs on the first line, for which we will find nonzero results at O(α s ), are the unpolarized TMD PDF f 1 , the helicity TMD PDF g 1L , and the so-called worm-gear L function h ⊥ 1L ; the latter will be of particular significance, and encodes the production of a transversely polarized quark from a longitudinally polarized nucleon. For reference, the explicit Hankel transforms relating scalar TMDs in b T and k T space are given in eq. (3.5). [Footnote 13: We continue to distinguish momentum and position-space functions by their argument, see footnote 5. For the meaning of the superscript (1), see also there.]

Matching onto twist-2 collinear PDFs

Heavy-quark TMD PDFs are different from their TMD FF counterparts because the heavy quark cannot be part of the initial-state nucleon wave function at the scale µ ∼ Λ QCD at leading power in Λ QCD /m, whereas in the fragmentation case the heavy quark is always part of the final-state heavy hadron until its eventual weak decay. [Footnote 14: Power corrections of this kind, which are known as "intrinsic charm" and have received substantial recent interest on the collinear PDF side [95,96], would be an interesting subject to explore in the TMD case in the future. Very recently, the TMD PDFs for charm quarks within Λc baryons, which are leading valence contributions and do not have to be produced from gluons, have been evaluated in a light-front Hamiltonian model in ref. [97]; while these are phenomenologically inaccessible, it would be interesting to analyze these valence dynamics in the heavy-quark limit as we did for TMD FFs in section 2.] This means that heavy quarks must be pair-produced in initial-state gluon splittings at the scale µ ∼ m instead. In particular, this means there is at least one perturbative emission with transverse momentum ≳ m setting the scale of k T ≳ m, while the region of k T ≪ m can only be populated by several emissions with small net recoil, which is a power-suppressed configuration. In field theory terms, this means that heavy-quark TMD PDFs can be computed by perturbatively matching them onto collinear twist-2 nucleon PDFs in a theory with n ℓ light flavors, which are the only nonperturbative piece of information in this case. The matching onto twist-2 collinear PDFs is well developed for light quark and gluon TMDs, with notable results including all unpolarized quark matching coefficients through O(α 3 s ) [98,99] and results for polarized TMDs through O(α 2 s ) [100,101], and many of the following steps are standard, see e.g. [3]. Likewise, the O(α s ) matching of the unpolarized heavy-quark TMD PDF onto gluon collinear PDFs has been given in refs. [46,47]. We nevertheless aim for a self-contained description, giving us the opportunity to point out the ways in which (polarized) heavy-quark TMD PDFs behave differently. The bare light-quark and gluon twist-2 collinear correlators are defined with b ≡ (0, b + , 0) in this case, where W (b, 0) denotes a straight Wilson line segment. The collinear correlators are conventionally decomposed [3] in terms of the unpolarized (helicity) quark and gluon PDFs f i/N (g i/N ) and the transversity quark PDF h q/N . The contribution ∝ S ⊥ to the gluon correlator (i.e., the transversity gluon PDF) vanishes identically for spin-0 and spin-1/2 hadrons in the initial state due to helicity conservation [102]. The matching relation between heavy-quark TMD PDFs and twist-2 collinear PDFs holds at the operator level, and constitutes the leading term in the OPE of the former.
Taking nucleon matrix elements of the bare operators, the relation for general spin indices reads as in eq. (3.8), where p − is the lightcone momentum carried by the light parton extracted from the collinear PDF and the sum runs over the n ℓ light quark flavors. In pure dimensional regularization, the bare matching coefficients are given by the partonic diagrams in eq. (3.9), where z = xP − N /p − is the fraction of p − injected into the hard scattering process and we have indicated the heavy quark lines in red. The gray-shaded circles denote the sum of all possible QCD diagrams with these external legs, including gluon attachments to the Wilson lines that are part of the operators indicated by ⊗. We have included the respective lowest-order diagram for illustration. As is standard, matching relations between individual scalar TMD and collinear PDFs follow by inserting eq. (3.7) into eq. (3.8) and tracing the resulting Dirac bispinors (. . .) ββ ′ against the relevant Dirac structures. Flavor conservation in QCD implies that a single fermion line has to connect the external light-quark states in eq. (3.9). It follows that contractions with the quark transversity PDF involve an odd number of Dirac matrices on the light-quark line and vanish to all orders, i.e., flavor conservation and chirality for light quark flavors imply that all terms ∝ S ⊥ vanish at twist-2 level in eq. (3.4). This is distinct from e.g. the light-quark transversity TMD PDF, which receives a tree-level contribution from the transversity collinear PDF of the same flavor. As in the case of light-quark TMD PDFs, Lorentz covariance further implies that only unpolarized (helicity) collinear PDFs can contribute to the unpolarized and Boer-Mulders (helicity and worm-gear L) TMD PDFs, matching the dependence on S L in the spin decomposition. These conclusions are not modified by the inclusion of the soft factor, the rapidity renormalization, and the UV renormalization of the TMD PDFs, all of which are orthogonal to the spin structure. They are likewise unaffected by the renormalization of the collinear PDFs, which acts autonomously on the unpolarized and longitudinally polarized sectors. Passing to renormalized objects, this altogether leaves us with the four nontrivial matching relations for heavy-quark TMD PDFs onto collinear PDFs given in eq. (3.10). Here the subscripts λ, λ ′ = ∅, ∥, ⊥ on C Q λ /j λ ′ (z, k T , µ, ζ) label the polarization of the heavy quark and the light parton j, the sum runs over gluons and the n ℓ flavors of light quarks and antiquarks, and we have included a factor of k T /M N on the left-hand side as needed to ensure that the matching coefficient is independent of the hadronic state. We have also changed integration variables from p − in eq. (3.10) to z, exploiting the fact that projections of the matching coefficients onto good components can only depend on z by reparameterization invariance. Note that in a crucial difference to the light-quark case, the heavy-quark worm-gear L TMD PDF, which involves an odd number of Dirac matrices on the heavy-quark line in eq. (3.9), is allowed at twist-2 level because the quark mass breaks chirality. The same is true for the Boer-Mulders function. In both cases, the original argument of ref. [103] for why the twist-2 matching for these functions vanishes to all orders in the light-quark case critically relied on chirality.
[Footnote 15: An earlier version of this manuscript incorrectly stated that the twist-2 matching for the heavy-quark Boer-Mulders function should vanish to all orders based on its transformation behavior under time reversal, which however only constrains the leading O(α s ) diagram we consider in the next section. We thank Markus Diehl for pointing this out to us.] Conversely, the respective matching coefficients must vanish linearly as m → 0 to afford the helicity flip. Lastly, note that to all orders it is only the gluon PDF f g (x) and the quark singlet PDF Σ i=q,q̄ f i (x) that contribute to the sum f 1 Q/N + f 1 Q̄/N due to the invariance of eq. (3.9) under the n ℓ light flavor symmetry, and similarly for the two polarized cases. The difference f 1 Q/N − f 1 Q̄/N of heavy quark and antiquark TMD PDFs receives a nonzero contribution starting at O(α 3 s ) due to the relative orientation of the color flow along the fermion lines in eq. (3.9), as in the light-quark case [84,85,98]. Inverting the Hankel transforms in eq. (3.5), we find the b T -space matching relations of eq. (3.12), where the matching coefficients are given by eq. (3.13) (n = 1 for λ = ⊥ and n = 0 otherwise). For the dimensionless b T -space matching coefficients, eq. (3.11) simply reads as in eq. (3.14).

One-loop evaluation of matching coefficients

At O(α s ), only the gluon diagram in eq. (3.9) is nonzero. Using standard QCD Feynman rules, we find the leading-order result in eq. (3.15), where p = (p − , 0, 0) is the momentum of the external gluon and ℓ is defined as indicated (in the direction of fermion flow). The ℓ + integral is straightforward to evaluate by contours, which amounts to setting an emitted antiquark on shell, ℓ 2 = m 2 . Note that the diagram is finite in four dimensions and without a rapidity regulator because the quark mass cuts off infrared singularities. This is expected, as the UV and rapidity renormalization only become nontrivial at the next order. Dotting eq. (3.15) into the gluon collinear PDF correlator in eq. (3.7) and projecting onto quark spin structures, we find individual momentum-space matching coefficients with leading-order coefficient functions given in eq. (3.17). As a nontrivial check, we have confirmed that using massive SCET Feynman rules [104] results in the same expressions after performing the spin traces and integrating over the loop momentum. Note that the projection of the O(α s ) twist-2 matching diagram onto the Boer-Mulders function remains zero even for finite quark masses. This is expected because the Boer-Mulders function is odd under time reversal [105], i.e., it changes sign depending on whether the Wilson lines in the operator point to the future (SIDIS) or the past (Drell-Yan). The diagram in eq. (3.15) does not yet feature gluon attachments to the Wilson lines that could resolve their direction, and thus its projection onto the Boer-Mulders function has to vanish. Starting at O(α 2 s ), the matching coefficient can in general receive nonzero contributions from the absorptive part of real-virtual diagrams because chirality is broken by the quark mass, and it would be interesting to investigate these contributions further. Evaluating the inverse Hankel transforms in eq. (3.13), we find the position-space matching coefficients, which at this order only depend on the dimensionless combination b T m and are given in eq. (3.19), where K 0 and K 1 are modified Bessel functions of the second kind. These are the main analytic results of this section.
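The appearance of K 0 and K 1 in b T space can be traced to Hankel transforms of the massive k T -space structures produced by the diagram: a standard identity is ∫ 0 ∞ dk k J 0 (b k)/(k² + m²) = K 0 (b m), and b-derivatives of it generate K 1 . The paper's actual coefficient functions in eq. (3.19) did not survive this extraction, so the sketch below (ours; overall 2π conventions are left unfixed) only checks the underlying identity numerically:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, k0, k1

def hankel0(f_kT, b, kmax=400.0):
    """Int_0^inf dk k J0(b k) f(k): the J0 Hankel transform taking
    azimuthally symmetric kT-space functions to bT space."""
    val, _ = quad(lambda k: k * j0(b * k) * f_kT(k), 0.0, kmax, limit=2000)
    return val

m = 1.5                                   # roughly a charm pole mass, GeV
for b in (0.2, 0.5, 1.0):                 # GeV^-1
    num = hankel0(lambda k: 1.0 / (k**2 + m**2), b)
    print(b, num, k0(b * m))              # should agree to per-mille level

# K1 enters via derivatives: d/db K0(b m) = -m K1(b m)
print(-(k0(0.5001 * m) - k0(0.4999 * m)) / 0.0002, m * k1(0.5 * m))
```

The truncation of the oscillatory integral at kmax introduces only a small residual error; the agreement with K 0 (b m) illustrates why the one-loop b T -space coefficients depend on b T m only through these Bessel functions.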
The unpolarized matching coefficient C Q/g has been computed long ago [46], and we agree with the b T -space expression given in that reference as well as with the k T -space result in ref. [47]. The results for the polarization-dependent matching coefficients are new.

Consistency with the light-quark limit

For Λ QCD ≪ m ≪ k T , heavy-quark TMD PDFs can be determined using a two-step matching [47]. First, the TMD operators at the scale µ ∼ k T are matched onto collinear PDFs in a theory with n ℓ + 1 massless quark flavors, which results in the standard massless TMD matching coefficients. In a second step, the n ℓ + 1-flavor PDFs are matched onto those in a theory with n ℓ flavors at the scale µ ∼ m. At fixed order, this implies the following consistency relation for the unpolarized and linearly polarized massive TMD matching coefficients, where M j λ /k λ denotes the PDF matching function, the sum runs over all light degrees of freedom, and the subscript λ = ∅, ∥ again labels the polarization of the heavy quark and the light partons j and k. Perturbatively expanding the matching functions, these relations simplify for our dimensionless O(α s ) coefficient functions in b T space to eq. (3.22), where the µ dependence has to cancel within the matching coefficient. For the unpolarized case, this relation has previously been verified in refs. [46,47]. At NLO, the polarized PDF matching function relevant for our case is given in ref. [106]. The massless matching coefficient for the quark helicity TMD PDF onto the collinear gluon helicity PDF was calculated in ref. [100], see eq. (3.24). Combining these ingredients, it is straightforward to see that our result in eq. (3.19) indeed satisfies eq. (3.22). By contrast, the worm-gear L matching coefficient is suppressed by one power of the mass, see eq. (3.14), and therefore cannot be reproduced by a leading-power PDF matching at the scale µ ∼ m. Interestingly, it contains a logarithm of mb T at subleading power, see eq. (3.25). Subleading-power logarithms have been studied in refs. [107][108][109][110][111], and it would be interesting to understand whether the logarithm in eq. (3.25) might be amenable to similar techniques.

Numerical results for TMD PDFs

For numerics, we evaluate eq. (3.12) at the boundary scales µ 0 ∼ √ ζ 0 ∼ 1/b T given in and below eq. (2.61), perform the TMD evolution back to µ = √ ζ = Q as described around eq. (2.62), and finally take a numerical Fourier transform as in eq. (3.5). E.g., we have the corresponding expression for the evolved unpolarized heavy-quark TMD PDF, and similarly for the other cases. For the input collinear gluon PDFs we use the NNPDF31_nnlo_as_0118 unpolarized proton PDF set [112] together with the NNPDFpol11_100 set for the polarized case [113]. Our input values for the strong coupling and the quark pole masses were given in section 2.6. In figure 5, we show our numerical results for the heavy-quark TMD PDFs for producing a charm or bottom quark from a longitudinally polarized proton as a function of k T and x, respectively. The bottom-quark TMD PDFs have a wider peak in k T compared to the charm because of the larger mass, as can be understood from the fact that the expressions in eq. (3.19) only depend on mb T up to RG effects. Note also that the worm-gear L function (after including a Jacobian 2πk T ) is quadratic in the small-k T region with a coefficient proportional to 1/m 3 , whereas the unpolarized and helicity TMD PDFs are linear in k T .
As this approximation is valid to higher k T in the case of the bottom quark than that of the charm, the bottom-quark TMD PDF has a numerically smaller value over a wide range. As x decreases, the unpolarized heavy-quark TMD PDF rises much more rapidly than the polarized ones, as expected from the smaller gluon polarization fraction at smaller x. We point out that the unpolarized TMD PDF changes sign at very high x ≥ 0.6, indicating a need for resumming subleading-power threshold logarithms of 1 − x using e.g. the tools of ref. [114].

Accessing heavy-quark TMDs in e + e − collisions

In e + e − collisions, TMD fragmentation functions may be accessed from double-inclusive measurements with two identified hadrons, e + e − → H a H b X. For instance, the six-fold differential cross section for this process in the TMD limit P a,T , M a,b ≪ Q is given by eq. (4.1) [115,116], where cos θ and ϕ are the spherical coordinates of hadron H b with respect to the incoming beams in the center-of-mass frame, z a and z b are the lightcone momentum fractions of the two hadrons, and ⃗ P a,T is the transverse momentum of hadron H a . On the right-hand side, α em is the fine-structure constant, Q is the center-of-mass energy of the collision, y = (1 + cos θ)/2, and ϕ 0 is the azimuthal angle of ⃗ P a,T measured relative to the plane spanned by H b and the beams. The hadronic structure functions factorize into TMD FFs, eq. (4.2), where F ee denotes a weighted sum over flavors and a convolution of two TMD FFs (i.e., a product in b T space) at total partonic transverse momentum q T = P a,T /z a , and the hard function describing the pair production of quarks is given in eq. (4.4). Here we have kept the contribution from Z boson exchange and Z-photon interference, as relevant for measurements on the Z pole, where P Z (Q 2 ) = Q 2 /(Q 2 − m 2 Z + iΓ Z m Z ) is the reduced Z propagator and e f (v f , a f ) are the electromagnetic charge (vector, axial coupling to the Z) of a fermion f . We may assume that the experimental measurement involves an integral over symmetric ranges in cos θ such that the forward-backward asymmetry and an associated odd Collins effect in eq. (4.1) drop out. Crucially, the TMD factorization theorems in eqs. (4.1) and (4.2) only assume that the hard scale Q ∼ z a Q ∼ z b Q is large compared to all other scales, i.e., all masses and transverse momenta, and therefore hold for both light-quark and heavy-quark fragmentation at z a,b ∼ 1 without modification. In particular, the heavy quarks are approximately massless at the scale µ ∼ Q at which they are produced, and their polarization states are thus fully entangled. The hard function in eq. (4.4) could be modified to account for the effect of perturbative spin flips, but this amounts to retaining power corrections in m/Q further suppressed by powers of α s . Importantly, this means that a characteristic cos(2ϕ 0 ) modulation (the Collins effect) is present both for light and for heavy quarks at leading power and at tree level. As is commonly done for light quarks, the Collins effect strength can be accessed by taking suitable ratios of weighted cross sections, which we here take to be integrated over z a and z b as likely relevant for an initial study of the heavy-quark Collins effect. In figure 6 we show the predicted e + e − → D D̄ X or B B̄ X cross sections as a function of hadron transverse momentum P a,T , and the Collins effect strength R cos(2ϕ 0 ) as a function of q T .
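To make the Q dependence of the hard function concrete, the sketch below (ours) evaluates the reduced Z propagator P Z (Q 2 ) = Q 2 /(Q 2 − m 2 Z + iΓ Z m Z ) quoted above at a typical B-factory continuum energy and on the Z pole; the mass and width values are standard external inputs, not numbers taken from this paper. It shows that Z exchange is a percent-level correction at B factories but dominates on the pole:

```python
mZ, GammaZ = 91.1876, 2.4952      # GeV; standard PDG-like inputs (our numbers)

def PZ(Q):
    """Reduced Z propagator P_Z(Q^2) = Q^2 / (Q^2 - mZ^2 + i GammaZ mZ)."""
    Q2 = Q * Q
    return Q2 / complex(Q2 - mZ * mZ, GammaZ * mZ)

for Q in (10.58, mZ):             # continuum B factory vs. Z pole
    p = PZ(Q)
    print(f"Q = {Q:6.2f} GeV:  |P_Z| = {abs(p):7.4f}")
```

At Q = 10.58 GeV one finds |P Z | ≈ 0.014, so photon exchange dominates there, whereas on the pole |P Z | ≈ 37 and the Z terms in eq. (4.4) take over.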
The universality for charm and bottom quarks follows along the same lines as for figure 4, and holds as long as the center-of-mass energy is sufficient to produce the quark-antiquark pair in a boosted state. This is the case for charm mesons at typical continuum center-of-mass energies at existing B factories, and for both charm and bottom mesons at higher values of Q such as at the Z pole. The Collins effect is smaller at higher center-of-mass energies because χ ⊥ 1,H is linearly suppressed in b T compared to the unpolarized fragmentation factor, which means it predominantly contributes at larger values of b T , where the Sudakov suppression at higher energies tends to be stronger. We show the results of varying κ H (κ ⊥ H ) for the unpolarized (Collins) TMD FF, and illustrate the variation of λ H⊥ by the yellow band, exactly as in figure 4. Note that the information about the absolute sign of the Collins function is lost in e + e − collisions, i.e., for two charge-conjugate hadrons we end up with a positive effect strength for any value of λ H⊥ = λ H̄⊥ since the effect is proportional to the square of the Collins function. One may nevertheless extract the relative factor between e.g. the D and D * Collins functions, which heavy-quark spin symmetry predicts to be exactly minus one, see eq. (2.57), by measuring the Collins effect separately for e + e − → D D̄ X and e + e − → D * D̄ X. Explicitly, our prediction from heavy-quark spin symmetry is that these two effect strengths are equal in magnitude and opposite in sign. We point out that for generic O(Λ QCD ) model parameters, the Collins effect strength reaches the several-percent level for continuum open charm production at existing B factories, in line with our expectation of an effect strength that is comparable to the light-quark case, making a future dedicated measurement (or search) appear very feasible.

Comment on claims regarding a mass suppression of the Collins effect

In e + e − collisions, the "intrinsic" heavy-quark Collins effect we analyzed above has been disregarded so far. Note that this effect is in general distinct from the large background contribution of D D̄ weak decays to e.g. a measurement of the Collins effect on a KK sample. This contribution is indeed considered in experimental analyses [41,42,44,45] and subtracted as a background using Monte Carlo simulations and heavy-quark enriched samples, but cannot be immediately interpreted as a sign of a (nonperturbative) Collins effect, since the progenitor D D̄ pair in this case is not constrained to be near the back-to-back limit by the measurement, meaning that e.g. perturbative gluon emissions can also induce azimuthal correlations on the D D̄ pair and thus their weak decay products. Ref. [44] mentions that it would be possible to look for the intrinsic heavy-quark Collins effect with some further improvements to their analysis, but also incorrectly expects that the Collins effect should be parametrically suppressed by the mass of heavy quarks. The argument sketched in that reference (see the beginning of their section IV) is that helicity flips should wash out the spin correlation between the heavy quark and the antiquark. This is not the case, as we have argued above: The quarks are approximately massless at the scale µ ∼ Q at which they are produced, and thus are produced with fully entangled spin states, such that there is no suppression by the mass from physics at this scale.
Similarly, in our detailed analysis of the Collins FF at the scale µ ≤ k T ≤ m, we find no suppression of the effect by the mass, and the Collins effect in particular is fully allowed by heavy-quark symmetry when accounting for the presence of lightlike Wilson lines. Note that this is not contradictory to the fact that we do find a suppression of the Collins effect by Λ QCD /k T at large k T , since this suppression is exactly commensurate with the twist suppression of the two Collins functions in the light-quark case, which has been mapped out extensively [41][42][43][44][45]. We conclude that the prospects for a measurement of the intrinsic, nonperturbative heavy-quark Collins effect at B factories are even better than anticipated in ref. [44].

Accessing heavy-quark TMDs at the future EIC

TMD fragmentation functions may also be accessed from single-inclusive measurements with one identified hadron in electron-nucleon collisions, e − (ℓ) + N (P ) → e − (ℓ ′ ) + H(P H ) + X, where the scattering is mediated by an off-shell photon with momentum q = ℓ − ℓ ′ (and Q 2 ≡ −q 2 > 0). The fully differential cross section for this process in the TMD regime reads as given in [3,66,93,117,118]. On the left-hand side, x = Q 2 /(2P · q), y = (P · q)/(P · ℓ), z H = (P · P H )/(P · q), and ⃗ P H,T is the outgoing hadron's transverse momentum relative to ⃗ q in the Breit frame. On the right-hand side, the remaining kinematic factors follow ref. [93]. The beam polarization information is encoded in the lepton beam helicity λ e and the covariant nucleon spin vector S µ = (0, S T cos ϕ S , S T sin ϕ S , −S L ) as decomposed in the Trento frame. We have dropped terms proportional to S T , which cannot be populated by leading-power heavy-quark TMD PDFs, see section 3. We have also dropped terms proportional to the Boer-Mulders function, whose twist-2 matching in the heavy-quark case is suppressed by at least one additional power of α s . The hadronic structure functions factorize in terms of one TMD PDF and one TMD FF each, eq. (4.9), where the convolution in transverse momentum may be written in position space as in eq. (4.10) [67], and the hard function for scattering a quark off a virtual photon is given in eq. (4.11).

[Table 1: Total cross sections in picobarn for producing charm (left two columns) or bottom-quark hadrons (right two columns) in the TMD region at the future 18 × 275 GeV 2 EIC for different cuts on x > x min , Q > Q cut , q T = P H,T /z < q cut T . See the text for further details on the acceptance cuts we consider.]

As for e + e − collisions, the TMD factorization theorems in eq. (4.9) only assume that the hard scale Q ∼ zQ is large compared to all low scales, and thus hold for both light and heavy hadron production without modification. Again, the heavy quark is approximately massless at the hard scale such that helicity is conserved during the hard scattering. This means that while the production mechanisms for longitudinally or transversely polarized heavy quarks from an incoming nucleon are different from light quarks (and are fully perturbative), the way they imprint on the distribution of final-state hadrons is the same, leaving nonzero spin asymmetries, eq. (4.12). In particular, the sin(2ϕ H ) modulation induced by a nucleon beam polarization flip gives direct access to the heavy-quark Collins function including its sign, which is not accessible in e + e − collisions. To assess the statistical power of the future EIC to constrain charm and bottom quark TMD dynamics, we first estimate the expected sample size of heavy hadrons in electron-proton collisions.
To do so, we consider the total cross section for producing a heavy quark in the TMD region summed over beam polarizations, eq. (4.13), where Θ DIS (x, y) denotes DIS acceptance cuts, given in eq. (4.14) by x > x min and 0.01 < y < 0.95. We consider the EIC at beam energies E e = 18 GeV and E N = 275 GeV. Any experimental cuts on z H > z cut and the additional prefactor of z 3 H in eq. (4.13) are irrelevant at our working order because the heavy quark carries all the energy in all regimes, i.e., z H = 1 either at leading power in the heavy-quark expansion or at the leading perturbative order, see the comments below eq. (2.62). For this estimate we set κ H = 0 in the unpolarized heavy-quark TMD FF, since the total integral of the cross section up to q cut T ≫ Λ QCD is independent of it up to corrections of O(Λ 2 QCD /q cut T ) [75], and sum over all heavy hadrons containing the heavy quark, exploiting Σ H χ H = 1. This means that the total rate at which heavy quarks are produced is predicted fully perturbatively, as expected. Our results for the expected total charm and bottom-quark TMD cross sections are given in table 1 for Q cut = 4 GeV and Q cut = 10 GeV, where higher Q cut allows for mapping out the TMD region to higher q T before encountering power corrections, but at the cost of much lower rates. (We have also adjusted q cut T accordingly in each case.) Scaled to an integrated luminosity of 10 fb −1 , we expect a total charm quark sample of 35 × 10 3 in the TMD region for the loose cut on Q and in the region x > 0.1 where polarization effects are expected to be most pronounced, see figure 5, and where a measurement of the sin(2ϕ H ) asymmetry is the most promising. This suggests that even with this limited integrated luminosity, percent-level asymmetries should be statistically resolvable. In figure 7 we show the results for the unpolarized SIDIS cross section with a D (B) meson in the final state, and for the two spin asymmetries defined in eq. (4.12). Note that the effect of different κ H in the unpolarized TMD fragmentation function is negligible in the cross section and the A LL asymmetry, which as expected are dominated by perturbative physics. The A LL asymmetry is very sizable at ∼ 30% at the chosen value of x = 0.2. On the other hand, the A UL asymmetry is substantially smaller (1 − 2%) for the generic O(Λ QCD ) parameters we picked here due to the smaller value of both h ⊥ 1L compared to g 1L and H ⊥ 1 compared to D 1 in most of the contributing TMD region, see figures 4 and 5 and the surrounding discussion. The numerically smaller value of h ⊥ 1L for bottom quarks discussed around figure 5 is likewise reflected in the size of the asymmetry for bottom compared to charm quarks. We emphasize that a measurement of A UL , compared to the Collins effect in e + e − collisions, has the unique benefit of accessing the absolute sign of the heavy-quark Collins function. Resolving this sign should be well possible within the expected statistics at the future EIC. While we leave the study of systematic effects (such as luminosity uncertainties) to future work, we note that the requirements that the established heavy-flavor/gluon distribution program of the EIC places on instrumentation have already been analyzed in depth in ref. [27]. Among these requirements are secondary vertex reconstruction capabilities and the momentum resolution on soft pions from D decays, all of which will also benefit the kind of differential measurements of semi-inclusive heavy-quark fragmentation that we propose here.
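The sample-size estimate quoted above is simple arithmetic, N = σ × L. A one-line check (ours), using an illustrative cross section of 3.5 pb consistent with the quoted 35 × 10 3 events at 10 fb −1 ; the actual cut-dependent cross sections are the ones in table 1:

```python
sigma_pb = 3.5                 # assumed TMD-region charm cross section in pb (illustrative)
lumi_fb = 10.0                 # integrated luminosity in fb^-1
N = sigma_pb * lumi_fb * 1e3   # 1 pb x 1 fb^-1 = 10^3 events
print(f"expected events: {N:.0f}")   # -> 35000, i.e. 35 x 10^3
```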
Summary of main results and outlook

In this paper, we have studied the transverse momentum-dependent (TMD) dynamics of bottom or charm quarks with mass m ≡ m c , m b ≫ Λ QCD fragmenting into heavy hadrons for the first time. We considered two parametric regimes for the transverse momentum k T : (a) Λ QCD ≲ k T ≪ m, where the hadron transverse momentum k T is determined by nonperturbative soft radiation into the final state, and (b) Λ QCD ≪ m ≲ k T , where k T is set by perturbative emissions. We assumed throughout that the heavy quark is produced at a hard scale Q ≫ m, k T , i.e., it is boosted in the frame of the hard scattering, such that standard TMD factorization applies at the scale Q and only the low-energy TMD matrix elements are modified by the heavy-quark dynamics. In both regimes, the dynamics at scales below the heavy-quark mass are constrained by heavy-quark symmetry and encoded in novel low-energy matrix elements in boosted Heavy-Quark Effective Theory (bHQET):

• We showed that in regime (a), the unpolarized and Collins TMD fragmentation functions (FF) match onto new, universal nonperturbative bHQET matrix elements χ 1,H (k T ) and χ ⊥ 1,H (k T ), which we dubbed TMD fragmentation factors.

• In regime (b), we made use of the twist expansion for light-quark TMD FFs and combined it with the matching of collinear FFs onto bHQET to identify the relevant leading or subleading bHQET matrix elements.

• An important new ingredient in this analysis is the unpolarized partonic heavy-quark TMD FF d 1 Q/Q , a perturbative Wilson coefficient that appears in our analysis for the first time and that we expect to appear also in other contexts like flavor-tagged energy-energy correlators in the back-to-back limit.

• We find that the Collins TMD FF scales as Λ QCD /k T at k T ≫ Λ QCD (but is not suppressed by the quark mass), and we identified the coefficient as a new subleading bHQET matrix element probing gluon correlations within the fragmentation process.

• We used heavy-quark spin symmetry to express the TMD fragmentation factors in terms of the underlying spin density matrix of the light hadron constituents.

• For the unpolarized TMD FF D 1 H/Q (z H , k T , µ, ζ), this allowed us to prove relations between renormalized TMD FFs within spin symmetry multiplets; these relations are a powerful generalization of known results for inclusive heavy-quark fragmentation, demonstrating for the first time that they hold point by point in transverse momentum.

• We showed that the Collins function arises from correlations between hadronic radiation into the final state and the transverse polarization of the light constituents, which in turn is correlated with the heavy-quark spin by the experimental reconstruction of e.g. D vs. D * mesons.
This new picture of spin correlations in the heavy-quark limit allowed us to prove a novel sum rule for the renormalized heavy-quark Collins function, eq. (2.57), which holds up to corrections of O(Λ QCD /m) and up to radiative corrections in α s at the scale µ ∼ m, and likely beyond. To extend our analysis to the possible phenomenology at the future Electron-Ion Collider (EIC), we also considered the production of polarized heavy quarks from a polarized nucleon, which is encoded in all-order matching relations between heavy-quark TMD PDFs and twist-2 collinear light-parton PDFs:

• We find that terms proportional to the transverse nucleon polarization vanish for heavy quarks at twist-2 to all orders in the strong coupling due to chirality and flavor conservation in the light-quark sector.

• In contrast to the light-quark case, transverse quark polarization states are populated from unpolarized and longitudinally polarized nucleons because the quark mass breaks chirality.

• We find nontrivial matching coefficients at O(α s ) for the heavy-quark worm-gear L and helicity TMD PDFs onto the gluon helicity collinear PDF, both of which we computed explicitly for the first time. We anticipate that the heavy-quark Boer-Mulders function will receive a contribution from the twist-2 collinear gluon PDF starting at O(α 2 s ), where it becomes allowed by time-reversal invariance.

Combining the standard TMD factorization theorems for e + e − to hadrons and SIDIS with simple numerical models for the new nonperturbative functions we identified, we provided predictions for unpolarized heavy-quark TMD cross sections, the Collins effect strength for heavy quarks at e + e − colliders (and in particular for cc̄ continuum production at current B factories), as well as for the relevant spin asymmetries at the future EIC:

• We find that a measurement of the intrinsic heavy-quark Collins effect is well within reach of existing B factories, and is motivated by the rich nonperturbative structure of the heavy-quark Collins function that our analysis revealed.

• The fact that transversely polarized heavy quarks are produced from longitudinally polarized nucleons at a significant rate, as encoded in the worm-gear L matching coefficient, in addition provides a clean avenue for probing the heavy-quark Collins functions in heavy-quark SIDIS at the future EIC, including its absolute sign.

The theoretical framework we developed in this paper paves the way for many promising future applications:

• While we only considered the case of unpolarized heavy hadrons in this work, an immediate next application of our framework is polarized vector mesons or baryons containing heavy quarks. This gives access to a larger set of transverse-momentum dependent polarized fragmentation functions [115,119,120] which in the heavy-quark case resolve the light spin density matrix in even greater detail and obey additional sum rules.

• Another promising prospect is to consider heavy-quark TMD fragmentation within jets, which makes its rich physics accessible in hadron collisions. This extension is in fact straightforward because our results for the heavy-quark TMD FFs hold independent of the factorization theorem they appear in. This makes it possible to insert them into the hadron-in-jet frameworks of refs. [121,122] in a plug-and-play fashion as long as Q ∼ p jet T R ≫ m, k T .
Yet another possibility, which could mitigate the effect of nonglobal logarithms that can become nonperturbative in our regime of interest, would be to apply grooming to the jet and study the hadron transverse momentum spectrum with respect to the groomed jet axis [58,123], see also footnote 3. We look forward to the attendant phenomenology, which may in addition serve as a vacuum baseline for TMD interactions of open charm and bottom quarks with the quark-gluon plasma in heavy-ion collisions.

• Other natural extensions are higher-order calculations of the various new partonic matching coefficients we introduced in this paper, which will reduce the perturbative uncertainties on the lowest-order theory predictions we provided here. This will also involve analyzing the renormalon structure and optimizing the choice of quark mass scheme. In addition, one could consider the matching onto subleading bHQET fragmentation matrix elements (for TMD FFs) or onto twist-3 collinear PDFs (for TMD PDFs, extending the work of ref. [124] to the massive case), which would make it possible to interpret phenomenological extractions in terms of higher-point correlation functions. Higher-order resummed predictions for heavy-quark TMD spectra then immediately follow from our factorization results by solving the attendant renormalization group equations. They will serve as powerful, highly differential benchmarks of the heavy-quark physics encoded in present and future parton showers, including their interface with hadronization models, on which our field-theory analysis of the nonperturbative dynamics places rigorous constraints.

In conclusion, our analysis reveals that a wealth of information on the all-order and nonperturbative structure of QCD resides in the transverse momentum dependence of heavy-quark fragmentation. An experimental exploration of this new subfield of TMD physics is in immediate reach of existing B factories and will be an exciting addition to the planned heavy-flavor physics program of the future EIC.
Effectiveness of autolevellers when used for the improvement of periodic faults which may occur in some textile processes

The aim of spinning in textiles is to produce a yarn having a suitable and constant quality with the lowest possible production costs. Today's textile spinning industry, with its high-speed and high-productivity machines supervised by a minimum of operators and other personnel, requires, wherever possible and suitable, production and process control based on some form of technical equipment. On the other hand, process control by sampling is needed where continuous process control is not available, not possible or too costly, e.g., the testing of certain properties of yarns and rovings in order to determine optimum processing conditions or for calibrating selected machines, etc.2

Introduction

Autolevelers, as one of these pieces of technical equipment, used particularly in the finisher drawing passage, are the last opportunity to correct faults which would otherwise degrade and perhaps ruin the quality of a large amount of subsequent unfinished and finished material such as comber sliver, roving and yarn. However, it is shown in paper 1 that, in case of incorrect settings, these devices can be useless and even disruptive for the purpose of the improvement or correction of faults. In this work, the effectiveness of the improvement of periodic faults by an autoleveler is examined, and this effectiveness is then compared with the effectiveness of the improvement of non-periodic faults having the same values of their corresponding parameters. The findings show that, in case of an incorrect setting of the action point, the use of autoleveler equipment is unprofitable and even disruptive, as it is in the non-periodic fault case. Most importantly, the effectiveness of the correction of periodic faults is lower than that of non-periodic faults at the same degree of incorrect setting of the action point.

Yarn spinning process and importance of draw frame

Yarn spinning in textiles is, in short, the joining of short fibers by drawing them from a loose fibrous mass and twisting them together. Although the yarn spinning process differs depending on the spinning system, it primarily involves the opening, cleaning, blending, carding, drawing, pre-spinning (roving), spinning, and cone winding processing stages. Spun yarns, those "composed of staple fibers held together by some binding mechanism," are of three types: ring-spun, open-end, and air-jet.3 In the modern spinning process, the draw frame has the important function of evening the slivers. However, the evenness of the slivers is essentially affected by the quality of the draft at the draw frame. There are two major causes that exert a considerable influence on sliver and yarn evenness.
Firstly, the position of the draw frames in the spinning mill, which is definitively the last compensation point for correcting the faults in the slivers. Secondly, defects produced at the draw frame itself can cause significant disturbances and quality-related problems in the further process. Material faults (e.g. short fiber content) and machine faults (e.g. improper draft zone settings) during drafting cause periodic and non-periodic variations (thick or thin places) in the sliver, which create problems during subsequent processing.4

Periodic faults and spectrogram

Periodic faults do not arise only at the drawing stage, as mentioned before. This type of fault may also arise in the blow room, carding, roving and spinning processes due to various machinery and drafting causes. The lengths of the periods can be determined on the spectrogram. Spectrograms are a type of graph obtained from the spectrograph equipment used in spinning mill laboratories for quality control and improvement purposes. Two spectrogram examples are shown in Figures 1 and 2. In any spectrogram, if the height of the peak (P), which is called a chimney, above the basic spectrogram at any wavelength equals or exceeds 50 percent of the height of the basic spectrum at that wavelength, this indicates a sufficiently serious periodic fault requiring corrective action to be taken immediately (a minimal numerical version of this test is sketched at the end of this section). There are mainly two types of spectrograms: chimney-type and hill-type spectrograms, as shown in Figures 1 and 2 respectively. A chimney-type spectrogram, consisting of one or more "peaks" or "chimneys", as shown in Figure 1, is normally due to a mechanical fault such as an eccentric roller or gear, or missing or broken teeth in gear wheels. The spectrogram shown in Figure 1 indicates two periodic faults having 8 cm and 7 m wavelengths. A hill-type spectrogram, on the other hand, where several adjacent peaks are noticed, is normally due to drafting waves caused by factors such as improper settings in the drafting zone, improper pressure applied by the top rollers, too many short fibres in the material, etc. The wavelength of this type of fault varies over a range. Whenever a mechanical fault occurs, it results in a shooting up of a particular channel in the spectrogram. However, not all faults result in a deterioration of fabric quality. This is because the extent of the influence of a periodic mass variation on the fabric quality depends not only on the amplitude of the spectrogram peak but also on the width and type of the woven fabric, the type of fibre, the yarn count, etc.5

Autoleveling and correction of faults

An autoleveler is described in paper 1 as "An additional device which is meant for correcting the linear density variations in the delivered sliver by changing either the main draft or break draft of the drafting system, according to the feed variation. There are two types of autoleveling systems. These are closed system and open-loop system. Most of the draw frame autolevelers are open-loop autolevelers. In open-loop autolevelers, sensing is done at the feeding end and the correction is done by changing either a break draft or main draft of the drafting system. In closed-loop system, sensing is at the delivery side and correction is done by changing either a break draft or main draft of the drafting system". The distance that separates the scanning rollers and the point of action is called the zero point of regulation, or LAP. This leads to the calculated correction being applied to the corresponding defective material.
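The 50 percent criterion stated above translates directly into a simple numerical test. A minimal sketch (ours; the array values are purely illustrative) that flags wavelength channels where the chimney height P equals or exceeds half the basic-spectrum height:

```python
import numpy as np

def chimney_channels(wavelengths, spectrum, basic, threshold=0.5):
    """Return wavelengths where the spectrogram exceeds the basic
    spectrum by >= 50% of the basic-spectrum height (P >= 0.5 * basic),
    i.e. the criterion for a periodic fault needing corrective action."""
    peak = np.asarray(spectrum) - np.asarray(basic)   # chimney height P
    mask = peak >= threshold * np.asarray(basic)
    return np.asarray(wavelengths)[mask]

# Illustrative data: a chimney at 8 cm on top of a smooth basic spectrum
wl = np.array([2.0, 4.0, 8.0, 16.0, 32.0])            # wavelength, cm
basic = np.array([1.0, 1.2, 1.3, 1.2, 1.0])           # arbitrary units
spec = basic + np.array([0.05, 0.1, 0.9, 0.1, 0.0])   # fault at 8 cm
print(chimney_channels(wl, spec, basic))              # -> [8.]
```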
Moreover, in the case of a change in fiber material, the machine settings and process-controlling parameters, such as production speed, material, break draft setting, main draft setting, feeding tension, and the setting of the sliver guiding rollers, as well as the LAP, need to be altered.4

Materials and methods

As expressed above, the correct determination of the LAP is a very important, even vital, issue for obtaining the maximum benefit from the autoleveler. In case of an incorrect LAP setting, the effectiveness of regulation decreases depending on how early or late the action time is set. However, due at least to the inevitable inertia of autoleveler components such as the servomotor, differential and speed sensor, and to other machinery-related reasons, it is in any case not possible to obtain a completely corrected, purely regular sliver at the end of the autoleveling process. In the previous work,1 the effectiveness of the correction of a sliver having non-periodic random (exponentially distributed) faults in case of an incorrect dead time setting (or, stated alternatively, LAP setting) was analysed. In the present work, the analysis is taken a step further, and the effectiveness of the correction of an uneven sliver having periodic faults in case of an incorrect LAP setting is examined. The simulation approach is also used in this work, considering the basic assumptions and the algorithm given in that paper. The faults considered in the work are chimney-type periodic faults, which generally occur due to machinery faults or other causes such as pressure marks on the top rollers. The wavelengths of this type of periodic fault are constant and more marked than those of drafting periodic faults. In Figures 6a and 6b, "a" and "b" represent the length and period of a periodic thick-place fault. The fault length "a" depends on the dimension of the defective machine part. The distance between faults "b" corresponds to the circumference of the roller, e.g. of the front roller of a drafting system. For example, an eccentric front roller of a draw frame leads to a periodic fault with a wavelength of 80 mm, and this value is used in the work, as this roller always causes faulty drafts in the draw-box within the same time intervals. On the other hand, D1 and D2, shown in Figures 6a and 6b, represent the diameters of the normal and thick places of a periodic fault, respectively. Figures 6a and 6b describe the mechanism of correction by the autoleveler in case of an incorrect setting of the LAP. As can be seen from Figure 6a, if the command given by the sensor is delayed by L mm, the form of the corrected uneven sliver will be as shown in Figure 6b. The graphics shown in Figures 7-24 have been obtained by means of the computer simulation program whose algorithm is given in the paper mentioned before, considering simulated sliver lengths of 135 m and 165 m depending on the parameters "a" and "b" considered and indicated in the related figures. As seen from Figure 6b, at the beginning of the correction process a sliver length of L mm leaves the drafting zone uncorrected, with a diameter of D2 mm, due to the delayed implementation of the command, as described in Figure 5. Later, the diameters of the autoleveled sliver change in the pattern D1, D3, D1, D2, D1, D3, D1, and D2 mm, respectively. Here, D3 can be calculated, as shown in paper 1, as D3 = D1²/D2. In this work, D1 = 24 mm and D2 = 32 mm are also taken in the model as the diameters of the normal and thick places of the simplified periodic faults.
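The correction mechanism just described can be reproduced in a few lines. The sketch below is our illustration of the simulation idea, not the paper's actual program: it builds a sliver diameter profile with thick places of length a every b mm, applies an open-loop correction whose action point is delayed by L mm (producing the D1, D3, D1, D2, ... pattern with D3 = D1²/D2), and quantifies the residual irregularity by the CV% of the diameter sequence sampled in 1 mm steps (a simplification of the CV% for 10 mm used in the paper):

```python
import numpy as np

D1, D2 = 24.0, 32.0        # mm: normal / thick-place diameters, as in the text
D3 = D1**2 / D2            # over-corrected diameter, D3 = D1^2 / D2 = 18 mm

def sliver(a, b, n):
    """Diameter profile (1 mm steps) with thick places of length a every b mm."""
    d = np.full(n, D1)
    for s in range(0, n, b):
        d[s:s + a] = D2
    return d

def corrected(a, b, n, L):
    """Open-loop correction whose action point is delayed by L mm:
    the draft computed for position i is applied at position i + L."""
    d = sliver(a, b, n)
    draft = D1 / np.roll(d, L)          # delayed command (wrap-around at the ends)
    return d * draft                    # D1 where timing matches; D2/D3 pattern otherwise

def cv_percent(d):
    return 100.0 * d.std() / d.mean()

a, b, n = 20, 80, 16000                 # fault 20 mm long, 80 mm period (the text's wavelength)
for L in (0, 5, 10, 20):                # delay of the action point in mm
    print(f"L={L:3d} mm  CV={cv_percent(corrected(a, b, n, L)):6.2f}%")
```

With L = 0 the corrected sliver is uniform (CV = 0), and the CV% grows with the delay L, which is the qualitative behavior reported in Figures 7-24.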
In the simulation program, various lengths of periodic faults "a", various distances between periodic faults "b", and various deviations from the optimal LAP are considered, as shown in Figures 7-24. The effectiveness of correction is measured as the decrease in the percent CV for 10 mm. In this work, the effectiveness of the improvement of non-periodic random (exponentially distributed) faults, whose average length is denoted by 1/λ1 and whose average distance between faults is denoted by 1/λ2 correspondingly, is also examined.

Conclusion

Figures 7-24 show the effectiveness of the regulation of both periodic and non-periodic faults by the autoleveler in case of an incorrect setting of the LAP. It is noticed from all of Figures 7-24 that the irregularity CV for 10 mm is nearly the same for uncorrected slivers having either periodic or non-periodic faults. Secondly, in case of an incorrect setting of the LAP, the effectiveness of the correction of an uneven sliver having either periodic or non-periodic faults decreases depending on the size of and distance between the faults, and beyond a certain point the irregularity of the autoleveled sliver increases markedly. Thirdly, for the same degree of incorrect setting and the same fault size and distance, the effectiveness of the correction of an uneven sliver having periodic faults is lower compared to that of one having non-periodic faults. However, periodic faults have an advantage compared to non-periodic random faults in that the sources of periodic faults can be found and eliminated more easily by means of spectrogram analysis. Therefore, the degree of benefit achieved by autoleveling equipment will be proportional to the degree of correctness of the LAP setting; in case of an improper setting of the LAP, or when the autoleveling equipment is malfunctioning, it would be better to run the draw frame without the autoleveler, especially when the fault distance and length decrease.

Figure captions (Figures 7-24, condensed): the odd-numbered figures show the effectiveness of regulation depending on the LAP setting, the distance between faults "b" and the length of fault "a" (periodic faults); the even-numbered figures show the effectiveness of regulation depending on the LAP setting, the average distance between faults 1/λ1 and the average length of fault 1/λ2 (non-periodic faults).
Figure 21 Effectiveness of regulation depending on LAP setting, distance between faults "b" and length of fault "a". Effectiveness of autolevellers when they used for improvement of periodic faults which may be occurred in some textile processes 252 Figure 23 Effectiveness of regulation depending on LAP setting, distance between faults "b" and length of fault "a". Figure 24 Effectiveness of regulation depending on LAP setting, distance between faults in average 1/λ 1 , and length of fault in average 1/λ 2 .
2019-04-16T13:25:58.525Z
2017-05-29T00:00:00.000
{ "year": 2017, "sha1": "76553de58c671b18f6721cc3ccbb1ecf4daa75c7", "oa_license": "CCBYNC", "oa_url": "https://medcraveonline.com/JTEFT/JTEFT-01-00040.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "b7a3f1456ff0ec480fb0b8a2306675c62b502e6f", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Engineering" ] }
118330138
pes2o/s2orc
v3-fos-license
Stochastic metastability by spontaneous localization Nonequilibrium, quasi-stationary states of a one-dimensional "hard" φ⁴ deterministic lattice, initially thermalized to a particular temperature, are investigated when brought into contact with a stochastic thermal bath at lower temperature. For lattice initial temperatures sufficiently higher than those of the bath, energy localization through the formation of nonlinear excitations of the breather type during the cooling process occurs. These breathers keep the nonlinear lattice away from thermal equilibrium for relatively long times. In the course of time some breathers are destroyed by fluctuations, thus allowing the lattice to reach another nonequilibrium state of lower energy. The number of breathers thus reduces in time; the last remaining breather, however, exhibits an amazingly long life-time, demonstrated by extensive numerical simulations using a quasi-symplectic integration algorithm. For the single-breather states we have calculated the lattice velocity distribution, unveiling non-gaussian features describable in a closed functional form. Moreover, the influence of the coupling constant on the life-time of a single breather has been explored. The latter exhibits power-law behaviour as the coupling constant approaches the anticontinuous limit. Introduction.-The energy relaxation of thermalized deterministic systems in close contact with temperature baths has long been investigated, and several important results have been obtained [1-9]. One of the most important aspects of this problem for nonlinear systems is the non-exponential relaxation behaviour of the energy as a function of time, which has been connected to the formation of spontaneously generated discrete breathers, i.e., spatially localized and time-periodic excitations that appear generically in extended nonlinear lattices [10-13]. The question then arises about the life-times of these entities, which result from cooling of an initially "hot" deterministic system in contact with a thermal bath.
In the present work we investigate the relaxation of energy in a deterministic nonlinear lattice composed of N nearest-neighbour coupled oscillators that is in contact with a stochastic thermal (Langevin) bath. We demonstrate that for large initial temperature differences between this lattice and the bath, the former may not reach thermal equilibrium (eq) with subsequent equipartition of energy between its degrees of freedom but, instead, may end up in a very long-lived metastable state with a relatively small number of breathers concentrating most of the energy. In these non-equilibrium, metastable states, we analyze the total velocity distribution of the lattice and compare it with the Gaussian one present at thermal equilibrium. We further show that the life-time ∆t of a breather presents a power-law dependence on the strength of the coupling constant k between neighbouring oscillators. The slope of this dependence is influenced by the temperature of the bath.

Stochastic equations of motion.-Consider a free-end, one-dimensional nonlinear lattice of oscillators (of mass m equal to unity) without dissipation and external forcing, whose symmetrized Hamiltonian function is given by [1,3]

H = Σ_n [ p_n²/2 + (k/4)((x_{n+1} − x_n)² + (x_n − x_{n−1})²) + V(x_n) ],   (1)

where p_n = ẋ_n is the canonical conjugate momentum of the nth oscillator (the overdot denotes differentiation with respect to the temporal variable), k is the coupling coefficient between nearest-neighbouring oscillators, N is the number of oscillators and V(x_n) = (a/2)x_n² + (b/4)x_n⁴ is the nonlinear on-site potential, with a and b being positive coefficients. The values of a and b are set to unity throughout the paper. The resulting Hamilton's equations of motion (2) describe the dynamics of the displacements of that deterministic system, hereafter referred to simply as "the system". The system is initially thermalized to attain a particular temperature T_0 using the standard Metropolis algorithm. When the thermalization procedure is over, the system is embedded into a stochastic thermal bath (or simply "the bath") of lower temperature, say T_b, by adding N_b stochastic oscillators at each edge of the system. The dynamics of the bath is then described by Langevin equations resulting from Eqs. (2) with the addition of a stochastic and a dissipative term on the right-hand side, in the form

ẍ_n = k(x_{n+1} − 2x_n + x_{n−1}) − a x_n − b x_n³ − γ ẋ_n + √(2γT_b) ξ_n(t),   (3)

where γ is the dissipation coefficient and ξ_n(t) are zero-mean, uncorrelated random Gaussian deviates of unit standard deviation. As usual, Boltzmann's constant k_B has been set to unity. The equations for the system and the bath are integrated for long times with a quasi-symplectic stochastic integrator of second order [14-16]. While for the corresponding linear system thermal equilibrium is reached exponentially fast, the presence of nonlinearity considerably complicates the energy relaxation behaviour. Throughout this work, the temperature of the system and/or the bath, T, is calculated according to the equipartition theorem of the thermodynamic canonical ensemble, from the total average kinetic energy ⟨E_K⟩_eq ≡ (1/2)⟨Σ_n p_n²⟩_eq through the relation T = 2⟨E_K⟩_eq/N.
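A minimal sketch of this setup is given below. It is illustrative only: the integrator is a plain velocity-Verlet step with Euler-Maruyama Langevin kicks on the edge (bath) oscillators rather than the quasi-symplectic second-order scheme of Refs. [14-16], the parameter values (k, γ, T_b, N, N_b) are assumptions, and a random initial state stands in for Metropolis thermalization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters; a = b = 1 as in the paper
N, Nb = 64, 22                 # system oscillators, bath oscillators per edge
k, gamma, Tb = 0.1, 0.1, 0.05  # coupling, dissipation, bath temperature
dt, steps = 0.01, 100_000

Ntot = N + 2 * Nb
x = rng.normal(0.0, 0.5, Ntot)   # stand-in for a Metropolis-thermalized state
p = rng.normal(0.0, 0.5, Ntot)

damped = np.zeros(Ntot, dtype=bool)   # Langevin terms act on edge sites only
damped[:Nb] = True
damped[-Nb:] = True
noise_amp = np.sqrt(2.0 * gamma * Tb * dt)

def force(x):
    """-dH/dx for the phi^4 chain with free ends."""
    xp = np.pad(x, 1, mode="edge")        # free-end boundary conditions
    lap = xp[2:] - 2.0 * x + xp[:-2]      # discrete Laplacian (coupling term)
    return k * lap - x - x**3             # V'(x) = a*x + b*x**3 with a = b = 1

for _ in range(steps):
    # velocity-Verlet step plus Euler-Maruyama Langevin kicks on bath sites,
    # a crude stand-in for the quasi-symplectic scheme of Refs. [14-16]
    p += 0.5 * dt * force(x)
    x += dt * p
    p += 0.5 * dt * force(x)
    p[damped] += -gamma * p[damped] * dt + noise_amp * rng.normal(size=2 * Nb)

T_kin = np.mean(p**2)    # equipartition estimate: T = 2<E_K>/N with k_B = 1
print(f"kinetic temperature ~ {T_kin:.3f} (bath at T_b = {Tb})")
```

Tracking the site energies during such a run is how the staircase relaxation and the breather-induced localization described next can be observed.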
Metastability.-In the presence of nonlinearity, two different regimes are observed: the energy of the system either relaxes to that corresponding to the thermal equilibrium temperature T_b, or it decreases slowly towards thermal equilibrium following a sequence of long-lived, metastable states (with energies higher than those at thermal equilibrium). These metastable states, which are reached when the system initially has a temperature much higher than that of the bath, are due to the formation of nonlinear excitations in the form of discrete breathers. The system initially has a large amount of energy; as it cools down, a number of breathers can form, trapping significant amounts of energy at particular, random lattice sites. These breathers become unstable and disappear in the course of time, leading to a decrease of energy in time that exhibits a staircase pattern. The aforementioned energy decay behaviour is recorded in Fig. 1(a) for three trajectories, each of them corresponding to a different set of initial conditions (i.e., three different thermalizations), while all the other parameters are kept fixed. Indeed, depending on the initial conditions, the system may either be led directly to the thermal equilibrium state (black curve), or it may stay at one of the metastable states (indicated by the formation of horizontal segments of constant energy, i.e., red and orange curves) until it gradually reaches T_b. In Fig. 1(b) we plot the energy density of the system as a function of time for the "orange" trajectory of Fig. 1(a), i.e., the one with the highest energy. It becomes evident that the formation of breathers may also trap an amount of energy between them. This in turn means that the energy decay is attributed both to the reduction of the number of breathers and to the decrease of the energy confined between them. In Figs. 1(a) and 1(b) the former causes the energy decrease at time t = 2 × 10⁵ time units (t.u.), while the latter causes the energy decrease at around t = 5 × 10⁵ t.u., followed by a translation of the breather. For a quantitative statistical description of the metastable states we consider the distribution P(v_N) of the lattice velocity v_N. As is well known, at thermal equilibrium the velocity v_N presents a Gaussian distribution. It is interesting then to explore to what extent this distribution changes in the various local-equilibrium, metastable states. In particular, we study the last metastable state before equilibrium, corresponding to the existence of a single breather; this state can be reached quickly by adding more edge-oscillators to the thermal bath. We therefore choose N_b = 22. Then, considering a set of initial conditions leading to equilibrium of total energy E_eq ≡ E_a = 2.44 (Fig. 2(a)), and five random initial condition sets leading to metastable states of increasing energies up to E_f = 7.49 (Figs. 2(b)-2(f)), we determine the normalized velocity distributions over 10⁸ integration points and present them in log-linear scale. As expected, we observe significant deviations from the Gaussian behaviour when the system is in a metastable state. The Gaussian symmetry breaks, creating new statistics with two symmetric maxima. More precisely, the higher the energy of the metastable state, the more these maxima separate from each other. This is caused by the superposition of two distinct velocity-distribution behaviours: while the 2N_b sites fluctuate around thermal equilibrium (Gaussian distributions), the "breather site" acts as an independent (decoupled) deterministic φ⁴ oscillator, i.e., it has a sharply peaked velocity distribution around the turning-point amplitudes (higher peaks for higher oscillation energy) with negligible values in between, creating the overall picture in Fig. 2. We fit the above distributions with a probability density function of four parameters.
The function f(v_N) captures the deviations from the Gaussian behaviour. The results are presented in Table I. The parameters α and ε vary weakly over the various states, keeping very low values confined to the ranges 0.002 ≤ α ≤ 0.005 and 0.1 ≤ ε ≤ 0.3. The latter exponent shows that f tends to a constant function, so that the distribution P(v_N) presents Gaussian characteristics the more the velocity departs from zero. Conversely, when the velocity approaches zero, the contribution of f to P(v_N) becomes essential, yielding strong deviations from gaussianity. The multiplicative factor δ decreases for higher metastable states, varying in the range 4.47 ≥ δ ≥ 1.34. Of course, at equilibrium (Fig. 2(a)) its value is by default equal to zero. Last but not least, the parameter β presents a clear distinction between equilibrium and metastability, being positive in the former and negative in the latter case. Moreover, the higher the energy of the metastable state, the lower β's algebraic value becomes, making the distribution more sharply peaked. Breather life-time.-We investigate the influence of the coupling constant k on the life-time of a single breather. To this end, we perform a modified but equivalent version of the relaxation procedure described previously. We thermalize the whole lattice at temperature T_b and then, at time t_0, we choose one oscillator at random and provide it with a large amount of extra energy, 300 × E_eq, to ensure that the excitation corresponds to a breather. At the same time, the dissipation coefficient γ for this particular oscillator is set equal to zero. Then the time interval ∆t := t_f − t_0, where t_f is the time of reaching the equipartition energy again, defines the desired life-time of the breather. In Fig. 3(a), a representative example of a breather generated according to the preceding procedure, and its subsequent decay, is recorded. At t_0 = 1000 we insert energy 300 × E_eq into the oscillator at the 25th site, and then let the whole system reach thermal equilibrium again in order to estimate t_f. In Fig. 3(b), the life-time of a breather, ∆t, is plotted with respect to the coupling constant k on a log-log scale for three different temperatures T_b. The final curves determining ∆t for each T_b are obtained after averaging over 100 experiments. As can be seen, for all three temperatures, ∆t exhibits a power-law behaviour as k approaches the anticontinuous limit. For a quantitative description of the numerical data we fit them with a function of the form ∆t(k) = A_i k^(−λ_i) + B_i. The {A_i, B_i} coefficients are determined as A_1 = 138.8 ± 0.05, B_1 = 0.24 ± 0.02, B_2 = 4.52 ± 0.003 and B_3 = 153.7 ± 0.02, and, like the exponents λ_1 = 2.3 ± 0.05, λ_2 = 2.56 ± 0.03 and λ_3 = 2.02 ± 0.03, they present a heat-bath temperature dependence. This power-law dependence, with the slope lying in the range (2, 3), is a sign of the coherence induced locally by the discrete breathers and of self-organization, indicating the existence of correlations between the system variables. Conclusions.-During the energy relaxation process of one-dimensional nonlinear lattices brought into contact with a colder bath of non-zero temperature T_b > 0, the system may stay for very long times in various metastable states. The decay of the energy of the system with respect to time exhibits a staircase pattern, through a sequence of metastable states, that ends at thermal equilibrium.
Considering the metastability of a single-breather state, we have statistically explored the lattice velocity distribution P(v_N), observing non-gaussian behaviours. The deviations from gaussianity (thermal equilibrium) have been captured by assuming a velocity-dependent factor multiplying the v_N² term. In the frame of the one-breather study we have demonstrated that the life-time of the breather presents a power-law dependence on the nearest-neighbour coupling constant k when the latter is close to the anticontinuous limit.
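To illustrate how the exponent λ can be extracted, the following sketch fits synthetic data to the power-law form ∆t(k) = A k^(−λ) + B assumed above; the data, noise level and starting values are invented for illustration and are not the simulation results of the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-ins for the averaged breather life-times of Fig. 3(b)
def lifetime(kc, A, lam, B):
    return A * kc**(-lam) + B

kc = np.logspace(-2.0, -0.5, 12)             # couplings approaching k -> 0
rng = np.random.default_rng(1)
dt_data = lifetime(kc, 138.8, 2.3, 0.24) * rng.normal(1.0, 0.05, kc.size)

popt, _ = curve_fit(lifetime, kc, dt_data, p0=(100.0, 2.0, 1.0))
A, lam, B = popt
print(f"A = {A:.1f}, lambda = {lam:.2f}, B = {B:.2f}")
# On a log-log plot, lam is (asymptotically) the magnitude of the slope,
# cf. the range (2, 3) quoted in the text.
```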
2014-08-26T10:42:56.000Z
2014-08-26T00:00:00.000
{ "year": 2014, "sha1": "ef3192c8191be6474e4191f5f87478f28ef675bf", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ef3192c8191be6474e4191f5f87478f28ef675bf", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
238533405
pes2o/s2orc
v3-fos-license
Health Worker Absenteeism in Selected Health Facilities in Enugu State: Do Internal and External Supervision Matter? Background: Absenteeism is widespread in Nigerian health facilities and is a major barrier to achievement of effective Universal Health Coverage. We have examined the role of internal (by managerial staff within facilities) and external (by managers at a higher level) supervision arrangements on health worker absenteeism. Specifically, we sought to determine whether these forms of supervision have any role to play in reducing health worker absenteeism in health facilities in Enugu State, Nigeria. Methods: We conducted interviews with 412 health workers in urban and rural areas of Enugu State, in South-Eastern Nigeria. We used binary logistic regression to estimate the role of different types of supervision on health worker absenteeism in selected health facilities in Enugu State. Results: Internal supervision arrangements significantly reduce health worker absenteeism (odds ratio = 0.516, p = 0.03). In contrast, existing external supervision arrangements were associated with a small but significant increase in absenteeism (OR = 1.02, p = 0.043). Those reporting a better financial situation were more likely to report being absent (OR = 1.36, p < 0.01), but there was no association with age and marital status of respondents. Our findings also pointed to the potential for alternative forms of supervision, provided in a supportive rather than punitive way, for example by community groups monitoring the activities of health workers while trying to understand what support these workers may need, within or beyond the work environment. Conclusion: The existing system of external supervision of absenteeism in health facilities in Nigeria is not working, but alternatives that take a more holistic approach to the lived experiences of health workers might offer a way forward.

INTRODUCTION

Absenteeism is a major problem in health systems worldwide. For example, it has been linked to the annual loss of 2 weeks of work in Organization for Economic Co-operation and Development (OECD) countries (1). However, the problem seems even greater in low- and middle-income countries, with severe consequences for already weak health systems (2-5). This is especially so in the public primary healthcare facilities on which the poor often depend. Thus, absenteeism is a major barrier to achievement of Universal Health Coverage (UHC). Health worker absenteeism is attracting growing concern amongst service users and policy makers, concerned about the consequences for health outcomes and productivity (6,7). This is especially so in Nigeria, where it is recognized by key stakeholders as the most important manifestation of corruption in the health system because of its widespread nature and its ability to impact service delivery and other health outcomes (8). A Nigerian study of 242 health workers found that 110 had at least one spell of absence in a year (9), while qualitative research finds it to be pervasive (5,10,11). Other research has pointed to a lack of policies, or weak ones, including on supervision, the topic of this paper, as a major contributor (10). The Covid-19 pandemic placed extreme strain on health workers providing health care all over the world. In Nigeria, however, it did not contribute much to health worker absenteeism, as most health workers remained present at work, delivering care to patients while protecting themselves.
This is expected because, by virtue of their profession, they are obliged to be present at their places of work even when their own health is at risk. During this study, most of them were present and worked various shifts to meet health care demands. Nevertheless, personal protective equipment (PPE) was provided to keep health workers safe at all times during the pandemic. Within facilities, absenteeism has profound consequences for everyone involved. Those health workers who are present face extra work; they may have to perform tasks above their level of competence; facilities may depend on volunteers to provide services; and, ultimately, patients are offered low-quality care, if they receive any at all (5,11). As more health workers can be absent from work without facing severe consequences, those who are diligent in their work become increasingly frustrated and may, with time, engage in absenteeism themselves (12). Health workers report being affected by pressures from causes such as ill health, long distances to health facilities, family responsibilities, the leadership style of their superiors, and political connections, among others (13). Financial pressure that obliges workers to keep a second job is also a major reason for absenteeism among health workers. The phenomenon of dual practice, that is, holding two or more jobs concurrently as a means to meet family demands and make up for low salaries, is a key driver of absenteeism (14). For all these reasons, there is a pressing need to understand factors that could reduce absenteeism by health workers. Among these factors, much attention has focused on the quality and nature of supervision, which influences the productivity and quality of care in PHCs more generally (12,15,16). However, what literature exists focuses on comparisons between supportive and abusive supervision (17,18). In the current study we examine the association of absenteeism with supervision of health workers by internal health facility managers and by external supervisors, who often come unannounced. We consider these two dimensions to explore proximity to, and perceptions of, the supervisor (internal vs. external) and how they contribute toward reducing absenteeism. Supervisors support Community Health Extension Workers (CHEWs) by explaining their roles, ensuring they have the supplies needed to perform their duties effectively, and addressing any community and personal problems they encounter (19). While there is a consensus in the literature on health worker absenteeism that improved supervision is needed, evidence on its impact has been inconsistent. One study found that external supervision had mixed influences: some workers (62%) perceived it to be helpful in, amongst other things, improving supplies, identifying expired drugs, and providing on-the-job training, yet other workers (24%) found external supervisors to be uninterested in the problems of the facility, making only infrequent visits (20). Hence, poor supervision may be as ineffective as none (21). Crigler et al. (19) reported how supervision had evolved from being punitive and critical of those being supervised to being facilitative or supportive. However, they also differentiated facility-based supervision and that by district-level supervisors. Mukasa et al. (22), researching the experiences of health workers in Uganda, reported how some supervisors are perceived to be aloof and disconnected from the realities in the health center, providing little feedback (23).
While numerous studies have examined the role of supportive and abusive/punitive supervision, there is a scarcity of studies that examine whether the location of the supervisors influences the commitment of health workers to their jobs. Countries in Sub-Saharan Africa face dire shortages of health workers, so it is important to understand the factors that can motivate health workers to stay at work (22). As supervision features prominently in the literature as a contributor to absence, we ask how its nature contributes to absenteeism of health workers and how this intersects with the financial situation of those health workers.

Study Area
This study was conducted in 10 local government areas in Enugu State, in the Southeastern part of Nigeria. The areas were purposively selected to cover urban, rural, and peri-urban areas. The population of the state is estimated at over 3 million, with 2,235,540 in rural areas and 1,032,297 in urban areas (24).

Study Design and Population
The survey was designed to understand the nature of absenteeism in various health facilities in Enugu State and also the role supervision plays in tackling increasing rates of absenteeism in the facilities. Data were analyzed using a binary logistic regression model. The study population comprises resident doctors, nurses, midwives, and Community Health Extension Workers (CHEWs) in various health facilities across the State. Face-to-face interviews were conducted and at least 2 health workers from each facility were included in the study. In all, 412 respondents from about 125 health facilities in Enugu State, Nigeria, participated in the survey.

Data Collection
A survey instrument was designed to assess absenteeism amongst health workers and their preferences in relation to supervision. The instrument was developed from a draft instrument that was pretested to ascertain the views of health workers about absenteeism and potential remedies. The draft instrument was tested with 30 health workers and, after incorporation of amendments and corrections, a final version was prepared. It was converted into electronic form for use with the Open Data Kit (ODK) on an Android platform. We categorized absenteeism using two questions, one about engaging in absenteeism and one about not engaging in absenteeism. Supervision was assessed using questions about being supervised internally by colleagues of higher rank within the facility (internal supervision) and meeting an external supervisor who comes to check health workers' activities in the facility (external supervision). Approval to undertake the study was provided by the Enugu State Primary Health Development Agency (ESPHDA). The survey was conducted from May to June, 2020. Four researchers participated in the data collection process and were assisted by four research assistants. Heads of the (Health) Department (HODs) in all the local governments were also informed about the study and gave approval after confirming the approval of ESPHDA. HODs also provided comprehensive lists of all health centers in their local government areas, from which a convenience sample of 10 PHCs was selected. Officers-in-charge (OICs) of the selected facilities were also approached with the approvals from the HODs and ESPHDA, which asked them to grant the researchers access to their staff. The survey instruments were interviewer-administered and the researchers recorded the responses on paper and in electronic media.
Before leaving each site, data from both records were cross-checked and discrepancies checked with the health worker concerned. The electronic data were then uploaded to a database. The approach taken, which did involve duplication of data entry, was necessary because of COVID-19 restrictions. Researchers ensured that all safety protocols were adhered to, using facemasks and hand-sanitizers for themselves and respondents and maintaining social distancing.

Data Analysis
The hypotheses were tested using a binary logistic regression model. Odds ratios were estimated to determine the impact of the independent variables on whether respondents reported being absent in the past year. We chose this approach because it performs very well when classes are linearly separable and because it uses robust maximum likelihood estimation, allowing for non-normality that could be present in the data. The absenteeism variable was adopted to ascertain the variables that determine health workers' absence from work separately for the two measures, that is, engaging in absenteeism and not engaging in absenteeism.

Dependent Variable: Absenteeism
We captured absenteeism in the questionnaire by asking whether a health worker engaged in absenteeism (missing either a full or partial day of work over the past year) and whether they did not engage in absenteeism. Respondents who answered "yes" to being engaged in absenteeism were coded "1" and those who responded "no" were coded "0."

Independent Variables
Met External Supervisor. This variable was included in the model to capture the role external supervision plays in regulating absenteeism in the health facilities. It captures the number of times, over a set period of 1 year, that a health worker meets an external supervisor who monitors their work at the facility. This external supervisor could come from the local government headquarters; from within the community (paramount rulers, health facility committee members, youth and women leaders, etc.); from WHO (25); from UNICEF; from non-government organizations; etc.

Performance Supervised Internally. This variable represents internal supervision of health workers by senior/higher-ranking health workers in the facility. Respondents who answered "yes" to being supervised by a senior/higher-ranking health worker were coded "1" and those who responded "no" were coded "0."

Marital Status. This variable represents whether a health worker is married, single, divorced or separated. It was included to examine whether married health workers are more frequently absent due to family commitments. Respondents who answered "single" were coded "0," those who responded "married" were coded "1," those who responded "divorced" were coded "3," and those who responded "separated" were coded "4." During the data analysis we used only a binary indicator, with "married" equal to 1 and all others 0, because only a very limited number of respondents were separated or divorced.

Financial Situation. This variable captures the financial situation of health workers. The hypothesis is that when a health worker's financial situation improves, they tend to be absent from work, engaging in other income-generating activities so as to earn more income. The variable was classified into 5 categories, representing "very poor," "poor," "neither good nor bad," "very good," and "good."
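The estimation step can be sketched as follows in Python; the file name and column names are hypothetical placeholders for the survey data described above, and the specification simply mirrors the variables just listed rather than the authors' exact model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Sketch of the binary logit described above. The file and column names
# are hypothetical; the actual survey data are available on request.
df = pd.read_csv("enugu_survey.csv")

y = df["absenteeism"]                 # 1 = engaged in absenteeism, 0 = did not
X = df[["met_external_supervisor",    # external supervisor visits per year
        "supervised_internally",      # 1 = supervised by senior staff
        "financial_situation",        # ordered categories, 1 (very poor) to 5
        "married",                    # 1 = married, 0 = single
        "age"]]
X = sm.add_constant(X)

model = sm.Logit(y, X).fit()
print(model.summary())
print(np.exp(model.params))   # exponentiated coefficients = odds ratios
```

Exponentiating the fitted coefficients yields odds ratios of the kind reported in the Results: a value below 1 (as for internal supervision) indicates lower odds of absenteeism, a value above 1 higher odds.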
Ethical Considerations
Ethical approval was obtained from the Research Ethics Committee of the University of Nigeria Teaching Hospital (UNTH). Other approvals have been described above. The study was explained to the health workers, who were given written material containing details of confidentiality and anonymity, and they were asked to sign consent forms on paper and in the electronic device.

Table 1 describes the characteristics of respondents. The vast majority were female and within the age group of 41-50 years (40.5%). Most of the health workers were married (79.4%). Just over a quarter (29.6%) considered their financial situation to be relatively poor, and about the same number (29.1%) relatively good. Table 2 shows that 92 health workers reported never engaging in absenteeism within a year, while 320 health workers engaged in absenteeism. Absenteeism was broken down by the number of days a health worker was absent from work within a year: while 92 never engaged in absenteeism, 225 were absent for 10 days or fewer (54.6%), 58 were absent for 11-20 days (14.1%), and 18 (4.4%) and 19 (4.6%) were absent for 21-30 days and for more than 30 days, respectively. Table 3 shows the correlation matrix of the variables of interest. Absenteeism was positively related to financial situation, with a correlation of about 12.79%. While the other variables in the model had a positive correlation with absenteeism, only performance supervised internally had a negative correlation with absenteeism, at about −12.1%. This is also evident in the binary logistic results presented in Table 5, which show a negative and significant relationship with absenteeism. Table 4 presents descriptive statistics of some of the variables of interest. Met external supervisor had 405 responses, with a mean value of 12.1 visits per year and a standard deviation of 19.56. The maximum value of 169 represents the highest number of times a health worker met an external supervisor within a year of working in the facility. Performance supervised internally had a mean value of 0.692, meaning that 69% of respondents reported being supervised internally, and a standard deviation of 0.462. Table 5 presents the binary logistic regression results. Internal supervision (performance supervised internally) has a significant and negative relationship with absenteeism, such that those supervised internally had roughly half the odds (OR = 0.516) of reporting engaging in absenteeism over the past year. In contrast, external supervision was positively related to absenteeism: the more often health workers reported meeting external supervisors, the more likely they were to be absent from work. Table 5 shows a positive and significant relationship between absenteeism and meeting an external supervisor; a unit increase in meetings with an external supervisor was associated with a 2% increase in the odds of reporting absenteeism over the past year (OR = 1.02). There was a positive and statistically significant relationship between a health worker's perceived financial situation and absenteeism: a better financial situation is associated with more absenteeism, with the odds of reporting absenteeism over the past year increasing by a factor of 1.36 (OR = 1.36) with improvement in financial situation. Age and being married were both found not to be statistically associated with absenteeism.

DISCUSSION
We compared two forms of supervision, internal and external.
We found that internal supervision was associated with reduced absenteeism amongst health workers in PHCs. This lends credence to other studies that obtained similar results (18,26). External supervision, in contrast, was found to be associated with slightly increased absenteeism; internal supervision thus seems to reduce health worker absenteeism, while existing external supervision offers no protective effect. It could be that external supervision would play a role if it were not infrequent, compromised, or announced in advance. Although Onwujekwe et al. (5) find external supervision to be important in optimizing health service delivery in PHCs, we found that it had barely any beneficial impact. There are several possible reasons. First, staff found to be absent by the external supervisor may not be punished because they are politically connected, related to someone influential, or able to offer bribes. Second, external supervision was infrequent and announced, so health workers would know when they would be checked and could make sure they were at work. Third, external supervisors, such as health facility committee members, community leaders, and non-government organizations, had little power to enforce sanctions against health workers, notwithstanding the few exceptions recorded in the study. Coincidentally, Onwujekwe and colleagues found that some of the supposed external supervisors, particularly those at the local government headquarters, were absent themselves. Though the political complexities at the local government level seem to have weakened primary healthcare governance, the need to stimulate strong facility leadership could be a favorable starting point. Our analysis shows that as a health worker's financial situation improves, they are more likely to be absent. Other research suggests that this may be because their improved financial situation allows them to open private clinics where they spend much of their time (14). However, our data did not differentiate the different sources of greater income, including higher salaries, so we cannot explore this further. Agwu et al. (11) discovered that local government health workers in Nigeria whose financial conditions are currently discouraging might abandon their responsibilities at the facilities if they can generate more income from their private businesses. To them, it is survival (27). Despite the merits of our study, there are some limitations. We could not capture presenteeism, where those who are present are doing nothing. This was deliberate, as our pilot study showed that respondents either answered in the negative or refused outright to answer. Also, we lacked questions that could address the source of the respondents' improved financial conditions. We therefore recommend that future studies consider addressing these limitations. In conclusion, since we found that external supervision provided no meaningful reduction in absenteeism, the government should explore new approaches. An uncompromised system of external supervision that is unannounced and frequent offers potential benefits. We understand that supervision is one of the most challenging ways of tackling absenteeism because of the economic and time costs for supervisors and their agencies, but there are things that can be done. First, community groups could be involved in monitoring the activities of the health workers while trying to understand where such workers need help in the course of their jobs.
They might become more involved in the care of the workers, feeding back ideas on how to make the work environment more attractive for workers. We identified a need to empower community-based supervisors, and other groups of external supervisors from reputable agencies and organizations to enable them to impose meaningful sanctions against healthcare staff who are absent without reason. The government could also encourage a peer support structure where a supervisor could meet regularly with groups of community health extension workers to find ways in which they can offer mutual support. DATA AVAILABILITY STATEMENT Data sets will be available to readers on request. Requests to access these datasets should be directed to Divine Ndubuisi Obodoechi divine.obodoechi@unn.edu.ng. AUTHOR CONTRIBUTIONS OO, DO, CN, PA, CO, and AO contributed to conception and design of the study. BA, DO, OO, and CN organized the database. DO and CN performed the statistical analysis. DO wrote the first draft of the manuscript. OO, PA, CO, AO, and CN wrote sections of the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.
2021-10-11T13:20:26.106Z
2021-10-11T00:00:00.000
{ "year": 2021, "sha1": "38a1acd63b493bd6d941c97bcf3c1295cbade148", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpubh.2021.752932/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "38a1acd63b493bd6d941c97bcf3c1295cbade148", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
263787553
pes2o/s2orc
v3-fos-license
Reply to "Comment on 'Why interference phenomena do not capture the essence of quantum theory'" Our article [arXiv:2111.13727 (2021)] argues that the phenomenology of interference that is traditionally regarded as problematic does not, in fact, capture the essence of quantum theory -- contrary to the claims of Feynman and many others. It does so by demonstrating the existence of a physical theory, which we term the "toy field theory", that reproduces this phenomenology but which does not sacrifice the classical worldview. In their Comment [arXiv:2204.01768 (2022)], Hance and Hossenfelder dispute our claim. Correcting mistaken claims found therein and responding to their criticisms provides us with an opportunity to further clarify some of the ideas in our article. Hance and Hossenfelder state: In Spekkens' original toy model paper [6], an "ontic state" is defined as "a state of reality" whereas an "epistemic state" is a "state of knowledge". Both of these definitions are useless, the first because one does not know what "reality" means, the second because one does not know what "knowledge" means [...] At some level, no one can reasonably claim that they fail to understand the distinction between reality and our knowledge thereof. We all understand the difference between the proposition "the back door is locked" and the proposition "I know that the back door is locked". Hance and Hossenfelder are presumably not suggesting that they fail to comprehend the distinction in this commonsense form. Rather, they are presumably suggesting that the distinction introduced in Ref. [3] is deficient by virtue of not having been adequately formalized. Such a criticism might have been apt if all Ref. [3] had to offer in the way of trying to clarify the distinction between ontic states and epistemic states was an explanation of why it saw fit to use this terminology, namely, that the former term derives from the Greek ontos, meaning "to be", and the latter from the Greek episteme, meaning "knowledge". As it turns out, however, the discussion of this distinction in Ref. [3] did not, in fact, end with an explanation of the etymology. Indeed, immediately after introducing this terminology, it is stated that "To understand the content of the distinction, it is useful to study how it arises in the uncontroversial context of classical physics," followed by this elucidation of the concept: The first notion of state that students typically encounter in their study of classical physics is the one associated with a point in phase space. This state provides a complete specification of all the properties of the system-in particle mechanics, such a state is sometimes called a "Newtonian state". It is an ontic state. On the other hand, when a student learns classical statistical mechanics, a new kind of state is introduced, corresponding to a probability distribution over the phase space-sometimes called a "Liouville state". This is an epistemic state. The critical difference between a point in phase space and a probability distribution over phase space is not that the latter is a function. An electromagnetic field configuration is a function over three-dimensional space, but is nonetheless an ontic state.
What is critical about a probability distribution is that the relative height of the function at two different points is not a property of the system-unlike the relative height of an electromagnetic field at two points in space. Rather, this relative height represents the relative likelihood that some agent assigns to the two ontic states associated with those points of the phase space. The distribution describes only what this agent knows about the system. In other words, the distinction between ontic states and epistemic states is not novel to Ref. [3]. It is already present in physics whenever a system being investigated is such that the investigator may have incomplete knowledge of its physical state. Classical statistical mechanics is the field of physics wherein it first became critical to develop a mathematical formalism for describing such incomplete knowledge. Thus, for instance, a microcanonical ensemble for a gas represents a state of incomplete knowledge, appropriate for any agent who knows only certain macroscopic properties of the gas. Moreover, Ref. [3] goes on to formalize the distinction in terms of a discrete ontic state space and the space of probability distributions thereon. The same was done for continuous ontic state spaces in Ref. [4], where epistemic states are now probability densities over the ontic state space. More generally, the ontic states of a system are the elements of the set that defines the kinematics for the system, and where functions from this set to itself define the dynamical laws. Epistemic states, on the other hand, belong to the normative theory for reasoning in the face of uncertainty, such as Bayesian probability theory. They describe the elements of the set of possible ways of knowing about the ontic state of the system according to the theory. A synthetic approach to the distinction, aiming to go beyond the classical case, has recently been presented in Ref. [5]. As Hance and Hossenfelder do not comment on any of the formal accounts of the distinction between ontic and epistemic states that we have just outlined, their criticism of the distinction seems to us to be devoid of any substance. Hance and Hossenfelder ask two further questions about the status of epistemic states: "whose knowledge?" and "why would we care about it?" Take the second question first. The importance of mathematically formalizing our uncertainty is apparent in all branches of the sciences, including physics. It arises in every situation where there is incomplete knowledge due to practical considerations, such as technological limitations. If the incomplete knowledge is due to a fundamental feature of the physical theory being considered rather than a technological limitation-as with the epistemic restriction in Ref. [3] and our toy field theory-this does not alleviate the need to formally quantify uncertainty. Rather, it makes it more acute. What about the question "whose knowledge?" Note, first of all, that one could ask Hance and Hossenfelder's question about states of incomplete knowledge in classical statistical mechanics. Ought we to take the microcanonical ensemble to be an unprofessionally vague concept because it has not been made explicit in the textbooks who it is that knows the values of certain macrovariables while remaining ignorant of the values of the microvariables? No, of course not. The microcanonical ensemble describes the knowledge of any agent that knows only the specified macrovariables.
The statistical mechanics textbooks are right not to waste space answering the question "whose knowledge?" when they ask us to imagine that only certain macroscopic variables are known. More generally, there is, in fact, a long tradition of expressing physical laws in a pragmatic way, that is, in terms of in-principle restrictions on what any agent living in a universe following those laws might be able to do or to know. (See, for instance, the introduction of Ref. [6].) Consider Kelvin's formulation of the second law of thermodynamics. "It is impossible to devise a cyclically operating device, the sole effect of which is to absorb energy in the form of heat from a single thermal reservoir and to deliver an equivalent amount of work." [7] We might call this principle a "pragmatic restriction", to parallel the notion of an epistemic restriction. Suppose someone asks: "Who does this pragmatic restriction apply to?" or simply "Who is the 'deviser'?" Is this a question that needs to be answered to understand Kelvin's formulation? No. It is understood that in Kelvin's formulation, the answer to the "who" question is any agent at all, using any physically realizable technology whatsoever. It is not a parochial kind of restriction, specific to some moment in technological history or some particular engineer. It is an in-principle kind of restriction. It is the same with the epistemic restriction in physical theories that posit one. Hance and Hossenfelder claim that the epistemic restriction used in Ref. [1] is different from the one used in Ref. [3], on the grounds that the one stated in Ref. [3] is such that the update rule that it implies does not act locally. This is also mistaken. There is no contrast between the nature of the epistemic restriction (and consequently the update rules for epistemic states) in Ref. [1] and Ref. [3]. In particular, both are explicitly local. The fact that the latter satisfies locality is emphasized throughout Ref. [3], in particular, as the reason we know that the toy theory cannot violate Bell inequalities. It is unclear, therefore, how Hance and Hossenfelder came to this mistaken impression. We turn to considering the following two claims of Hance and Hossenfelder: "After all, we use quantum mechanics to predict frequencies of occurrence and not Peter Pan's knowledge about these frequencies." and "Indeed, one may wonder, why talk about knowledge at all? What we need to predict measurement outcomes is a prescription for the distribution of an ensemble of ontic states [5]. The claim of Catani et al that they can correctly reproduce observations only make sense if the "epistemic restriction" is a change to the underlying distribution of ontic states." In these quotations, Hance and Hossenfelder are attacking the idea that probabilities ought to be defined as credences (i.e., an agent's degrees of belief) and seem to instead endorse the notion that they ought to be defined as relative frequencies. The literature on the philosophy of probability provides many arguments against this type of frequentist interpretation of probability. Indeed, it is a rare instance of something about which there seems to be agreement among those writing on the philosophy of probability. Myrvold, in his recent book on the philosophy of probability [8], goes so far as to call this view the "dead horse" of the philosophy of probability. In our view, the key argument against interpreting probabilities as relative frequencies is the following one.
Relative frequencies connect with the probability distribution assigned to a single run through the law of large numbers. But what this law states is that, in the limit of infinitely large ensembles, these relative frequencies are likely to converge to the probabilities in the probability distributions, in the sense that this will occur in a set of measure one of possible sequences. For a Bayesian, the notion of "likelihood" in the law of large numbers (more formally, the notion of a measure) is an appeal to probability that, like all appeals to probability, ought to be interpreted as a credence. But let us consider the frequentist alternative, that this probability also is to be interpreted as a relative frequency. This means that one must interpret the law of large numbers as stating that if one forms an infinite ensemble of copies of the original infinite ensemble, the relative frequency with which the convergence occurs in this new ensemble goes to 1. But it is not the case that the convergence must occur in every element of the new ensemble. So, strictly speaking, all one can claim is that in any given element of this new ensemble, it is "likely" that the convergence occurs. But now one is faced with the problem of how to interpret this notion of likelihood. One can define a third type of ensemble of copies of the second type of ensemble, but then another notion of likelihood appears at that level which needs to be defined. No matter how far one goes in this sequence, there always remains a concept of probability that remains undefined. In short, attempts to define probabilities as relative frequencies lead to an infinite regress. In an appendix that we have added to our article (Appendix C.1), we discuss at length the question of what the toy field theory has to say about relative frequencies. We point out that the connection to relative frequencies comes when considering repetitions of an experiment with an i.i.d. source. If p is the probability distribution assigned to a system by an agent (i.e., representing the agent's credences about the system), then p^⊗n is the probability distribution assigned to the n copies of the system in the n-fold repetition of the i.i.d. experiment (i.e., representing the agent's credences about the n systems of the i.i.d. source). Then, the law of large numbers stipulates that in the limit of arbitrarily many repetitions of the experiment, the relative frequencies judged to be most likely are those that converge to the distribution p. That said, one can provide an account of our toy field theory in a language that is more congenial to those who are inclined to a frequentist interpretation of probability. It suffices to express facts about an infinite ensemble of repetitions of an experiment in terms of the relative frequencies that are likely to occur in this ensemble (without defining the probabilities in terms of these relative frequencies). This provides an alternative to the description we provided in the main text of our article (which was in terms of an agent's state of knowledge about a particular element of this ensemble). We make this translation in another appendix that we have added to our article (Appendix C.2).
In this new description, the transformation to the physical state induced by a beamsplitter or phase shifter describes a deterministic change in the make-up of the ensemble of physical states at a particular point in the interferometer (rather than a deterministic change in an agent's state of knowledge about a particular element of the ensemble). Similarly, in this new description, conditioning on the outcome of a measurement is no longer modelled as Bayesian updating, but as updating the ensemble that is relevant for making predictions. We describe it as follows in our Appendix C.2: This updating can be understood as consisting of two steps. First, one selects from the pre-existing ensemble the subensemble that is consistent with the outcome that was observed. Second, the fact that a measurement leads to a random disturbance-that is, one of several different transformations to the physical state-implies that elements of the ensemble selected in the first step get split into distinct elements (bifurcated in the case of interest here), thereby leading to an increase in the number of distinct subensembles. For the case of a measurement on one mode, the ensemble of possibilities for the physical state of the other mode may be updated because of a pre-existing correlation between the physical states of the two modes. Consequently, such updating does not involve any nonlocal influence. Hance and Hossenfelder also state: "To further muddy the waters, by virtue of the 'epistemic restriction' of Spekkens' original toy model, no observer can 'know' what the 'ontic' state is, which makes it rather unclear what it might mean for it to be 'real' in the first place." If one grants that there is a meaningful distinction between ontic states and epistemic states, then it is perfectly straightforward to imagine a scenario wherein agents can have knowledge of any of a number of aspects of the full reality, while not being able to know all of these aspects at once. Plato's allegory of the cave is a useful metaphor for this aspect of epistemically restricted theories [11]. One can imagine that the objects that are casting the shadows have a three-dimensional shape and the prisoners in the cave, by virtue of only seeing the shadows, can only come to know various two-dimensional projections of these three-dimensional shapes. Indeed, we might imagine that a given shape can be oriented in an arbitrary way relative to the light source at the mouth of the cave, such that the prisoners can come to learn any two-dimensional projection, while still never being able to access more than a single two-dimensional projection at a time. This limitation of the prisoners, to only ever acquiring partial information about the shape at a given time, does not in any way undermine the notion that there is, in fact, a three-dimensional shape of which the shadow they see is a two-dimensional projection. The only way we see for someone to think that it does undermine this notion is if they subscribe to the verificationist principle of the logical positivists. This is the idea that a proposition is only meaningful if it is possible to conduct an experiment that verifies it. For the prisoners in Plato's cave, there is no experimental procedure by which they can come to learn all of the two-dimensional projections of an object at once, that is, its three-dimensional shape. It follows that endorsement of the verificationist principle stipulates that it is not meaningful to talk about its three-dimensional shape.
Similarly, given that in an epistemically restricted theory there is no way to simultaneously measure all the ontic variables (i.e., every variable in a set that is sufficient to determine the ontic state), endorsement of the verificationist principle implies endorsement of the notion that propositions about the values of such a set of ontic variables are not jointly meaningful. This sort of criticism of epistemically restricted theories is reminiscent of Bohr's position in the Bohr-Einstein debate. In Ref. [4], it was argued that Bohr's account of experiments measuring alternatively position or momentum of a particle, and in particular his account of the Einstein-Podolsky-Rosen (EPR) experiment, harmonizes quite well with the account given by the theory described therein-termed epistemically restricted Liouville mechanics (abbreviated as ERL mechanics)-where a particle has both a position and a momentum, but these can never be known simultaneously. It was furthermore argued in Ref. [4] that the reason Bohr ultimately rejects this account is that he endorsed a version of the verificationist principle. The argument was as follows: [...] ERL mechanics can reproduce the correlations in the original EPR thought experiment and indeed delivers the sort of interpretation of the correlations that EPR favoured, namely, one wherein position and momentum are jointly well-defined but not jointly known. Even though Bohr sought to dispute this sort of interpretation in his reply, his description of the thought experiment makes explicit reference to the positions and momenta of the systems: "In fact, even if we knew the position of the diaphragm relative to the space frame before the first measurement of its momentum, and even though its position after the last measurement can be accurately fixed, we lose, on account of the uncontrollable displacement of the diaphragm during each collision process with the test bodies, the knowledge of its position when the particle passed through the slit." Indeed, his argument for the consistency of the uncertainty principle makes no reference to the quantum formalism at all. It reads better as an argument for the consistency of the uncertainty principle within ERL mechanics. Nonetheless, Bohr denies the interpretation suggested by ERL mechanics: "we have in each experimental arrangement suited for the study of proper quantum phenomena not merely to do with an ignorance of the value of certain physical quantities, but with the impossibility of defining these quantities in an unambiguous way." The only way we see to reconcile this tension in Bohr's reply is that Bohr believed that two quantities can be jointly well-defined only if they can be jointly measured. In essence, Bohr was a radical positivist. Otherwise, why from the impossibility of two quantities being jointly measured would he infer the impossibility of their being jointly well-defined, as opposed to merely inferring the impossibility of their being jointly known? More generally, there have been many persuasive arguments put forward against the verificationist principle and the positivist movement in the philosophy of science more generally. For those not familiar with these arguments, we recommend Quine's classic article "Two dogmas of empiricism" [12].

II. CLASSICALITY

In Sec. V.A.3 of our article, when considering what might be genuinely nonclassical about interference phenomenology, we describe our own preferred notion of classicality. Hance and Hossenfelder's comment includes some criticisms of this notion.
II. CLASSICALITY

In Sec. V.A.3 of our article, when considering what might be genuinely nonclassical about interference phenomenology, we describe our own preferred notion of classicality. Hance and Hossenfelder's comment includes some criticisms of this notion. It should be noted, however, that the thesis of our article was not predicated on our readers espousing the notion of classicality that we favour. As such, these criticisms are not relevant to the question of primary interest in our article. Nonetheless, we take this opportunity to respond to them for the sake of clarifying what our preferred notion of classicality implies.

Hance and Hossenfelder correctly summarize our preferred notion of classicality, which includes a notion of classicality for the theory of inference, namely, that it is done using Bayesian probability theory and Boolean propositional logic. Nonetheless, they suggest that according to our preferred notion, what is usually termed "classical statistical mechanics" would come out as nonclassical. This is incorrect. What they seem to have missed is that the contrast class we had in mind in our notion, the thing that we would call "nonclassical", is a theory wherein the way inferences are done is at odds with Bayesian probability theory and Boolean propositional logic. The use of probability in statistical mechanics is not at odds with either Bayesian probability theory or Boolean propositional logic. Consequently, there is no sense in which statistical mechanics would come out as nonclassical according to our preferred notion of classicality. Indeed, Hance and Hossenfelder do not suggest that classical statistical mechanics uses some exotic alternative to Bayesian probability theory, but only that "Bayesian probability theory isn't used much" in classical statistical mechanics. Although we could dispute this assessment of the prevalence of Bayesian inference in classical statistical mechanics (see, e.g., Ref. [13]), it is beside the point. Even if Bayesian probability theory were not used at all in some physical theory, this would not imply that the theory would be judged to be nonclassical relative to our notion. A physical theory needs to commit itself to some concrete alternative to Bayesian probability theory or Boolean propositional logic in order for it to be judged as having a nonclassical theory of inference by the lights of our preferred notion. Lack of use is not the same as use of an alternative.

It is worth adding that we do not consider modifications to the interpretation of probabilities (without any difference to the formal apparatus for making predictions) to be examples of a concrete alternative to the classical theory of inference. Such an alternative must deviate from Bayesian probability theory and Boolean propositional logic in more than a cosmetic manner. As such, a mere preference for understanding the predictions of classical statistical mechanics in terms of a frequentist interpretation of probabilities rather than a Bayesian one is not sufficient for claiming that the theory of inference used therein is nonclassical. Similarly, a modification to the scope of some theory of inference, such as moving from Boolean propositional logic to standard predicate logic, also does not constitute an example of a concrete alternative to Boolean propositional logic, since the propositional segment of predicate logic is still Boolean. Examples of the sorts of modifications of logic that we would consider to be at odds with Boolean propositional logic are quantum logics [14]. Similarly, an example of a modification of probability theory that we would consider to be at odds with Bayesian probability theory is the sort of theory of inference defined in Refs. [15-17] using conditional density operators.
Hance and Hossenfelder also seek to criticize the Leibnizian methodological principle that is part of our preferred notion of classicality. They state:

What is empirically 'indiscernible' depends on what measurements one has made or can make. Distances below, say, a thousandth of a femtometer aren't currently 'empirically discernible'. We treat them as ontologically different in General Relativity, hence, it seems that according to the authors' position, General Relativity is not a classical theory.

Here, Hance and Hossenfelder are simply mistaken about the content of the Leibnizian methodological principle. The definition from Ref. [18], which was repeated in Ref. [1], states that the notion of empirical discernibility at issue is indistinguishability in principle rather than in practice, where what is possible in principle is determined by the physical theory that one is assessing. The point is emphasized in Ref. [18]:

[...] the Leibnizian methodological principle does not appeal to a parochial kind of empirical indiscernibility, judged relative to the particular in-born capabilities of humans or their particular technological capabilities at a given historical moment, but rather to the in-principle variety of empirical indiscernibility. This variety of indiscernibility must be understood as indiscernibility for any system that might be considered an agent within the universe. This is because, as Deutsch has argued persuasively, the only in-principle limits to human capabilities are the limits imposed by physics [*], and therefore the only limits on our capabilities are the limits on the capabilities of any system embedded in the universe and subject to its physical laws.

[*] His argument proceeds by noting that an "in-principle human capability" includes what could be achieved in a distant future with the aid of arbitrarily sophisticated technology.

Thus, the fact that certain distances that are not distinguishable by today's technology are nonetheless treated as ontologically distinct in General Relativity is not a failure of the Leibnizian methodological principle. Only if General Relativity stipulated that such distances were in principle empirically indistinguishable would one conclude that General Relativity contradicted the principle. In fact, the Leibnizian methodological principle is built into General Relativity at a deep level, since it is one of the main principles that guided Einstein in his development of the theory, as is argued in Ref. [18].

Although our article did not seek to persuade readers to endorse our preferred notion of classicality, it did seek to insist on a methodological point, namely, that if someone wants to claim that some particular operational phenomenology of interference does capture the essence of quantum theory, then they ought to back up their view with a no-go result. That is, they should articulate a formal notion of classicality within some framework for physical theories and then prove a theorem demonstrating that their notion is inconsistent with the phenomenology in question. It is important to stipulate a formal notion of classicality in such a no-go result because we do not, in fact, all agree about what the correct notion of classicality is. Indeed, there are almost as many ideas about this as there are researchers who work in the foundations of quantum theory. By abiding by the proposed methodology, one can focus the discussion on where the true disagreements lie. This is discussed further in the next section.
III. ON SHIFTING THE GOAL POSTS

Hance and Hossenfelder attempt to summarize part of how our theory works as follows: "The phase of the state changes when a measurement doesn't happen, [...]" and then seek to critique it based on this characterization: "[...] it is unclear how the absence of a measurement can locally change a state." The summary is incorrect, however. In our theory, any mode for which no measurement is performed has its phase left invariant. Only if a measurement is actually performed on a mode can the latter's phase be randomized.

Hance and Hossenfelder also state: "The authors seem to assume that a measurement in which no interaction happens still somehow results in an interaction (and that this interaction is still local)." Here, they at least seem to acknowledge that the measurement update rule we describe in our article applies to the case where a measurement of occupation number is happening, as opposed to no measurement happening. However, their claim that no interaction happens as a result of this measurement is mistaken. In our theory, every mode has a phase degree of freedom in addition to its occupation number degree of freedom, and the phase of a mode is randomized in a measurement of occupation number of that mode regardless of whether the occupation number is found to be 0 or 1, i.e., regardless of whether or not the excitation happens to be found in that mode. It is likely that Hance and Hossenfelder's confusion results from thinking of our theory as one wherein the systems are particles, when in fact the systems in our theory are modes. To head off such confusions, we have clarified this distinction in two appendices we have added to our article (Appendices C.5 and C.6).

Hance and Hossenfelder correctly summarize an aspect of our theory when they note that it posits that information is sent over a path with occupation number zero. They are again mistaken, however, when they state: "This only works so long as one is forbidden from blocking one of the paths or pulling out one of the mirrors. On doing either of these things, the model either falls apart or requires nonlocal update." In two more appendices we have added to our article (Appendices C.3 and C.4), we have provided an explicit treatment of each of these cases and demonstrated that the theory has no problem treating either. It reproduces what one would expect for the analogous quantum experiment and continues to require only local causal influences to do so.

As we note in the new Appendix C.3, blocking a path amounts to implementing a destructive measurement of occupation number, one that absorbs the excitation if it is present. Consequently, one way to summarize Hance and Hossenfelder's first claim is that the toy field theory cannot account for the Mach-Zehnder interferometer experiment in the case where the measurement of the occupation number is made to be destructive rather than nondestructive. They correctly anticipate that our response to this suggestion is that it is an instance of what we called "shifting the goal posts" in our article.3

3 They suggest further that we ourselves are guilty of some kind of shifting of the goal posts in our original paper by treating the Mach-Zehnder experiment rather than the double-slit experiment. Our move is not an instance of the specific argumentative strategy we called "shifting the goal posts" in our article (which we describe below), so it is not really a tit-for-tat situation, as Hance and Hossenfelder suggest. Nonetheless, we here respond to the charge that we have done less than we needed to do to justify the thesis of our article. In the introduction of our article, we provided a summary of the arguments put forward by Feynman and others in favour of the three interpretational claims and the impossibility of a classical explanation within the context of the double-slit experiment. Then, in Sec. II.A, we show that the argument, when adapted to the context of the Mach-Zehnder experiment, is of precisely the same form. In other words, the specific aspects of the operational phenomenology that are cited in the argument are common to the double-slit and Mach-Zehnder scenarios, as is the logical form of the argument. As such, showing that the claimed implication is invalid in any particular scenario shows that the argument is not valid (in the logician's sense of the conclusion failing to follow from the premisses), and so is not to be trusted in any context in which it arises. It is for this reason that undermining the argument in the context of the Mach-Zehnder experiment is sufficient to undermine it also in the context of the double-slit experiment.
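Returning to the measurement update rule described earlier in this section: since the distinction between modes and particles is where the confusion seems to arise, that rule can be made concrete in a few lines. The following is a minimal illustrative sketch of the rule alone, as we have paraphrased it above (a two-valued phase is assumed purely for concreteness; this is not a reimplementation of the toy field theory of Ref. [1]).

```python
import random

class Mode:
    """A mode carries an occupation number (0 or 1) and a phase."""
    def __init__(self, occupation, phase):
        self.occupation = occupation
        self.phase = phase

def measure_occupation(mode, rng=random):
    """Measure the occupation number of a single mode.

    The outcome is returned, and the measured mode's phase is randomized
    regardless of whether the outcome is 0 or 1, i.e., regardless of
    whether the excitation is found in that mode.  This is a local
    operation: it touches only the mode handed to it.
    """
    outcome = mode.occupation
    mode.phase = rng.choice([0.0, 3.141592653589793])  # phase randomized
    return outcome

# Two modes of a Mach-Zehnder-style setup; the excitation sits in mode_a.
mode_a = Mode(occupation=1, phase=0.0)
mode_b = Mode(occupation=0, phase=0.0)

measure_occupation(mode_b)   # outcome 0: excitation not found here,
                             # yet mode_b's phase is still randomized.
# mode_a was not measured, so its phase is left invariant.
assert mode_a.phase == 0.0
```

The contrast with Hance and Hossenfelder's summary is then plain: nothing happens to a mode that is not measured, and the phase randomization that accompanies a measurement is local to the measured mode.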
We begin by recalling how this notion was articulated in Ref. [1]:

No doubt those researchers who are sympathetic to the view that interference captures the essence of quantum theory will be tempted to respond to the arguments of this article as follows: "Sure, you have reproduced some of the phenomenology of quantum interference, but you haven't reproduced all of it. What about all of the experiments involving beamsplitters that are not 50-50, or involving phase shifts other than Φ = 0 and Φ = π? You can't make sense of those in the toy field theory."

Does Hance and Hossenfelder's response fit the pattern described here? Yes, it does. Indeed, their response is essentially this: "Sure, you might have reproduced the phenomenology of quantum interference in the case where the measurement of occupation number on one of the modes is nondestructive, but what about the case where the measurement is destructive?" As we note in our new appendix, the thesis of our article does not rely on addressing this case. Explicit in Feynman's account (and to an even greater degree in Elitzur and Vaidman's account) is that the destruction of interference occurs even in the case where the measurement does not detect the excitation on its arm. But in this case, there is no difference in the quantum state update rule between destructive and nondestructive measurements, as the output of the measurement device is left in the quantum vacuum state in both cases. For this reason, the distinction between destructive and nondestructive measurements is not significant for discussions of the TRAP phenomenology. (Here, "TRAP" is an abbreviation of "traditionally regarded as problematic".) It follows that if one can reproduce this phenomenology for either type of measurement in a classical local model, one has undermined the claim that the phenomenology necessitates a departure from the classical worldview. Hence, an explicit consideration of destructive measurements is not required to establish our thesis. The same point can be made regarding any modification of the experimental scenario wherein a mirror is removed, since this case appears nowhere in discussions of what is typically regarded as problematic about interference in quantum theory.
It is odd that while Hance and Hossenfelder anticipated that we would identify their question as an instance of what we termed "shifting the goal posts", they did not bother to consider what we said about how best to approach such questions. This is what we wrote:

If someone wishes to claim that aspects of interference beyond the TRAP phenomenology demonstrate the impossibility of maintaining a classical worldview, then not only must they specify precisely which aspects they have in mind and how they propose to formalize the notion of classicality, they must also back up their claim with a rigorous no-go theorem, following the methodology we endorsed above. Until they do, the view that the phenomena in question resist explanation in terms of a classical worldview is mere speculation, and might only indicate a "lack of imagination", to recall Bell's phrase.

It seems to us that Hance and Hossenfelder do wish to claim that aspects of interference beyond the TRAP phenomenology demonstrate the impossibility of maintaining a classical worldview. However, they articulate neither the precise notion of classicality they have in mind nor the precise set of operational features of quantum theory to which they are appealing, and they do not prove a no-go theorem to back up their claim. As such, their view, namely, that shifting attention to destructive measurements (or cases where a mirror is removed) implies that the interference phenomena resist explanation in terms of a classical worldview, is mere speculation, and as a result it might merely indicate a lack of imagination on their part for what such a classical explanation might be. Indeed, this is precisely what we show to be the case in the appendices we have added to our article (Appendices C.3 and C.4).

In the conclusions of Ref. [1], we noted that Feynman could have avoided making the mistaken claim that the phenomenology of interference resisted explanation in terms of a classical worldview if he had tried (and necessarily failed) to back up his belief with a no-go theorem. The same can be said of Hance and Hossenfelder's mistaken claim regarding the phenomenology of interference with destructive measurements rather than nondestructive measurements, or with a mirror removed. In this sense, their claim provides an illustration of why one ought to follow the methodology we proposed in our article. We therefore take this opportunity to repeat a maxim expressed in the introduction of our article: one should not be credulous of statements that a given operational phenomenology implies some interpretational claim unless the statement is backed up by a rigorous no-go theorem proving the implication (typically against the backdrop of additional assumptions). We believe that broader adherence to this maxim can raise the quality of discussions concerning the foundations of quantum theory, particularly between researchers who have diverging interpretational persuasions.
Home Sweet Home: The Integrated Plastic Surgery Residency Match during the COVID-19 Pandemic

The COVID-19 pandemic has led to inevitable changes in plastic surgery training and the residency application process. Following guidance from the American Council of Academic Plastic Surgeons, plastic surgery residency programs in the United States conducted virtual interviews and online subinternships for the 2021 match cycle.1 In previous years, one-fifth of plastic surgery residents matched into their home institution's residency programs,2 with approximately 43 percent matching at programs where they had completed an away rotation.3 Moreover, in a 2020 survey of program directors, 42 percent cited "audition elective/rotation within [one's] department" as a major factor for determining whether an applicant should be interviewed.4
Several constraints exist in the traditional recruitment process, including difficulty with organizing interviews and budgets. The integration of virtual options for applicants in the 2021 match cycle allowed students to engage in "virtual away rotations" and virtual interviews, thereby mitigating expenses and improving interview schedule coordination. However, virtual interviews have morphed a traditionally intimate experience into a digital one, and their benefits come at the price of other challenges, both for applicants and programs. Some reported and perceived drawbacks include less familiarity with the applicant and faculty, limited insight into program culture, and decreased comfort with rank-list structuring.5

Despite the necessity of the virtual recruitment transition in these unprecedented times, is it possible that programs were more likely to favor their own students and research fellows, or that applicants "up-ranked" their own institutions? We reviewed publicly available match data from Electronic Residency Application Service-participating integrated U.S. plastic surgery residencies between 2018 and 2021. Information was collected on residents' medical school affiliation for U.S. medical graduates, or most recent clinical or research affiliation for international medical graduates (total, n = 734) (Table 1). Results revealed a statistically significant (t test, p = 0.0002) increase in the proportion of home matches in 2021 as compared to the aggregated three previous matches (2018 through 2020) (Fig. 1). Specifically, our data show that applicants in the 2021 match were 2.24 times more likely to match at their home institutions than in previous years (CI, 1.32 to 3.8; p = 0.0027).

Such results are significant for several reasons. First, they demonstrate that when confined to a virtual approach, institutions successfully implemented virtual interview processes, albeit with some perceived drawbacks. Second, they corroborate recent literature reporting that applicants were less comfortable ranking unfamiliar programs after virtual interviews.4 Applicants, as a result, were perhaps more likely to hedge their bets by "up-ranking" their home institution, with which they were presumably more familiar. If correct, understanding what this means for applicants at schools without plastic surgery residencies becomes even more important.
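As an aside on the statistics: the letter does not publish the underlying two-by-two counts, and it does not state whether the 2.24 is an odds ratio or a risk ratio. The sketch below, using invented counts purely for illustration, shows the kind of calculation that produces an effect estimate with a 95% confidence interval of this form (here, an odds ratio with a Wald interval on the log-odds scale); it will not reproduce the published values.

```python
import math

# Hypothetical 2x2 table (invented for illustration; not the study data):
#                       home match   non-home match
home_2021, other_2021 = 40, 120          # 2021 cycle
home_prev, other_prev = 70, 480          # 2018-2020 aggregated

# Odds ratio and 95% Wald confidence interval on the log-odds scale.
odds_ratio = (home_2021 * other_prev) / (other_2021 * home_prev)
se_log_or = math.sqrt(1/home_2021 + 1/other_2021 + 1/home_prev + 1/other_prev)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```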
Furthermore, with the possibility of future hybrid interview models, it will be important to determine whether in-person interviews, if offered, play an advantageous role over virtual ones. We hope upcoming studies with qualitative survey elements can address these questions, including how 2021 applicants and home programs ranked each other, and delineate the motives for selecting programs in an atypical cycle. Overall, the landscape of residency applications was transformed in 2021. The particulars of how best to conduct virtual recruitment and interviews require thorough investigation.

Not only do the speaker(s) provide knowledge, but the moderators and panelists are also experts in the field, so discussions are exciting. Last but not least, the webinars are recorded, with the permission of the speakers, and the videos are uploaded to the International Microsurgery website (http://www.imw.global) for registered members to review without time limitations. As webinars spring up, copyright and patient privacy remain concerns when using online platforms. Copyright issues are more difficult to regulate owing to rapid technological progress, while patient privacy is easier to handle by obtaining a signed informed consent form for publication. In conclusion, the COVID-19 crisis revolutionized and accelerated knowledge transmission using an economical and efficient system, with the great cooperation of global microsurgeons. As noted in the slogan of the International Microsurgery Club webinar series, "your freeway to priceless knowledge," we believe that webinars will continue to play an important role in microsurgery education worldwide after the COVID-19 pandemic era ends.

Wellness: Building a Meaningful and Effective Program

Wellness. A word at the forefront of so many efforts and one that quickly loses meaning without concrete goals and tangible outcomes. James Baldwin wrote, "Not everything that is faced can be changed, but nothing can be changed until it is faced." This quote motivated us to face the changes needed to demonstrate our commitment to resident well-being. Our wellness leader is a patient-facing social worker who was previously involved in resident wellness efforts within another department. Of critical importance is that she is in no way an evaluator of our residents. The confidentiality and objectivity afforded in wellness interactions lead to trust and honest communication, and having a resource familiar with the institution of residency significantly reduces barriers to participation. By virtue of her profession, our facilitator is familiar with resources for psychosocial support and is able to facilitate referrals, meeting our goals for continuity and follow-through. To make this program a reality, we needed buy-in on multiple levels, from division support for funding to program director and faculty willingness to participate in change efforts. Most importantly, the residents themselves need to believe that wellness efforts are more than a gesture and to partner in its success. We set core goals, a primary one of which is to offer wellness intervention during protected time. We elected to include this program in our weekly teaching sessions to ensure resident availability and transparency regarding required participation and, most importantly, to avoid creating another demand on resident time. Including wellness in the core curriculum also contributes to normalization and an overall cultural shift toward prioritizing this component of the residency experience. We knew that no one intervention would resonate with each individual resident and were prepared for variability in engagement, and even the occasional eye roll, as we began a wider discussion and trialed various approaches. The program is designed to include small group sessions, whole-residency meetings, and 1:1 intervention. No faculty, including our program director, are present for sessions, unless they are invited.
Prehistoric Pathoecology as Represented by Parasites of a Mummy from the Peruaçu Valley, Brazil

Paleopathologists have begun exploring the pathoecology of parasitic diseases in relation to diet and environment. We summarize the parasitological findings from a mummy found at the site of Lapa do Boquete, a Brazilian cave in the state of Minas Gerais. These findings, in the context of the archaeology of the site, provide insights into the pathoecology of disease transmission in cave and rockshelter environments. We present a description of the site, followed by the evidence of hookworm, intestinal fluke, and Trypanosoma infection with resulting Chagas disease in the mummy discovered in the cave. These findings are used to reconstruct the transmission ecology of the site.

INTRODUCTION

Pathoecology is the study of past behavioral and environmental determinants of infection [1-6]. Examples of behaviors include crowding, sanitation, hygiene, and trade. Examples of environmental determinants include the presence of pathogens, infection reservoirs, and intermediate hosts, as well as climate. These features of prehistoric life were affected by environmental factors, such as climate and soil conditions. Pathoecology began to emerge in the Southwest USA with the establishment of a link between the emergence of parasitic infections and Ancestral Pueblo cultural development [7]. This pathoecology approach was based on coprolite and mummy studies. Such studies resulted in the recovery of specific infectious organisms, each with its own life cycle requirements. These life cycle requirements shed light on the specific aspects of behavior and environment that existed at sites in prehistory.

Analyses of Andean mummies and coprolites contributed to the emergence of pathoecology as a concept applied on a population scale. Martinson and her colleagues [1] codified the concept of 'pathoecology' to explain patterns of parasitic infections in archaeological sites in the Moquegua Valley of southern Peru. They analyzed mummies and coprolites and developed pathology profiles of 4 contemporaneous villages associated with the same archaeological culture [1,8]. This research showed that the parasitism at several villages was defined by occupation, trade, status, presence of domestic animals, and site location relative to fresh water access. At the same sites, Reinhard and Buikstra [8] analyzed the epidemiological diversity of head lice infection and were able to relate high infestations to specific male-associated activities, such as elaborate hair styles and use of headwear. Additional research in Chile showed that aggregation imposed by the Inca on local populations elevated crowd infections [9]. These Andean studies were used to develop a pathoecological study from analysis of many mummies and coprolites excavated from several archaeological sites. We present below an example of how pathoecology can be reconstructed from a single mummy exhibiting multiple infections.

Reinhard and Bryant [6] and Reinhard [2,3] developed the theory of pathoecology to include Pavlovsky's [10] concept of nidality in identifying foci of prehistoric infection. Pavlovsky combined ecological factors, including vectors, reservoir hosts, humans, and favorable external environments, into a predictive tool for infection.
Pavlovsky defined a nidus as that portion of a natural geographic landscape which contains a community consisting of a pathogen, vectors, reservoir hosts, and recipient hosts, and possessing an environment in which the pathogen can circulate. He further found that pathogens possessed nidality. Nidality is the characteristic of an infectious agent to occur in distinct nidi, that is, to be associated with particular geographic, climatic, or ecological conditions. Reinhard [2] applied Pavlovsky's concept to tracing interior and exterior nidi at Chacoan Greathouses in New Mexico. Reinhard and Bryant [6] and Reinhard [3] incorporate environmental and dietary data to explain how environmental collapse exacerbates the nidality of infectious diseases. White and her colleagues [11] applied pathoecology on a regional scale to explain patterns of scurvy and anemia in Belize using stable isotopic data combined in an environmental perspective. Most recently, Reinhard and Araújo [5] applied pathoecology to devise excavation strategies to test hypotheses regarding Trypanosoma cruzi transmission in the Lower Pecos Canyonlands of Texas and Coahuila. In that region, T. cruzi infection exists in a sylvatic cycle involving woodrats (Neotoma spp.) and triatomine insects (Triatoma spp.). The area's prehistoric hunter-gatherer subsistence strategy impacted this life cycle directly through human predation on woodrats and through the construction of baking pits which amplified woodrat/triatomine habitat. That expanded the habitat suitable for both woodrats and triatomine insects. The construction of baking features in rock shelters used by humans resulted in the close association of triatomines with humans [14]. They proposed field excavation that would test the nidi of transmission for triatomine insects and woodrat bone. Via molecular testing of recovered bones and insects, the transmission of T. cruzi would be explored. Recently, pathoecology has benefited from new diagnostic methods that expand our ability to recover evidence of parasites. This is illustrated by the combination of molecular biology, microscopy, and radiographic diagnosis of gross pathology applied to a mummy from Minas Gerais, Brazil.

ARCHAEOLOGICAL BACKGROUND

The northern region of Minas Gerais is a transitional zone between cerrado (savanna) and caatinga (desert vegetation). Forest formation is present in some areas, specifically along water courses [12]. The site of Lapa do Boquete is located in a karstic canyon, the Peruaçu Valley, in the northern region of Minas Gerais state, Brazil (Fig. 1). The Peruaçu River's origin is located on gneiss bedrock, on the bank of the Rio São Francisco. Its middle course cuts through Precambrian calcareous formations, and in the past it was almost entirely subterranean. Roof collapse exposed the river bed, forming a canyon with cliffs 50 to 100 m high, characterized by karstic forms, including sinkholes (dolines), and subterranean river sections extending 1 to 3 km in length [13-15]. Excavations at Lapa do Boquete followed natural and cultural stratigraphy. Sedimentological features like color, texture, and composition were used to define geologic strata, and each stratum was divided into levels according to cultural stratigraphy. Décapage excavation was done, with 3-dimensional plotting of artifacts, features, and excavation units [12,15]. Nine strata were found [16].
Stratum 0 dated to the historic period and was composed of 5 to 15 cm of sediments mixed with modern dry cattle manure and great quantities of archaeological material. Stratum 1 dates between 500 and 2,000 years ago. It was characterized by many hearths and abundant lithic artifacts. Strata 2 and 3 date between 2,000 and 6,000 years ago [16]. Both of these strata were heavily disturbed by the activity of prehistoric groups that occupied the cave during later periods, and were disturbed by intrusive circular pits of 20 to 40 cm in depth. These pits were dug in order to bury baskets containing vegetal material, such as maize, fruits, and nuts. The base of Stratum 3 has been dated to 5,960 ± 100 B.P. The remaining strata date to earlier Holocene occupations between 6,000 and 12,000 years ago and are characterized by lithic and faunal materials [16]. These earlier strata are not immediately relevant to the discovery of parasite infections and will not be discussed further.

Kipnis [16] described the archaeological remains. Ceramic fragments were recovered from Strata 0 and 1. The ceramics belong to the Una tradition, which is characterized by thin-walled, undecorated small vessels found throughout Central Brazil, and almost exclusively in caves or rockshelters. Some of the other material culture found in the upper levels includes a complete axe handle made of wood, beads made out of bone and wood, spatula-like tools made out of deer metapodials, and large planes used for woodworking made out of land snail (Strophocheilidae) shells. The latter two are relatively common in all levels. Stratum 1 contained 28 plant taxa [16]. These included cultivated plants, such as Gossypium sp. (cotton), Manihot sp. (manioc), Phaseolus (bean), and Zea mays (maize).

Six human burials were found at Lapa do Boquete: one in Stratum 1, another in Stratum 2, and 4 in the older strata. Chronologically, 3 of the 6 burials range from ca. 7,000 to ca. 4,500 years ago, and the other 3 range from ca. 1,200 to ca. 600 years ago [17]. One male adult and a child (10-14 years old) were buried between 600 and 1,000 years ago. The adult male was dated by the radiocarbon method to 560 ± 40 years ago [18]. This burial is the focus of the studies in this review. The body of the adult male was loosely flexed, wrapped in plant leaves, and buried with baskets (Fig. 2). The man's head was wrapped in plant fibers. Numerous artifacts were buried with him, including a dart thrower (atlatl), containers made of gourds, a ceramic pot, and numerous tools [17]. The arms, legs, abdominal skin, and some muscles were well preserved. Although healed fractures of the foot phalanges were identified, there were no skeletal signs of violence. Lumbar spondylosis was observed. Researchers observed pronounced dental wear, caries, and dental abscesses. A large fecal mass, illustrated via CT scan [18,19] (Fig. 2), suggested that the individual experienced megacolon at the time of death.

PARASITOLOGICAL DIAGNOSES

Fernandes and his colleagues [18] analyzed 2 samples of bone (rib and metacarpus) and 5 samples of soft tissue from different places of the body for T. cruzi DNA. The sample included tissue in contact with a coprolite. Researchers successfully recovered T. cruzi genotype I from the individual [18,20]. This study linked the gross pathology of Chagas disease directly with the infectious agent. This landmark study is the first to link pre-Columbian T. cruzi with Chagas disease in Brazil. It is likely that the individual had a broadly dispersed parasitic load, because T. cruzi I DNA was found in all tissue samples.
Another ground-breaking aspect of the Peruaçu study was the molecular refinement of microscopic diagnosis of trematode eggs (Fig. 3). Coprolites were present in the mummy and were analyzed by Sianto and her colleagues [19]. Eggs of a common intestinal parasite, hookworm, and an unusual intestinal parasite, Echinostoma spp., were found. Hookworm eggs were relatively rare in this sample, and only 5 were observed. Echinostoma spp. eggs were abundant and numbered 8,300 eggs per gram of coprolite. This suggested that the individual had a true infection. The morphology of the eggs could be described with certainty and was consistent with the genus Echinostoma. A species-level diagnosis could not be reached with certainty, but the eggs were most consistent with those of Echinostoma luisreyi. Leles and her colleagues [21] showed that the DNA sequence of the ancient eggs was consistent with the species Echinostoma paraensei. Molecular biology therefore led to a more refined diagnosis at the species level.

Hookworm infection has a great antiquity in Brazil, among hunter-gatherers and horticulturalists [21]. Reinhard [2] presented a case that irrigation promoted hookworm infection among people who practiced agriculture. The best soils for larval survival are sandy or loamy. The larvae require shady and wet conditions because they are killed by desiccation and heat. This is especially true for the hookworm species known from the prehistoric New World, Ancylostoma duodenale. The nidi for infection were areas contaminated with human feces. These could have included areas immediately surrounding, or even within, human habitations.

T. cruzi has a moderately complex life cycle [5,18,20,23,24]. The intermediate hosts are insects of the subfamily Triatominae. These insects feed on blood by biting vertebrates. Intracellular amastigotes destroy the intramural neurons of the autonomic nervous system in the intestine and heart. The megacolon evident in the Lapa do Boquete mummy was caused by this destruction of the large intestine neurons. Peristalsis is disrupted, and the colon fills with feces. T. cruzi also exhibits transplacental or transmammary infection. Infection can also occur by eating food contaminated with trypomastigotes, either in the form of triatomine feces in plant foods or blood from incompletely cooked meat.

For ancient T. cruzi infection in Texas cave settings, Reinhard and his colleagues [25] hypothesized that several pathoecological aspects of human, parasite, and vector interaction caused the infection in prehistoric peoples. Prehistoric people frequently deposited large amounts of vegetal material in their rockshelter habitations, as discovered in Lapa do Boquete. This created a good environment for the triatomine bugs which transmit T. cruzi. Humans worked and slept in the natural shelter provided by the cave. Thus, they were in close proximity to the bugs, which promoted infection. Eating raw meat from infected reservoir hosts, including rodents and armadillos, can cause infection [26]. Analysis by Kipnis [16] showed that armadillo meat was commonly eaten in a poorly cooked state. Armadillos are a reservoir host for T. cruzi, and the habit of eating armadillo meat may have caused some infections.
PATHOECOLOGY IN THE PERUAÇU VALLEY

The pathoecology of the Lapa do Boquete mummy can be reconstructed from his parasites. The environment was moist and warm, and the soils where the man walked and worked included areas contaminated with human feces. Echinostoma includes common intestinal flukes of aquatic birds and mammals. In this case, the species of Echinostoma that infected the mummified man is most likely E. paraensei [19,21,27-29]. Consumption of intermediate hosts, probably fish, was the source of Echinostoma infection [19]. There are no recorded cases of E. paraensei infection in humans, so this is a provocative finding.

Each of these infections has a distinct nidality. T. cruzi has the most restricted nidi. A nidus for T. cruzi is an enclosed area where the infected triatomine is associated with sleeping humans. The caves in the Peruaçu Valley, including Lapa do Boquete, were the primary nidi for infection. In the case of infection by eating contaminated meat, the nidus extended over the area overlapped by human hunters and armadillos. The nidus for hookworm infection must include soil somewhat close to fecal deposits where humans and infective larvae are active. Areas near latrines or agricultural fields contaminated with feces are common nidi for hookworm infection. The most common method to control for hookworms is to use footwear, but Behnke and colleagues [30] document problems with the wearing of sandals among modern agriculturalists in Mali. They state [30, p 352], "soil sticks to sandals, making them uncomfortable and frustrating to wear when tilling soil, and risking damage. As a result of this practice, those who often wore shoes still became infected through bare skin." Lapa do Boquete inhabitants wore sandals or were barefoot. Thus, simple footwear left them open to infection. Relevant to Lapa do Boquete pathoecology, Schad and his colleagues [31] documented another scenario for hookworm infection. They found that in West Bengal, villagers defecated in areas around the village peripheries. People used the same areas day after day for this purpose. As a result, hookworm larvae proliferated in those areas, and the brief time spent by humans in the contaminated areas was sufficient for hookworm infection.

In conclusion, the spectrum of parasites from a single individual reveals a relatively complex pathoecology for his community of horticulturalists. Risk of hookworm infection was associated with fecal contamination in moist areas without adequate foot protection. Echinostome infection resulted from the consumption of intermediate host fish or mollusks. T. cruzi infection may have cycled in sleeping areas or through ingestion of reservoir hosts.

ACKNOWLEDGMENTS

The writing of this manuscript was supported by the Brazilian National Council for Scientific and Technological Development (CNPq), the Brazilian Coordination for the Improvement of Higher Education Personnel (CAPES), and the Fulbright Commission.
Phylogenetic and morphotaxonomic revision of Ramichloridium and allied genera

The phylogeny of the genera Periconiella, Ramichloridium, Rhinocladiella and Veronaea was explored by means of partial sequences of the 28S (LSU) rRNA gene and the ITS region (ITS1, 5.8S rDNA and ITS2). Based on the LSU sequence data, ramichloridium-like species segregate into eight distinct clusters. These include the Capnodiales (Mycosphaerellaceae and Teratosphaeriaceae), the Chaetothyriales (Herpotrichiellaceae), the Pleosporales, and five ascomycete clades with uncertain affinities. The type species of Ramichloridium, R. apiculatum, together with R. musae, R. biverticillatum, R. cerophilum, R. verrucosum, R. pini, and three new species isolated from Strelitzia, Musa and forest soil, respectively, reside in the Capnodiales clade. The human-pathogenic species R. mackenziei and R. basitonum, together with R. fasciculatum and R. anceps, cluster with Rhinocladiella (type species: Rh. atrovirens, Herpotrichiellaceae, Chaetothyriales), and are allocated to this genus. Veronaea botryosa, the type species of the genus Veronaea, also resides in the Chaetothyriales clade, whereas Veronaea simplex clusters as a sister taxon to the Venturiaceae (Pleosporales), and is placed in a new genus, Veronaeopsis. Ramichloridium obovoideum clusters with Carpoligna pleurothecii (anamorph: Pleurothecium sp., Chaetosphaeriales), and a new combination is proposed in Pleurothecium. Other ramichloridium-like clades include R. subulatum and R. epichloës (incertae sedis, Sordariomycetes), for which a new genus, Radulidium, is erected. Ramichloridium schulzeri and its varieties are placed in a new genus, Myrmecridium (incertae sedis, Sordariomycetes). The genus Pseudovirgaria (incertae sedis) is introduced to accommodate ramichloridium-like isolates occurring on various species of rust fungi. A veronaea-like isolate from Bertia moriformis with phylogenetic affinity to the Annulatascaceae (Sordariomycetidae) is placed in a new genus, Rhodoveronaea. Besides Ramichloridium, Periconiella is also polyphyletic. Thysanorea is introduced to accommodate Periconiella papuana (Herpotrichiellaceae), which is unrelated to the type species, P. velutina (Mycosphaerellaceae).

To date, 26 species have been named in Ramichloridium; they differ not only in morphology, but also in life style. Ramichloridium mackenziei C.K. Campb. & Al-Hedaithy is a serious human pathogen, causing cerebral phaeohyphomycosis (Al-Hedaithy et al. 1988, Campbell & Al-Hedaithy 1993), whereas R. musae causes tropical speckle disease on members of the Musaceae (Stahel 1937, Jones 2000). Another plant-pathogenic species, R. pini de Hoog & Rahman, causes a needle disease on Pinus contorta (de Hoog et al. 1983). Other clinically relevant species of Ramichloridium are R. basitonum de Hoog and occasionally R. schulzeri (Sacc.) de Hoog, while the remaining species tend to be common soil saprobes. No teleomorph has thus far been linked to species of Ramichloridium. The main question that remains is whether shared morphology among the species in this genus reflects common ancestry (Seifert 1993, Untereiner & Naveau 1999). To delineate anamorphic genera adequately, morphology and conidial ontogeny alone are no longer satisfactory (Crous et al. 2006a, b), and DNA data provide additional characters to help delineate species and genera (Taylor et al. 2000, Mostert et al. 2006, Zipfel et al. 2006).
The aim of the present study was to integrate morphological and cultural features with DNA sequence data to resolve the species concepts and generic limits of the taxa currently placed in Periconiella, Ramichloridium, Rhinocladiella and Veronaea, and to resolve the status of several new cultures that were isolated during the course of this study.

Isolates

Species names, substrates, geographical origins and GenBank accession numbers of the isolates included in this study are listed in Table 1. Fungal isolates are maintained in the culture collection of the Centraalbureau voor Schimmelcultures (CBS) in Utrecht, the Netherlands.

DNA extraction, amplification and sequence analysis

Genomic DNA was extracted from colonies grown on 2 % malt extract agar (MEA, Difco) (Gams et al. 2007) using the FastDNA kit (BIO101, Carlsbad, CA, U.S.A.). The primers ITS1 and ITS4 (White et al. 1990) were used to amplify the internal transcribed spacer region (ITS) of the nuclear ribosomal RNA operon, including the 3' end of the 18S rRNA gene, the first internal transcribed spacer region (ITS1), the 5.8S rRNA gene, the second internal transcribed spacer region (ITS2) and the 5' end of the 28S rRNA gene. Part of the large subunit 28S rRNA (LSU) gene was amplified with primers LR0R (Rehner & Samuels 1994) and LR5 (Vilgalys & Hester 1990). The ITS region was sequenced only for those isolates for which these data were not available. The ITS analyses confirmed the proposed classification based on LSU analysis for each major clade and are not presented here in detail, but the sequences are deposited in GenBank where applicable. The PCR reaction was performed in a mixture with 0.5 units Taq polymerase (Bioline, London, U.K.), 1× PCR buffer, 0.5 mM MgCl2, 0.2 mM of each dNTP, 5 pmol of each primer, and approximately 10-15 ng of fungal genomic DNA, with the total volume adjusted to 25 µL with sterile water. Reactions were performed on a GeneAmp PCR System 9700 (Applied Biosystems, Foster City, CA) with cycling conditions consisting of 5 min at 96 °C for primary denaturation, followed by 36 cycles at 96 °C (30 s), 52 °C (30 s), and 72 °C (60 s), with a final 7 min extension step at 72 °C to complete the reaction. The amplicons were sequenced using BigDye Terminator v. 3.1 (Applied Biosystems, Foster City, CA) or DYEnamic ET Terminator (Amersham Biosciences, Freiburg, Germany) Cycle Sequencing Kits and analysed on an ABI Prism 3700 (Applied Biosystems, Foster City, CA) under conditions recommended by the manufacturer. Newly generated sequences were subjected to a Blast search of the NCBI databases; sequences with high similarity were downloaded from GenBank, and comparisons were made based on the alignment of the obtained sequences. Sequences from GenBank were also selected for similar taxa. The LSU tree was rooted using sequences of Athelia epiphylla Pers. and Paullicorticium ansatum Liberta as outgroups. Phylogenetic analysis was performed with PAUP (Phylogenetic Analysis Using Parsimony) v. 4.0b10 (Swofford 2003), using the neighbour-joining algorithm with the uncorrected ("p"), Kimura 2-parameter and HKY85 substitution models. Alignment gaps longer than 10 bases were coded as single events for the phylogenetic analyses; the remaining gaps were treated as missing data. Any ties were broken randomly when encountered.
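For convenience, the cycling conditions stated above can be encoded as data; the following sketch (our own representation, not part of the original protocol, with ramp times between temperatures ignored) computes the nominal duration of the stated thermal profile.

```python
# The thermal profile as stated in the text (temperature in deg C, time in s).
DENATURATION = [(96, 300)]                  # 5 min primary denaturation
CYCLE = [(96, 30), (52, 30), (72, 60)]      # repeated 36 times
N_CYCLES = 36
EXTENSION = [(72, 420)]                     # final 7 min extension

def nominal_minutes(profile):
    """Sum the hold times of a profile segment, ignoring ramp times."""
    return sum(seconds for _temp, seconds in profile) / 60

total = (nominal_minutes(DENATURATION)
         + N_CYCLES * nominal_minutes(CYCLE)
         + nominal_minutes(EXTENSION))
print(f"nominal run time: {total:.0f} min")   # 5 + 36*2 + 7 = 84 min
```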
Phylogenetic relationships were also inferred with the parsimony algorithm using the heuristic search option with simple taxon additions and tree bisection and reconnection (TBR) as the branch-swapping algorithm; alignment gaps were treated as a fifth character state, and all characters were unordered and of equal weight. Branches of zero length were collapsed and all multiple, equally parsimonious trees were saved; only the first 5,000 equally most parsimonious trees were retained. Other measures calculated included tree length, consistency index, retention index and rescaled consistency index (TL, CI, RI and RC, respectively). The robustness of the obtained trees was evaluated by 1,000 bootstrap replications. Bayesian analysis was performed following the methods of Crous et al. (2006c). The best nucleotide substitution model was determined using MrModeltest v. 2.2 (Nylander 2004). MrBayes v. 3.1.2 (Ronquist & Huelsenbeck 2003) was used to perform phylogenetic analyses, using a general time-reversible (GTR) substitution model with inverse gamma rates, Dirichlet base frequencies and the temp value set to 0.5. New sequences were lodged with NCBI's GenBank (Table 1) and the alignment and trees with TreeBASE (www.treebase.org).

Morphology

Cultural growth rates and morphology were recorded from colonies grown on MEA for 2 wk at 24 °C in the dark, and colony colours were determined by reference to the colour charts of Rayner (1970). Microscopic observations were made from colonies cultivated on MEA and OA (oatmeal agar, Gams et al. 2007), using a slide culture technique. Slide cultures were set up in Petri dishes containing 2 mL of sterile water, into which a U-shaped glass rod was placed, extending above the water surface. A block of freshly growing fungal colony, approx. 1 cm square, was placed onto a sterile microscope slide, covered with a somewhat larger, sterile glass cover slip, and incubated in the moist chamber. Fungal sporulation was monitored over time, and when optimal, images were captured by means of a Nikon camera system (Digital Sight DS-5M, Nikon Corporation, Japan). Structures were mounted in lactic acid, and 30 measurements (×1,000 magnification) were determined wherever possible, with the extremes of spore measurements given in parentheses.

Phylogeny

The manually adjusted alignment of the 28S rDNA data contained 137 sequences (including the two outgroups) and 995 characters including alignment gaps. Of the 748 characters used in the phylogenetic analysis, 373 were parsimony-informative, 61 were variable and parsimony-uninformative, and 314 were constant. Neighbour-joining analysis using the three substitution models on the LSU alignment yielded trees with similar topology and bootstrap values. Parsimony analysis of the alignment yielded 5,000 equally most parsimonious trees, one of which is shown in Fig. 1.

Taxonomy

The species previously described in Ramichloridium share some morphological features, including erect, pigmented, more or less differentiated conidiophores, sympodially proliferating conidiogenous cells and predominantly aseptate conidia. Other than conidial morphology, features of the conidiogenous apparatus that appear to be more phylogenetically informative include pigmentation of vegetative hyphae, conidiophores and conidia, denticle density on the rachis, and structure of the scars. By integrating these data with the molecular data set, more natural genera are delineated, which are discussed below.
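As a sketch of how the Bayesian settings stated above translate into a MrBayes command block, the snippet below writes one out. The lset/prset lines follow standard MrBayes syntax for a GTR model with inverse-gamma rates and Dirichlet base frequencies; the ngen and samplefreq values are placeholders of our own, since the run length is not stated in the text.

```python
# Assemble a MrBayes block reflecting the stated settings: GTR (nst=6)
# with inverse gamma rates, Dirichlet base frequencies, and the chain
# heating temperature set to 0.5.  ngen and samplefreq are placeholders.
mrbayes_block = """
begin mrbayes;
    lset nst=6 rates=invgamma;
    prset statefreqpr=dirichlet(1,1,1,1);
    mcmc ngen=1000000 samplefreq=100 temp=0.5;
end;
"""

# Append the block to a NEXUS file holding the aligned LSU sequences
# (the file name here is hypothetical).
with open("lsu_alignment.nex", "a") as handle:
    handle.write(mrbayes_block)
```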
Cultural characteristics: Colonies on MEA reaching 35 mm diam after 14 d at 24 °C; minimum temperature for growth above 6 °C, optimum 24 °C, maximum 30 °C. Colonies raised, velvety, dense, with entire margin; surface olivaceous-green, reverse olivaceous-black, often with a diffusing citron-yellow pigment.

Etymology: Named after its country of origin, Australia.

Cultural characteristics: Colonies on MEA slow-growing, reaching 27 mm diam after 14 d at 24 °C, with entire, smooth, sharp margin; mycelium mostly submerged, some floccose to lanose aerial mycelium in the olivaceous-grey centre, becoming pale pinkish olivaceous towards the margin; reverse pale orange.

Etymology: Named after its biverticillate conidiophores.

Cultural characteristics: Colonies on MEA rather slow-growing, reaching 12 mm diam after 14 d at 24 °C, velvety to hairy, with entire margin; surface dark olivaceous-grey, with black gelatinous exudate droplets on OA.

Specimen examined: Japan, isolated from Sasa sp., K. Tubaki, CBS 103.59, ex-type.

Notes: Phylogenetically, this species together with Ramichloridium apiculatum and R. musae clusters within the Mycosphaerellaceae clade. Ramichloridium cerophilum can be distinguished from its relatives by the production of secondary conidia and its distinct conidial hila.

Cultural characteristics: Colonies reaching 7 mm diam after 14 d at 24 °C. Colonies velvety, rather compact, slightly elevated with entire margin; surface dark olivaceous-green in the central part, margin smooth, whitish.

Notes: The name Racodium Fr., typified by Ra. rupestre Pers. : Fr., has been conserved over the older one by Persoon, with Ra. cellare as type species. De Hoog (1979) defended the use of Zasmidium in its place for the well-known wine-cellar fungus. Morphologically Zasmidium resembles Stenella Syd., and both reside in the Capnodiales, though the type of Stenella, S. araguata Syd., clusters in the Teratosphaeriaceae, and the type of Zasmidium, Z. cellare, in the Mycosphaerellaceae. When accepting anamorph genera as polyphyletic within an order, preference would be given to the well-known name Stenella over the less known Zasmidium, even though the latter name is older. Further studies are required, however, to clarify if all stenella-like taxa should be accommodated in a single genus, Stenella. If this is indeed the case, a new combination for Zasmidium cellare will be proposed in Stenella, and the latter genus will have to be conserved over Zasmidium.

Chaetothyriales (Herpotrichiellaceae)

The four "Ramichloridium" species residing in the Chaetothyriales clade do not differ sufficiently in morphology to separate them from Rhinocladiella (type Rh. atrovirens). Because of the pale brown conidiophores, conidiogenous cells with crowded, slightly prominent scars and the occasional presence of an Exophiala J.W. Carmich. synanamorph, Rhinocladiella is a suitable genus to accommodate them. These four species chiefly differ from Ramichloridium in the morphology of their conidial apparatus, which is clearly differentiated from the vegetative hyphae. The appropriate combinations are therefore introduced for Ramichloridium anceps, R. mackenziei, R. fasciculatum and R. basitonum. The genus Veronaea (type species: V. botryosa) also resides in the Chaetothyriales clade. Veronaea can be distinguished from Rhinocladiella by the absence of exophiala-type budding cells and its predominantly 1-septate conidia. Furthermore, the conidiogenous loci in Veronaea are rather flat, barely prominent.
Cultural characteristics: Colonies on MEA reaching 5 mm diam after 14 d at 24 °C, with entire, smooth, sharp margin; mycelium densely lanose and elevated in the centre, olivaceous-green to brown; reverse dark olivaceous. Cultural characteristics: Colonies rather slow-growing, reaching 15 mm diam on MEA after 14 d at 24 °C; surface velvety to lanose, slightly raised in the centre, pale grey to pale brownish grey; reverse dark grey. Etymology: Named after the country of origin, Japan. Cultural characteristics: Colonies rather slow-growing, reaching 7.5 mm diam on MEA after 14 d at 24 °C; surface velvety to lanose, slightly raised in the centre, olivaceous-brown, with entire margin; reverse dark olivaceous. Note: This species is morphologically similar to V. compacta (Papendorf 1976), but can be distinguished based on the presence of dark brown, swollen hyphal cells in culture, which are absent in V. compacta.

Pleurothecium obovoideum clade (Chaetosphaeriales)

Ramichloridium obovoideum was regarded as similar to "Ramichloridium" (Rhinocladiella) mackenziei by some authors, and subsequently reduced to synonymy (Ur-Rahman et al. 1988). However, R. obovoideum clusters with Carpoligna pleurothecii, the teleomorph of Pleurothecium Höhn. Because it is also morphologically similar to other species of Pleurothecium, we herewith combine it into that genus.

Ramichloridium schulzeri clade

Ramichloridium schulzeri, including its varieties, clusters near Thyridium Nitschke and the Magnaporthaceae, and is phylogenetically as well as morphologically distinct from the other genera in the Ramichloridium complex. To accommodate these taxa, a new genus is introduced below. Type species: Myrmecridium schulzeri (Sacc.) Arzanlou, W. Gams & Crous, comb. nov. Notes: Myrmecridium schulzeri was fully described as Acrotheca acuta Grove by Hughes (1951). The author discussed several genera, none of which is suitable for the present fungus for various reasons, as analysed by de Hoog (1977). Only Gomphinaria Preuss is not yet sufficiently documented. Our examination of G. amoena Preuss (B!) showed that this is an entirely different fungus, of which no fresh material is available to ascertain its position. Myrmecridium can be distinguished from other ramichloridium-like fungi by having entirely hyaline vegetative hyphae, and widely scattered, pimple-shaped denticles on the long hyaline rachis. The conidial sheath is visible in lactic acid mounts with brightfield microscopy. The Myrmecridium clade consists of several subclusters, which are insufficiently resolved based on the ITS sequence data. However, two morphologically distinct varieties of Myrmecridium are treated here. The status of the other isolates in this clade will be dealt with in a future study incorporating more strains, and using a multi-gene phylogenetic approach. Cultural characteristics: Colonies reaching 40 mm diam after 14 d at 24 °C; mycelium submerged, flat, smooth; centrally orange, later becoming powdery to velvety and greyish brown due to sporulation, with sharp, smooth, entire margin; reverse yellowish orange. Note: This former variety is sufficiently distinguished from M. schulzeri s. str. by its flexuose conidiophores and conidia which lack an acuminate base, to be regarded as a separate species.

(Ellis & Everh.) de Hoog, Stud. Mycol. 15: 79. 1977

Notes: According to the description and illustration of R. torvi provided by de Hoog (1977), this appears to be an additional species of Myrmecridium. Although it is morphologically similar to M.
flexuosum in having a flexuose rachis, it differs from the other species of the genus by having smooth, clavate conidia. Fresh collections and cultures would be required to resolve its status. Although Pseudovirgaria is morphologically similar to Virgaria Nees, it has hyaline to pale brown hyphae, conidia and conidiogenous cells. The conidiogenous cells are integrated in creeping threads (hyphae), terminal and intercalary, and the proliferation is distinctly sympodial. The subdenticulate conidiogenous loci are scattered, solitary, at small shoulders of geniculate conidiogenous cells, caused by sympodial proliferation, or aggregated, forming slight swellings of the rachis, i.e., a typical raduliform rachis as in Virgaria is lacking. Furthermore, the conidiogenous loci of Pseudovirgaria are bulging, convex, slightly attenuated towards the rounded apex, in contrast to the more cylindrical denticles in Virgaria (Ellis 1971). The scar type of Pseudovirgaria is peculiar due to its convex, papilla-like shape, and is reminiscent of conidiogenous loci in plant-pathogenic genera like Neoovularia U. Braun and Pseudodidymaria U. Braun (Braun 1998). The superficially similar genus Veronaea is quite distinct from Pseudovirgaria by having erect conidiophores with a typical rachis and crowded conidiogenous loci which are flat or only slightly prominent and darkened. Pseudovirgaria is characterised by its mycelium, which is composed of branched hyphae with integrated, terminal and intercalary conidiogenous cells. A differentiation between branched hyphae and "branched conidiophores" is difficult and barely possible. It remains unclear if the "creeping threads" and terminal branches of hyphae are to be interpreted as "creeping conidiophores". In any case, the mycelium forms complex fertile branched hyphal structures in which individual conidiophores are barely discernible. These structures and difficulties in discerning individual conidiophores remind one of some species of Pseudocercospora Speg. and other cercosporoid genera with abundant superficial mycelium in vivo. Etymology: Named after its hyperparasitic habit on rust fungi. Cultural characteristics: Colonies reaching 25 mm diam after 14 d at 24 °C; surface velvety, floccose, greyish sepia to hazel, with smooth margin; reverse mouse-grey to dark mouse-grey. Notes: The presence of 1-septate conidia in Veronaeopsis overlaps with Veronaea. However, Veronaeopsis differs from Veronaea based on its conidiophore and conidiogenous cell morphology. Veronaea has much longer, macronematous conidiophores than Veronaeopsis. Furthermore, Veronaea has a more or less straight rachis, whereas in Veronaeopsis the rachis is often geniculate. The conidiogenous loci in Veronaea are less prominent, i.e., less denticle-like.

Discussion

The present study was initiated chiefly to clarify the status of Ramichloridium musae, the causal organism of tropical speckle disease of banana (Jones 2000). Much confusion surrounded this name in the past, relating, respectively, to its validation, species and generic status. As was revealed in the present study, however, two species are involved in banana speckle disease, namely R. musae and R. biverticillatum. Even more surprising was the fact that Ramichloridium comprises anamorphs of Mycosphaerella Johanson (Mycosphaerellaceae), though no teleomorphs have thus far been conclusively linked to any species of Ramichloridium.
By investigating the Ramichloridium generic complex as outlined by de Hoog (1977), another genus associated with leaf spots, namely Periconiella, was also shown to represent an anamorph of Mycosphaerella. Although no teleomorph connections have been proven for ramichloridium-like taxa, de Hoog et al. (1983) refer to the type specimen (PC) of Wentiomyces javanicus Koord. (Pseudoperisporiaceae), on which some ramichloridium-like conidiophores were seen. Without fresh material and an anamorph-teleomorph connection proven in culture, however, this matter cannot be investigated further. It is interesting to note, however, that Wentiomyces Koord. shows a strong resemblance to Mycosphaerella, except for the external perithecial appendages. The genus Mycosphaerella is presently one of the largest genera of ascomycetes, containing close to 3 000 names (Aptroot 2006), to which approximately 30 anamorph genera have already been linked (Crous et al. 2006a, b, 2007). By adding two additional anamorph genera, the Mycosphaerella complex appears to be expanding even further, though some taxa have been shown to reside in other families in the Capnodiales, such as Davidiella Crous & U. Braun (Davidiellaceae) and Teratosphaeria (Teratosphaeriaceae) (Braun et al. 2003, Schubert et al. 2007). Another family, which proved to accommodate several ramichloridium-like taxa, is the Herpotrichiellaceae (Chaetothyriales). Members of the Chaetothyriales are regularly encountered as causal agents of human mycoses (Haase et al. 1999), whereas species of the Capnodiales are common plant pathogens, or chiefly associated with plants. Species in the Chaetothyriales have consistently melanized thalli, which is a factor enabling them to invade humans and cause a wide diversity of mycoses, such as chromoblastomycosis, mycetoma, brain infection and subcutaneous phaeohyphomycosis. The only known teleomorph connection in this order is Capronia Sacc. (Untereiner & Naveau 1999). Rhinocladiella and Veronaea were in the past frequently confused with the genus Ramichloridium. However, Rhinocladiella, as well as Veronaea and Thysanorea, were shown to cluster in the Chaetothyriales, while Ramichloridium clusters in the Capnodiales. Rhinocladiella mackenziei, which causes severe cerebral phaeohyphomycosis in humans (Sutton et al. 1998), has in the past been confused with Pleurothecium obovoideum (Ur-Rahman et al. 1988). Data presented here reveal, however, that although morphologically similar, these species are phylogenetically separate, with P. obovoideum belonging to the Sordariales, where it clusters with sexual species of Carpoligna F.A. Fernández & Huhndorf that have Pleurothecium anamorphs (Fernández et al. 1999). In addition to the genera clustering in the Capnodiales and Chaetothyriales, several ramichloridium-like genera are newly introduced to accommodate species that cluster elsewhere in the ascomycetes, namely Pseudovirgaria, Radulidium, Myrmecridium, Veronaeopsis and Rhodoveronaea. Although the ecological role of these taxa is much less known than that of taxa in the Capnodiales and Chaetothyriales, some exhibit an interesting ecology. For instance, the fungicolous habit of Pseudovirgaria, as well as of some species in Radulidium, which are found on various rust species, suggests that these genera should be screened further to establish whether they have any potential biocontrol properties.
Furthermore, these two genera share a common ancestor, and further work is required to determine whether speciation was shaped by co-evolution with the rusts. A further species of "Veronaea" that might belong to Pseudovirgaria is Veronaea harunganae (Hansf.) M.B. Ellis, which is known to occur on Hemileia harunganae Cummins on Harungana in Tanzania and Uganda (Ellis 1976). The latter species, however, is presently not known from culture, and needs to be recollected to facilitate further study. The genera distinguished here represent homogeneous clades in the phylogenetic analysis. Only the species of Rhinocladiella are dispersed among taxa morphologically classified in Exophiala or other genera. By integrating the phylogenetic data generated here with the various morphological data sets, we were able to resolve eight clades for taxa formerly regarded as representative of the Ramichloridium complex. According to the phylogeny inferred from 28S rDNA sequence data, the genera Ramichloridium and Periconiella were heterogeneous, requiring the introduction of several novel genera. Although the present 11-odd genera can still be distinguished based on their morphology, it is unlikely that morphological identifications without the supplement of molecular data would in future be able to accurately identify all the novel isolates that undoubtedly await description. The integration of morphology with phylogenetic data not only helps to resolve generic affinities, but also assists in discriminating between the various cryptic species that surround many of these well-known names that are presently freely used in the literature. To that end, it is interesting to note that for the majority of the taxa studied here, the ITS domain (Table 1) provided good species resolution. However, more genes will have to be screened in future studies aimed at characterising some of the species complexes where the ITS domain provided insufficient phylogenetic signal (data not shown) to resolve all of the observed morphological species.

Acknowledgements

The work of Mahdi Arzanlou was funded by the Ministry of Science, Research and Technology of Iran, which we gratefully acknowledge. Several colleagues from different countries provided material without which this work would not have been possible. We thank Marjan Vermaas for preparing the photographic plates, and Arien van Iperen for taking care of the cultures.
2014-10-01T00:00:00.000Z
2007-01-01T00:00:00.000
{ "year": 2007, "sha1": "d5a79823bb715913253ac8aa995d026e2e897413", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.3114/sim.2007.58.03", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "7fb4c7ef9860ab5d9b4a4322dca7b7a00bc0cb87", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
199543598
pes2o/s2orc
v3-fos-license
DPPy: Sampling DPPs with Python

Determinantal point processes (DPPs) are specific probability distributions over clouds of points that are used as models and computational tools across physics, probability, statistics, and more recently machine learning. Sampling from DPPs is a challenge, and therefore we present DPPy, a Python toolbox that gathers known exact and approximate sampling algorithms for both finite and continuous DPPs. The project is hosted on GitHub and equipped with extensive documentation.

In ML, DPPs mainly serve to model diverse sets of items, as in recommendation (Kathuria et al., 2016; Gartrell et al., 2016) or text summarization (Dupuy & Bach, 2018). Consequently, MLers use mostly finite DPPs, which are distributions over subsets of a finite ground set of cardinality M, parametrized by an M × M kernel matrix K. Routine inference tasks such as normalization, marginalization, or sampling have complexity O(M^3) (Gillenwater, 2014). Like other kernel methods, when M is large, O(M^3) is a bottleneck. In terms of software, the R library spatstat (Baddeley & Turner, 2005), a general-purpose toolbox on spatial point processes, includes sampling and learning of continuous DPPs with stationary kernels, as described by Lavancier et al. (2012). Complementarily, we propose DPPy, a turnkey Python implementation of known general algorithms to sample finite DPPs. We also include algorithms for non-stationary continuous DPPs, e.g., related to random covariance matrices or Monte Carlo methods, that are also of interest for MLers. The DPPy project, hosted on GitHub, is already being used by the cross-disciplinary DPP community (Burt et al., 2019; Kammoun, 2018; Poulson, 2019; Dereziński et al., 2019; Gautier et al., 2019). We use Travis for continuous integration and Coveralls for test coverage. Through ReadTheDocs we provide an extensive documentation, which covers the essential mathematical background and showcases the key properties of DPPs through DPPy objects and associated methods. DPPy thus also serves as a tutorial.

Definitions

A point process is a random subset of points X = {X_1, . . . , X_N} of a ground space 𝕏, where the number of points N is itself random. We further add to the definition that N should be almost surely finite and that all points in a sample are distinct. Given a reference measure μ on 𝕏, a point process is usually characterized by its k-correlation functions ρ_k for all k, where ρ_k : 𝕏^k → [0, ∞) satisfies

E[ Σ f(x_1, . . . , x_k) ] = ∫_{𝕏^k} f(x_1, . . . , x_k) ρ_k(x_1, . . . , x_k) μ(dx_1) ⋯ μ(dx_k)

for all bounded measurable f, the sum running over k-tuples of pairwise distinct points of X; see Møller & Waagepetersen (2004, Section 4). The functions ρ_k describe the interaction among points in X by quantifying co-occurrence of points at a set of locations. A point process X on (𝕏, μ) parametrized by a kernel K : 𝕏 × 𝕏 → C is said to be determinantal, denoted as X ∼ DPP(K), if its k-correlation functions satisfy

ρ_k(x_1, . . . , x_k) = det [K(x_p, x_q)]_{p,q=1}^{k}.

In ML, most DPPs are in the finite setting where 𝕏 = {1, . . . , M} and μ = Σ_{i=1}^{M} δ_i. In this context, the kernel function becomes an M × M matrix K, and the correlation functions refer to inclusion probabilities. DPPs are thus often defined as X such that

P(S ⊆ X) = det K_S, for every S ⊆ {1, . . . , M}, (1)

where K_S denotes the submatrix of K formed by the rows and columns indexed by S. The kernel matrix K is commonly assumed to be real-symmetric, in which case the existence and uniqueness of the DPP in Equation 1 is equivalent to the condition that the eigenvalues of K lie in [0, 1]. The result also holds for general Hermitian kernel functions K with additional assumptions (Soshnikov, 2000, Theorem 3). We note that there are also DPPs with nonsymmetric kernels (Borodin et al., 2010; Gartrell et al., 2019).
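As a quick numerical illustration of Equation 1 and of the negative correlation it implies, the following self-contained NumPy check (not DPPy code; the kernel and index pairs are arbitrary choices made here for the example) builds a valid correlation kernel and compares pairwise inclusion probabilities with products of marginals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a valid real-symmetric correlation kernel: eigenvalues in [0, 1].
M = 5
Q, _ = np.linalg.qr(rng.normal(size=(M, M)))  # random orthogonal basis
lam = rng.uniform(0.0, 1.0, size=M)           # eigenvalues in [0, 1]
K = (Q * lam) @ Q.T

def inclusion_prob(K, S):
    """P(S subset of X) = det K_S (Equation 1)."""
    idx = list(S)
    return np.linalg.det(K[np.ix_(idx, idx)])

# det K_{ij} = K_ii K_jj - K_ij^2 <= K_ii K_jj: items repel each other.
for i, j in [(0, 1), (2, 4)]:
    joint = inclusion_prob(K, [i, j])
    independent = inclusion_prob(K, [i]) * inclusion_prob(K, [j])
    print(f"P({{{i},{j}}} in X) = {joint:.4f} <= {independent:.4f} = P(i)P(j)")
```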
Oftentimes, ML practitioners favor a more flexible definition of a DPP in terms of a likelihood kernel L, which only requires L ⪰ 0, so that

P(X = S) = det L_S / det(I + L),

rather than a correlation kernel 0 ⪯ K ⪯ I. Yet, the L parametrization makes Equation 1 less interpretable and does not cover important cases such as fixed-size DPPs, which are achievable using projection kernels K. Kulesza & Taskar (2012, Section 5) countered that with k-DPPs, which can be understood as DPPs parametrized by a likelihood kernel, conditioned to have exactly k elements. However, in general, k-DPPs are not DPPs. The main interest in DPPs in ML is that they model diversity while being tractable. Compared to independent sampling with the same marginals, Equation 1 entails

P({i, j} ⊆ X) = K_ii K_jj − |K_ij|² = P({i} ⊆ X) P({j} ⊆ X) − |K_ij|²,

so that the larger |K_ij|, the less likely items i and j co-occur. If K_ij models the similarity between items i and j, DPPs are thus random diverse sets of elements. Most point processes that encode diversity are not tractable, in the sense that efficient algorithms to sample, marginalize, or compute normalization constants are not available. However, DPPs are amenable to these tasks with polynomial complexity (Gillenwater, 2014). Next, we present the challenging task of sampling, which is the core of DPPy.

Sampling determinantal point processes

We assume henceforth that K is real-symmetric and satisfies suitable conditions (Soshnikov, 2000, Theorem 3) so that its spectral decomposition is available,

K(x, y) = Σ_i λ_i φ_i(x) φ_i(y), with λ_i ∈ [0, 1].

Note that, in the finite case, the spectral theorem is enough to eigendecompose K. Hough et al. (2006, Theorem 7) proved that sampling DPP(K) can be done in two steps:

1. draw B_i ∼ Ber(λ_i) independently and denote {i_1, . . . , i_N} = {i : B_i = 1},
2. sample from the DPP with kernel K̃(x, y) = Σ_{n=1}^{N} φ_{i_n}(x) φ_{i_n}(y).

In other words, all DPPs are mixtures of projection DPPs, that is, DPPs parametrized by an orthogonal projection kernel. In a nutshell, Step 1 selects a component of the mixture and Step 2 generates a sample of the projection DPP(K̃). Hough et al. (2006, Algorithm 18) provide a generic projection DPP sampler that we briefly describe. First, the projection DPP with kernel K̃ has exactly N = rank K̃ points, μ-almost surely. To each x ∈ 𝕏 we associate a feature vector Φ(x) := (φ_{i_1}(x), . . . , φ_{i_N}(x)), so that K̃(x, y) = Φ(x)^T Φ(y). The chain rule is then applied to sample (X_1, . . . , X_N) with probability distribution

(1/N!) det [K̃(x_p, x_q)]_{p,q=1}^{N} = (‖Φ(x_1)‖² / N) Π_{n=2}^{N} [ dist²(Φ(x_n), span{Φ(x_1), . . . , Φ(x_{n−1})}) / (N − n + 1) ], (2)

and the sequential aspect can then be discarded to get a valid sample {X_1, . . . , X_N} ∼ DPP(K̃). A few remarks are in order. First, the LHS of Equation 2 defines an exchangeable probability distribution. Second, the successive ratios that appear in the RHS are the normalized conditional densities (w.r.t. μ) that drive the chain rule. The associated normalizing constants are independent of the previous points. The numerators can be written as the ratio of two determinants and further expanded with Woodbury's formula. They can be identified as the incremental posterior variances in Gaussian process regression with kernel K̃ (Rasmussen & Williams, 2006, Equation 2.26). Third, the chain rule expressed in Equation 2 has a strong Gram-Schmidt flavor since it actually comes from a recursive application of the base×height formula. In the end, DPPs favor configurations of points whose feature vectors Φ(x_1), . . . , Φ(x_N) span a large volume, which is another way of understanding repulsiveness. The previous sampling scheme is exact and generic but, except for projection kernels, it requires the eigendecomposition of the underlying kernel.
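The two-step scheme and the chain rule above can be written in a few lines of NumPy for the finite case. This is a hedged, self-contained sketch of the generic sampler, not DPPy's own (more careful) implementation; the conditioning step uses a Schur complement, which is one way of realizing the determinant ratios mentioned in the text.

```python
import numpy as np

def sample_dpp(K, rng):
    """Exact sampling of a finite DPP(K), for symmetric K with
    eigenvalues in [0, 1], via the two-step scheme of Hough et al. (2006)."""
    # Step 1: Bernoulli selection of eigenvectors, B_i ~ Ber(lambda_i).
    lam, V = np.linalg.eigh(K)
    V = V[:, rng.uniform(size=lam.size) < lam]
    K_proj = V @ V.T  # projection kernel; the sample has N = V.shape[1] points

    # Step 2: chain rule for the projection DPP. Each iteration samples a
    # point from the normalized conditional density (proportional to the
    # diagonal of the conditioned kernel), then conditions on its inclusion
    # via a Schur complement.
    sample = []
    for _ in range(V.shape[1]):
        probs = np.clip(np.diag(K_proj), 0.0, None)
        i = rng.choice(len(probs), p=probs / probs.sum())
        sample.append(i)
        K_proj = K_proj - np.outer(K_proj[:, i], K_proj[i, :]) / K_proj[i, i]
    return sample

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(20, 20)))
K = (Q * rng.uniform(size=20)) @ Q.T  # a small test kernel
print(sample_dpp(K, rng))
```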
In the finite setting, this corresponds to an initial O(M^3) cost, after which the complexity of drawing exact samples is of order O(M E[|X|]^2) (see, e.g., Gillenwater, 2014; Tremblay et al., 2018). Besides, there exist some alternative exact samplers (e.g., Poulson, 2019); exact sampling can also be done by a rejection sampling mechanism with a tailored proposal. In applications where the costs related to exact sampling are a bottleneck, users rely on approximate sampling. Research has focused mainly on kernel approximation (Affandi et al., 2013) and MCMC samplers (Anari et al., 2016; Li et al., 2016; Gautier et al., 2017). However, specific DPPs admit efficient exact samplers that do not rely on Equation 2, e.g., uniform spanning trees (UST, Propp & Wilson, 1998, Figure 1(c)) or eigenvalues of random matrices. For instance, a β-ensemble is a set of N points of R with joint distribution proportional to

Π_{i<j} |x_i − x_j|^β Π_{i=1}^{N} ω(x_i).

For some choices of the weight function ω, the β-ensemble can be sampled by computing the eigenvalues of simple tridiagonal (Dumitriu & Edelman, 2002) or quindiagonal random matrices (Killip & Nenciu, 2004). In particular, (β = 2)-ensembles correspond to projection DPPs (König, 2004).

DPPy can readily serve as research and teaching support. DPPy is also ready for other contributors to add content and enlarge its scope, e.g., with procedures for learning kernels.
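For concreteness, here is a hedged NumPy/SciPy sketch of the tridiagonal model of Dumitriu & Edelman (2002) for the Hermite (Gaussian) β-ensemble. Scaling conventions for the weight ω vary across references, so the constants below reflect one common choice rather than DPPy's internals.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def hermite_beta_ensemble(n, beta, rng):
    """Eigenvalues of the Dumitriu-Edelman tridiagonal model: a Gaussian
    diagonal and chi-distributed off-diagonal entries with decreasing
    degrees of freedom beta*(n-1), ..., beta."""
    diag = rng.normal(0.0, np.sqrt(2.0), size=n)
    off = np.sqrt(rng.chisquare(beta * np.arange(n - 1, 0, -1)))
    # The 1/sqrt(2) factor matches one standard Gaussian-weight convention.
    return eigh_tridiagonal(diag / np.sqrt(2.0), off / np.sqrt(2.0),
                            eigvals_only=True)

rng = np.random.default_rng(2)
eigs = hermite_beta_ensemble(200, beta=2, rng=rng)  # beta = 2: a projection DPP
```

Computing eigenvalues of a tridiagonal matrix costs O(N^2) or less, which is why this route is dramatically cheaper than a generic chain-rule sampler for these particular DPPs.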
2018-09-19T15:53:00.000Z
2018-09-19T00:00:00.000
{ "year": 2018, "sha1": "ce80f8f3678a6419cb4b9038ad689d1c707efa97", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ce80f8f3678a6419cb4b9038ad689d1c707efa97", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
90809241
pes2o/s2orc
v3-fos-license
Molecular techniques for the detection of the antimicrobial sensitivity: friend or foe?
Correspondence: Vittorio Sambri, Unit of Microbiology, Centro Servizi AUSL della Romagna, Piazza della Liberazione 60, 47522 Pievesestina (FC), Italy. Tel.: +39.0547.394906. E-mail: vittorio.sambri@auslromagna.it

The introduction of molecular techniques into the routine diagnostic workflow of the Microbiology laboratories started some time ago, and in the last 10 years this has also included the detection of the antimicrobial sensitivity of bacteria and fungi. Figure 1 reports the number of published papers per year found on the PubMed web site (https://www.ncbi.nlm.nih.gov/pubmed, accessed on February 5th, 2018) with the search key antimicrobial susceptibility testing molecular method: it is clearly evident that research in this field has produced an increasing number of papers year after year, with more than 100 manuscripts annually from 2013 to 2017. This impressive expansion of papers has, as a consequence, increased the routine use of molecular-based techniques in the daily workflow of several Microbiology laboratories. This brief editorial is dedicated to a partial review of what is available today in the field of molecular antimicrobial susceptibility testing (mAST) and to a comment on the most prominent pros and cons of the use of these methods. Given the large variety of molecular methods that can be used for the identification of specific microbial genes, these techniques are nowadays widely used to evaluate the presence of target sequences capable of determining antibiotic-resistant phenotypes in clinically relevant germs. The list of the available (mostly commercial) techniques to perform mAST is very long, but almost all fall within the following categories: PCR based (either with single or multiple target sequences, mostly with real-time detection), MALDI-ToF, microarrays and FISH, microfluidics and, finally, whole genome sequencing.
In brief, the methods based on the amplification of selected resistance genes by the many available technical variants of PCR are by far the most commonly used in routine practice. PCR techniques have been in place for many years for the identification of single antimicrobial resistance targets, such as the family of genes determining the MRSA/MRSE phenotypes or the vancomycin resistance related genes in Enterococcus spp. Over the last 5 years these techniques have also been made available in the format of multiple PCRs in a single box, thus very simply allowing the simultaneous detection of a panel of resistance genes in only one testing run. It is indeed of note that the combination of genes in these panels plays an extremely relevant role in terms of the clinical utility of the results. As an example, the detection of the five major carbapenemase-related genes clearly indicates the possible presence of one of the most widespread carbapenemase-producing Enterobacteria (CPE), since the panel covers the most epidemiologically relevant CPE-related sequences worldwide. This is not the case in mAST for the detection of genes related to the ESbL phenotype, since the galaxy of related genes (and SNP variants) is well beyond any possibility of detection, even when using a multiplex targeted technique. Mass spectrometry (MALDI-ToF), aside from being today the reference method for bacterial identification, has been proposed to detect spectral modifications related to resistance to selected molecules, such as carbapenems and vancomycin. This technique is also used to identify the hydrolysis of drugs after incubation in the presence of bacteria suspected to bear a specific resistance phenotype, with variable results. Microarrays and related hybridization-based techniques such as FISH can be used to identify specific genes based on binding with complementary oligonucleotides. One prominent advantage of microarrays with respect to FISH is owed to the fact that this method can assemble onto a microscopic solid support a large number of different nucleotide sequences, thus allowing the multiple detection of thousands of different genes (and SNP variants) in a single testing run. On the other hand, FISH has a lower multiplexing capability, but requires less sophisticated instruments. Bio-engineering and nanotechnologies have recently evolved to allow the size reduction, or better the miniaturization, of several different molecular assays, including some methods for mAST. These newly developed assays are in general identified as lab-on-chip, and they require an extremely low volume of reagent (in the magnitude of picolitres). Everything required to achieve the final result is incorporated into these miniaturized devices, including the bacterial culture system. As far as fast response is concerned, these methods are really promising, since they can provide results within a single shift (i.e.
3 to 7 hours). The very recent accessibility of next-generation sequencing instruments to a larger number of laboratories has also brought Clinical Microbiologists face to face with the possibility of obtaining cheap whole bacterial genome sequencing (WGS) data, which of course also include the whole pattern of genes related to antimicrobial resistance. Nowadays there are many pilot studies that show how WGS data could be useful to identify the pattern of resistance genes, but unfortunately most of these papers are based on just a small number of clinical isolates, and as a consequence the true clinical value of this approach is still to be determined. As very clearly pointed out in the recent paper by E. Carretto (The clinical microbiology in the era of the many -omics, see this issue of Microbiologia Medica), only the joint efforts of well-trained Clinical Microbiologists together with skilful Bioinformaticians will make the WGS approach a routinely clinically relevant tool for the appropriate treatment of severe infections by multidrug-resistant germs. All of the above-listed diagnostic methods are potentially appropriate on primary blood samples in the case of patients suffering from suspected sepsis, or following the isolation of a germ after a standard culture-based protocol. It is of note that the application of these techniques to the routine microbiology diagnostic workflow has indeed large advantages, provided that both the Clinicians and the Microbiologists can interpret the findings with the required level of criticism. In detail, the most evident pros of this approach to the determination of AST can be summarized as follows: i) germs are not required to be alive or to replicate in vitro (specific gene sequences are detected); ii) very low amounts of target sequences are usually identified (each technique has its own limit of detection (LOD), largely dependent on the sensitivity of the reaction and on the number of target sequences contained in the reaction mixture itself); iii) the sensitivity of these methods is in general not influenced by any ongoing antibacterial treatment; iv) the turnaround time (TAT) is very fast (frequently within one shift). The other side of this coin clearly shows some relevant cons, as listed here: i) these techniques can only detect the presence of predetermined genes, depending on each single panel (see above the issue about the feasibility of a test that detects the ESbL phenotypes); ii) the composition of each panel, or the use of tests that identify single target sequences, of course influences the clinical meaning of the results; iii) the sensitivity of each test largely depends on the number of targets and on the design of each single PCR reaction; iv) in the case of an unexpected sequence mutation (either at the SNP or at a higher level), the genotype could go undetected even in the presence of an unmodified phenotype (i.e. a false-negative result). In conclusion, the use of these molecular-based techniques for AST is of certain relevance and will become more and more a common feature of the diagnostic workflow in clinical Microbiology laboratories. This will generate an undeniable clinical advantage, provided that a precise selection of the patients and of the techniques is achieved. Who else but a well-trained and scientifically updated Clinical Microbiologist could do this?

Figure 1. Number of published papers per year found in PubMed.
2018-12-05T05:36:39.947Z
2018-02-27T00:00:00.000
{ "year": 2018, "sha1": "693628cb95c7a3e4939a50fdeae497e7e00e9bbc", "oa_license": "CCBYNC", "oa_url": "https://www.pagepressjournals.org/index.php/mm/article/download/7346/7005", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "693628cb95c7a3e4939a50fdeae497e7e00e9bbc", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
226666357
pes2o/s2orc
v3-fos-license
A new family of boundary-domain integral equations for the diffusion equation with variable coefficient in unbounded domains

A system of Boundary-Domain Integral Equations is derived from the mixed (Dirichlet-Neumann) boundary value problem for the diffusion equation in inhomogeneous media defined on an unbounded domain. This paper extends the work introduced in [25] to unbounded domains. Mapping properties of parametrix-based potentials on weighted Sobolev spaces are analysed. Equivalence between the original boundary value problem and the system of BDIEs is shown. Uniqueness of the solution of the BDIEs is proved using the Fredholm alternative and compactness arguments adapted to weighted Sobolev spaces.

1. Introduction. Boundary-Domain Integral Equations appear naturally when applying the Boundary Integral Method to boundary value problems with variable coefficients. This class of boundary value problems has a wide range of applications in physics and engineering, such as heat transfer in non-homogeneous media [27], motion of laminar fluids with variable viscosity [5], or even acoustic scattering by an inhomogeneous anisotropic obstacle [6]. The popularity of the Boundary Integral Method is due to the reduction of the discretisation domain. For example, if the boundary value problem (BVP) is defined on a three-dimensional domain, then the boundary integral method reduces the BVP to an equivalent system of boundary integral equations (BIEs) defined only on the boundary of the domain. However, this requires an explicit fundamental solution of the partial differential equation appearing in the BVP. Although such fundamental solutions may exist, they are not always available explicitly for PDEs with variable coefficients. To overcome this obstacle, one can construct a parametrix using the known fundamental solution. A discussion on fundamental solution existence theorems, algorithms for constructing fundamental solutions, and parametrices is available in [24]; for classical examples of the derivation of Boundary-Domain Integral Equations refer to [7] for the diffusion equation with variable coefficient in bounded domains in R^3; [25] for the same problem applying a different parametrix; [26] for the Dirichlet problem in R^2; and [22] for the mixed problem for the compressible Stokes system, as an example of the derivation of BDIEs from a PDE system. The introduction of a parametrix for BVPs with variable coefficients leads to a system of integral equations defined not only on the boundary but also in the domain. Still, one can transform domain integrals into boundary integrals by applying the methods shown in [1]. These methods help to preserve the reduction of dimension while also removing singularities appearing in the domain integrals. The approximation of numerical solutions of BDIEs is a relevant problem nowadays. In particular, the very recent article [2] focuses on the solution of the mixed BVP analogous to the one presented in this paper in R^2. In [3], the authors show that it is possible to obtain linear convergence with respect to the number of quadrature curves, and in some cases, exponential convergence. Analogous research in 3D shows the successful implementation of fast algorithms to obtain the solution of boundary-domain integral equations, see [27, 14, 28]. Furthermore, the authors of [4] show the application of the Boundary-Domain Integral Equation Method to the study of inverse problems with variable coefficients. A parametrix is not unique; see the discussion in [25, Section 1].
The study of different parametrices is advantageous for constructing parametrices for PDE systems. Moreover, numerical methods may work more efficiently with one parametrix than with another. However, before attempting numerical experiments, results on the existence and uniqueness of the solution need to be established, and that is the purpose of this paper. In this paper, we extend the results presented in [25] to unbounded domains, employing a different parametrix from the one used in [9]. In unbounded domains, the mixed problem is set in weighted Sobolev spaces to allow constant functions in unbounded domains to be possible solutions of the problem. Hence, all the mapping properties of the parametrix-based potential operators are shown in weighted Sobolev spaces. An analysis of the uniqueness of the BDIES is performed by studying the Fredholm properties of the matrix operator which defines the system. Unlike in the case of bounded domains, the Rellich compactness embedding theorem, see [18, Theorem 3.27], is not available for Sobolev spaces defined over unbounded domains. Nevertheless, we present a lemma to decompose the remainder operator into two operators: one with arbitrarily small norm and one compact. Therefore, we can still benefit from the Fredholm alternative theory to prove uniqueness of the solution.

2. Weighted Sobolev spaces. Let Ω = Ω⁺ be an unbounded exterior connected domain of R³. Let Ω⁻ := R³ ∖ Ω⁺ be the complementary (bounded) subset of Ω. The boundary S := ∂Ω is simply connected, closed and infinitely differentiable, S ∈ C^∞. Furthermore, S := S̄_N ∪ S̄_D, where both S_N and S_D are non-empty, connected disjoint submanifolds of S. The border of these two submanifolds is also infinitely differentiable: ∂S_N = ∂S_D ∈ C^∞. With regard to the function spaces that we employ in this paper, D(Ω) := C^∞_comp(Ω) denotes the space of test functions, i.e. C^∞ functions with compact support inside Ω, while D*(Ω) denotes the space of distributions or generalised functions. The space D(Ω̄) is the set of restrictions to Ω̄ of functions from D(R³). We also use Sobolev spaces H^k(Ω) with k ∈ Z, and Bessel potential spaces on the boundary of the domain, H^s(∂Ω), where s ∈ R (see, e.g., [18, 16]). To ensure unique solvability of the BVPs in exterior domains, we will use weighted Sobolev spaces with weight ω(x) = (1 + |x|²)^{1/2} (see, e.g., [9]). Let

L²(ω^{-1}; Ω) := { g : ω^{-1} g ∈ L²(Ω) }

be the weighted Lebesgue space, and let H¹(Ω) be the following weighted Sobolev (Beppo-Levi) space constructed using the L²(ω^{-1}; Ω) space,

H¹(Ω) := { g ∈ L²(ω^{-1}; Ω) : ∇g ∈ L²(Ω) },

endowed with the corresponding norm ‖g‖²_{H¹(Ω)} := ‖ω^{-1} g‖²_{L²(Ω)} + ‖∇g‖²_{L²(Ω)}. Taking into account the density of D(Ω̄) in the unweighted Sobolev space, it is easy to prove that D(Ω̄) is dense in H¹(Ω). For further details, cf. [9, p. 3] and more references therein. If Ω is unbounded, then the seminorm |g|_{H¹(Ω)} := ‖∇g‖_{L²(Ω)} is equivalent to the norm ‖g‖_{H¹(Ω)} in H¹(Ω). If Ω′ is a bounded subdomain of an unbounded domain Ω and g ∈ H¹(Ω), then g ∈ H¹(Ω′). For any generalised function g in H^{-1}(Ω), we have the following representation property, see [9, Section 2]:

g = Σ_{i=1}^{3} ∂_i g_i + g_0, with g_i ∈ L²(R³) equal to zero outside the domain Ω, and g_0 ∈ L²(ω; Ω).

Consequently, D(Ω) is dense in H^{-1}(Ω).

3. Traces, conormal derivatives and Green identities. We consider the following differential operator,

A u(x) := ∇ · ( a(x) ∇u(x) ),  (3.1)

where a(x) ∈ C², a(x) > 0, is a variable coefficient. It is easy to see that if a ≡ 1, then the operator A becomes the Laplace operator ∆. Here and thereafter, we will assume the following condition on the coefficient a(x). Condition 3.1. The coefficient a(x) belongs to the space L^∞(Ω).
Furthermore, there exist two positive constants, C₁ and C₂, such that

C₁ ≤ a(x) ≤ C₂ for almost every x ∈ Ω.  (3.2)

Condition 3.1 is necessary so that the operator A acting on u ∈ H¹(Ω) is well defined in the weak sense. Hence, we define the operator A in the weak sense as

⟨Au, v⟩_Ω := −E(u, v), v ∈ D(Ω), where E(u, v) := ∫_Ω a(x) ∇u(x) · ∇v(x) dx.

For a scalar function u ∈ H¹(Ω), by virtue of the trace theorem it follows that γ±u ∈ H^{1/2}(S), where the trace operators from Ω± to S are denoted by γ±, respectively. Consequently, if u belongs to the weighted space H¹(Ω), then u ∈ H¹(Ω′) on bounded subdomains, and it follows that γ±u ∈ H^{1/2}(S) (see, e.g., [18, 19]). For u ∈ H^s(Ω), s > 3/2, we can define by T± the conormal derivative operator acting on S, understood in the classical sense,

T± u := a γ±( n · ∇u ),  (3.5)

where n(x) is the exterior unit normal vector to the domain Ω at a point x ∈ S. However, for u ∈ H¹(Ω) (as well as for u in the weighted space H¹(Ω)), the classical conormal derivative operator may not exist in the trace sense. This issue is overcome by introducing the following function space for the operator A (cf. [9]):

H^{1,0}(Ω; A) := { u ∈ H¹(Ω) : Au ∈ L²(ω; Ω) }.

Now, if a distribution u ∈ H^{1,0}(Ω; A), we can appropriately define the conormal derivative T⁺u ∈ H^{-1/2}(S) using Green's formula, cf. [18, 9],

⟨T⁺u, w⟩_S := ∫_Ω [ (γ⁻¹₊ w) Au + E(u, γ⁻¹₊ w) ] dx, w ∈ H^{1/2}(S),

where γ⁻¹₊ : H^{1/2}(S) → H¹(Ω) is a continuous right inverse to the trace operator γ⁺ : H¹(Ω) → H^{1/2}(S), while the brackets ⟨u, v⟩_S represent the duality brackets of the spaces H^{1/2}(S) and H^{-1/2}(S), which coincide with the scalar product in L²(S) when u, v ∈ L²(S). The operator T⁺ : H^{1,0}(Ω; A) → H^{-1/2}(S) is bounded and gives a continuous extension to H^{1,0}(Ω; A) of the classical conormal derivative operator (3.5). We remark that when a ≡ 1, the operator T⁺ becomes the continuous extension to H^{1,0}(Ω; ∆) of the classical normal derivative operator T⁺_∆ u = ∂_n u := n · ∇u. In a similar manner as in the proof of [18, Lemma 4.3] or [10, Lemma 3.2], the first Green identity holds for a distribution u ∈ H^{1,0}(Ω; A),

⟨T⁺u, γ⁺v⟩_S = ∫_Ω [ v Au + E(u, v) ] dx, v ∈ H¹(Ω).  (3.8)

Applying the identity (3.8) to u, v ∈ H^{1,0}(Ω; A), exchanging the roles of u and v, and then subtracting the one from the other, we arrive at the following second Green identity, see e.g. [18]:

∫_Ω [ u Av − v Au ] dx = ⟨T⁺v, γ⁺u⟩_S − ⟨T⁺u, γ⁺v⟩_S.  (3.9)

4. Boundary value problem. Now that we have shown that if u ∈ H^{1,0}(Ω; A), then its trace and its conormal derivative are well defined, it is possible to formulate the mixed problem for the operator A, for which we aim to derive an equivalent system of boundary-domain integral equations (BDIEs).

Mixed problem. Find u ∈ H^{1,0}(Ω; A) such that

Au = f in Ω;  (4.1)
γ⁺u = φ₀ on S_D;  (4.2)
T⁺u = ψ₀ on S_N;  (4.3)

where f ∈ L²(ω; Ω), φ₀ ∈ H^{1/2}(S_D) and ψ₀ ∈ H^{-1/2}(S_N). The previous BVP can be represented by an operator equation. The following result, Theorem 4.1, is well known and has been proved in [9, Appendix A] by using variational settings and the Lax-Milgram lemma. It is clear that the hypotheses of Theorem 4.1 are satisfied under the assumption of Condition 3.1. Hence, the mixed BVP (4.1)-(4.3) is uniquely solvable.

5. Parametrices and remainders. We define a parametrix (Levi function) P(x, y) for the differential operator A, differentiating with respect to x, as a function of two variables that satisfies

A(x; ∂_x) P(x, y) = δ(x − y) + R(x, y),  (5.1)

where δ(·) is the Dirac distribution and the term R(x, y) is a weakly singular distribution, i.e. O(|x − y|^{-2}), the so-called remainder. A given operator A may have more than one parametrix. For example, the parametrix

P^y(x, y) = P_∆(x − y) / a(y)

was employed in [21, 7] for the operator A given in (3.1), where

P_∆(x − y) = −1 / (4π |x − y|)  (5.2)

is the fundamental solution of the Laplace operator. The remainder corresponding to the parametrix P^y is

R^y(x, y) = (1 / a(y)) ∇a(x) · ∇_x P_∆(x − y).

In this paper, we consider the parametrix P^x used in [25, 23], where analogous results to the ones presented in the upcoming sections have been obtained in bounded domains with smooth and Lipschitz boundaries.
The parametrix P^x is defined as follows:

P^x(x, y) = P_∆(x − y) / a(x) = −1 / (4π a(x) |x − y|),

which leads to the corresponding remainder

R^x(x, y) = −∇_x · ( ∇(ln a)(x) P_∆(x − y) )  (5.3)

(a short consistency check of these two formulas is given at the end of this section). Due to the smoothness of the variable coefficient a(x), both remainders R^x and R^y are weakly singular, i.e. O(|x − y|^{-2}), which has been used to derive results analogous to those in this paper in [9]. The parametrix P^y has been widely analysed in the literature, see [21, 20, 14, 7, 8]. The difference between the two parametrices lies in the variable on which the coefficient depends, a(x) or a(y). Clearly, choosing a parametrix involving a(y) simplifies the expression of the remainder, as the coefficient a(y) acts as a constant when differentiating with respect to x, which is the variable of differentiation of the operator A. However, for some PDE problems it is not always possible to obtain a parametrix that depends exclusively on a(y) and not on a(x). This is the case for the Stokes system, see [22]. Hence the usefulness of the analysis of the family of parametrices depending on a(x).

6. Volume and surface potentials. Boundary-domain integral equations are usually formulated in terms of parametrix-based surface and volume potential operators. In this section, the surface and volume potentials based on the parametrix P^x are introduced, and we analyse their mapping properties in weighted Sobolev spaces. Additional boundedness conditions are often imposed on the variable coefficient a(x) in order to prove the boundedness properties of the potential operators.

Condition 6.1. We will assume the following condition from now on, unless stated otherwise:

a ∈ C¹(R³) and ω ∇a ∈ L^∞(R³).  (6.1)

If conditions (3.2) and (6.1) hold, then

k₁ ‖g‖_{H¹(Ω)} ≤ ‖a g‖_{H¹(Ω)} ≤ k₂ ‖g‖_{H¹(Ω)},

where the constants k₁ and k₂ do not depend on g ∈ H¹(Ω). This implies that the functions a and 1/a now behave as multipliers in the space H¹(Ω). Furthermore, as long as a ∈ C¹(S), then ∂_n a is also a multiplier. The volume parametrix-based Newton-type potential and the remainder potential are respectively defined, for y ∈ R³, as

P g(y) := ∫_Ω P(x, y) g(x) dx,  R g(y) := ∫_Ω R(x, y) g(x) dx.

The parametrix-based single-layer and double-layer surface potentials are defined for y ∈ R³, y ∉ S, as

V g(y) := −∫_S P(x, y) g(x) dS(x),  W g(y) := −∫_S [ T_x P(x, y) ] g(x) dS(x).

We also define the following pseudo-differential operators associated with the direct values of the single- and double-layer potentials and with their conormal derivatives, for y ∈ S:

𝒱 g(y) := −∫_S P(x, y) g(x) dS(x),  𝒲 g(y) := −∫_S [ T_x P(x, y) ] g(x) dS(x),
𝒲′ g(y) := −∫_S [ T_y P(x, y) ] g(x) dS(x),  ℒ± g(y) := T±(W g)(y).

The operators P, R, V, W, 𝒱, 𝒲, 𝒲′ and ℒ can be expressed in terms of the volume and surface potentials and operators associated with the Laplace operator, as follows:

P g = P_∆(g/a),  (7.3)
R g = P_∆(∇ ln a · ∇g),  (7.4)
V g = V_∆(g/a),  (7.5)
𝒱 g = 𝒱_∆(g/a),  (7.6)
W g = W_∆ g − V_∆( (∂_n a / a) g ),  (7.7)
𝒲 g = 𝒲_∆ g − 𝒱_∆( (∂_n a / a) g ).  (7.8)

The symbols with the subscript ∆ denote the analogous operators for the constant-coefficient case, a ≡ 1; by the Lyapunov-Tauber theorem (cf. [15, 13] and more references therein), ℒ⁺ = ℒ⁻. Relations (7.3), (7.5), and (7.6) follow from the parametrix relation with the fundamental solution (5.2); a similar argument applies to the operators V and 𝒱, as they share the same integral kernel. The remainder relation (7.4) follows from expanding the expression (5.3) by applying the product rule. To obtain relations (7.7) and (7.8), we need to apply the product rule to the kernel T⁺P(x, y):

T_x P(x, y) = −( ∂_n a(x) / a(x) ) P_∆(x − y) + ∂_{n(x)} P_∆(x − y).  (7.12)

Multiplying by the density and integrating over S, we see that the first term in (7.12) leads to the harmonic single-layer potential term in (7.7), and the second term coincides with the harmonic double-layer potential term. The relations for the conormal derivatives of the potentials follow directly from applying the conormal derivative operator T⁺ to both sides of (7.5) and (7.7).
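As a consistency check on the formulas for P^x and R^x above (a short formal computation under the stated smoothness of a, added here for the reader and not taken verbatim from the source), one can apply A in the x variable to the parametrix and recover the defining relation (5.1):

A_x P^x(x, y) = ∇_x · ( a(x) ∇_x [ P_∆(x − y) / a(x) ] )
  = ∆_x P_∆(x − y) + ∇_x · ( a(x) P_∆(x − y) ∇_x a^{-1}(x) )
  = δ(x − y) − ∇_x · ( ∇(ln a)(x) P_∆(x − y) ),

since a ∇(a^{-1}) = −∇ ln a and ∆ P_∆(x − y) = δ(x − y). The second term is precisely R^x(x, y) of (5.3), which also explains why ∇ ln a appears as a multiplier in the mapping properties discussed below.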
These relations can be exploited to obtain mapping properties of the parametrix-based surface and volume potentials, taking into account the mapping properties already known for the analogous surface and volume potentials constructed with the fundamental solution of the Laplace equation. One of the main differences with respect to the bounded-domain case is that the integrands of the operators V, W, P and R and of their corresponding direct values and conormal derivatives do not always belong to L¹. In these cases, the integrals should be understood as the corresponding duality forms (or as the limits of these forms for infinitely smooth functions, existing due to the density in the corresponding Sobolev spaces).

Theorem 8.1. The following operators are continuous:

V : H^{-1/2}(S) → H¹(Ω),  (8.1)
W : H^{1/2}(S) → H¹(Ω).  (8.2)

Proof. Let us prove first the mapping property (8.1). From Theorem 7.1, we have that V g ∈ H¹(Ω) for g ∈ H^{-1/2}(S). Hence, it suffices to prove that V g ∈ L²(ω^{-1}; Ω). Differentiating using the product rule, we can write

∆(a h) = a ∆h + 2 ∇a · ∇h + h ∆a.  (8.3)

Taking into account relation (7.5) and applying (8.3) to h = V_∆(g/a), we get

∆(a V g) = a ∆V_∆(g/a) + 2 ∇a · ∇V_∆(g/a) + V_∆(g/a) ∆a.

By virtue of the mapping property of the operator V provided by Theorem 7.1, the middle term belongs to L²(ω; Ω), due to the fact that V_∆(g/a) = V g ∈ H¹(Ω), and thus its derivatives belong to L²(Ω); the term ∇a acts as a multiplier in this space due to Condition 6.1. On the other hand, the term a ∆V_∆(g/a) vanishes on Ω, since V_∆(·) is the single-layer potential for the Laplace equation, i.e., V_∆(g/a) is a harmonic function. This completes the proof for the operator V. The proof for the operator W follows from a similar argument.

Condition 8.1. In addition to Condition 3.1 and Condition 6.1, we will sometimes need the following condition:

∆a ∈ L^∞(Ω).  (8.5)

Remark 9. Note as well that, due to Condition 3.1 and the continuity of the function ln a, the components of ∇(ln a) and ∆(ln a) are bounded as well.

Theorem 9.1. The following operators are continuous under Condition 6.1:

P : H^{-1}(R³) → H¹(R³),  (9.1)
R : L²(ω^{-1}; R³) → H¹(R³).  (9.2)

Proof. Let g ∈ H^{-1}(R³). Then, by virtue of relation (7.3), P g = P_∆(g/a). Since Condition 6.1 holds, g/a ∈ H^{-1}(R³), and therefore the continuity of the operator P follows from the continuity of P_∆ : H^{-1}(R³) → H¹(R³), which at the same time implies the continuity of the operator (9.1); see [9, Theorem 4.1] and more references therein. Let us prove now the continuity of the operator R. Due to the second condition in (6.1), the components of ∇ ln a behave as multipliers. Let g ∈ L²(ω^{-1}; R³); then relation (7.4) applies and gives R g = P_∆(∇ ln a · ∇g), understood in the distributional sense, so that the argument of P_∆ belongs to H^{-1}(R³). As R g ∈ H¹(Ω), we only need to prove that ∆R g(y) ∈ L²(ω; Ω), which follows from relation (7.4) and the properties of the Newtonian potential P_∆.

10. Third Green identity. Applying the second Green identity (3.9) with the parametrix P^x(·, y) in place of v, one obtains the third Green identity: for u ∈ H^{1,0}(Ω; A),

u + R u − V T⁺u + W γ⁺u = P (Au) in Ω.  (10.2)

Taking the trace and the conormal derivative of (10.2), we obtain integral representation formulae for the trace and the traction of u, respectively (equations (10.3) and (10.4)). For some distributions f, Ψ and Φ, we consider a more general, indirect integral relation associated with the third Green identity (10.2):

u + R u − V Ψ + W Φ = P f in Ω.  (10.5)

Lemma 10.1. Let u ∈ H¹(Ω), f ∈ L²(ω; Ω), Ψ ∈ H^{-1/2}(S) and Φ ∈ H^{1/2}(S) satisfy (10.5). Then u ∈ H^{1,0}(Ω; A) and u solves the equation Au = f in Ω.

Proof. To prove that u ∈ H^{1,0}(Ω; A), taking into account that by hypothesis u ∈ H¹(Ω), it is only left to prove that Au ∈ L²(ω; Ω). Firstly, we write the operator A as follows:

A u = ∇ · (a ∇u) = ∆(a u) − ∇ · (u ∇a).

It is easy to see that the second term belongs to L²(ω; Ω). Keeping in mind Remark 9 and the fact that u ∈ H¹(Ω), we can conclude that the term u ∇a belongs to H¹(Ω), since, due to the second condition in (6.1), ∇a is a multiplier in the space H¹(Ω), and therefore ∇ · (u ∇a) ∈ L²(ω; Ω). Now, we only need to prove that ∆(a u) ∈ L²(ω; Ω).
To prove this, we look at the relation (10.5) and make u the subject of the formula. Then, we use the potential relations (7.3), (7.5) and (7.7) to obtain

u = P_∆(f/a) − R u + V_∆(Ψ/a) − W_∆ Φ + V_∆( (∂_n a / a) Φ ) in Ω.  (10.7)

By virtue of Theorem 9.2, R u ∈ L²(ω; Ω). Moreover, the terms in the previous expression depending on V_∆ or W_∆ are harmonic functions, and P_∆ is the Newtonian potential for the Laplacian, i.e. ∆P_∆(f/a) = f/a. Consequently, applying the Laplacian operator to both sides of (10.7), we obtain

∆u = f/a − ∆(R u).

Thus, ∆u ∈ L²(ω; Ω), from which it immediately follows that ∆(a u) ∈ L²(ω; Ω). Hence u ∈ H^{1,0}(Ω; A). The rest of the proof is equivalent to [25, Lemma 5.1].

The following statement is the counterpart of [25, Lemma 5.2] for exterior domains; its proof follows from the invertibility of the operator 𝒱_∆, see [18, Corollary 8.13].

Lemma 10.2. Let Ψ* ∈ H^{-1/2}(S). If

V Ψ*(y) = 0, y ∈ Ω,  (10.9)

then Ψ*(y) = 0.

Proof. Take the trace of (10.9) and use relation (7.5) to obtain 𝒱Ψ*(y) = 𝒱_∆(Ψ*/a)(y) = 0, y ∈ S. The invertibility of 𝒱_∆ then implies Ψ* = 0.

11. BDIES. In this section, we derive a system of boundary-domain integral equations with the boundary data formally segregated from the solution u of the BVP (4.1)-(4.3), following a similar approach as in [7, Section 5]. Consequently, we introduce Φ₀ ∈ H^{1/2}(S) and Ψ₀ ∈ H^{-1/2}(S) as continuous fixed extensions to S of the functions φ₀ ∈ H^{1/2}(S_D) and ψ₀ ∈ H^{-1/2}(S_N). Moreover, let φ ∈ H^{1/2}(S_N) and ψ ∈ H^{-1/2}(S_D) be arbitrary functions formally segregated from u. Then, substituting γ⁺u = Φ₀ + φ and T⁺u = Ψ₀ + ψ in the third Green identities (10.2)-(10.4), we obtain the following BDIES (M12):

u + R u − V ψ + W φ = F₀ in Ω,
(1/2) φ + γ⁺R u − 𝒱 ψ + 𝒲 φ = γ⁺F₀ − Φ₀ on S,

where F₀ := P f + V Ψ₀ − W Φ₀. In what follows, we denote by X = (u, ψ, φ) the vector of unknown functions, by M¹² the matrix operator that defines the left-hand side of the system (M12), and by F¹² the right-hand side of the system, F¹² = [ F₀, γ⁺F₀ − Φ₀ ]. Using this notation, the system (M12) can be rewritten in matrix form as M¹²X = F¹². If conditions (6.1) and (8.5) hold, then, due to the mapping properties of the potentials, F¹² ∈ 𝔽¹² ⊂ 𝕐¹², while the operators M¹² : ℍ → 𝔽¹² and M¹² : 𝕏 → 𝕐¹² are continuous. Here, ℍ and 𝕏 denote the spaces of unknowns, with H^{1,0}(Ω; A) and H¹(Ω) regularity of the component u, respectively, and 𝔽¹², 𝕐¹² the corresponding spaces of right-hand sides. The following result, Theorem 11.1, shows that the BDIES (M12) is equivalent to the original mixed BVP (4.1)-(4.3).

12. Representation theorems and invertibility. In this section, we aim to prove the invertibility of the operator M¹² : ℍ → 𝔽¹² by showing first that an arbitrary right-hand side from the respective spaces can be represented in terms of the parametrix-based potentials, and then using the equivalence theorems. The following result is the counterpart of [9, Lemma 7.1] for the new parametrix P^x(x, y); the analogous result for bounded domains can be found in [7, Lemma 3.5].

Lemma 12.1. Let F* ∈ H^{1,0}(Ω; ∆). Then there exists a unique pair (f*, Ψ*) ∈ L²(ω; Ω) × H^{-1/2}(S) such that

F* = P f* + V Ψ* in Ω.  (12.1)

Proof. Let us assume that such functions f* and Ψ*, satisfying (12.1), exist. Then, we aim to find expressions for these functions in terms of F*. Applying the potential relations (7.3) and (7.5) to equation (12.1), we obtain

F* = P_∆(f*/a) + V_∆(Ψ*/a) in Ω.  (12.2)

Applying the Laplace operator to both sides of equation (12.2), we get

∆F* = f*/a, i.e. f* = a ∆F*.  (12.3)

On the other hand, we can rewrite equation (12.2) as

V_∆(Ψ*/a)(y) = Q(y), y ∈ Ω,  (12.4)

where

Q(y) := F*(y) − P_∆(∆F*)(y).  (12.5)

Now, we take the trace of (12.4):

𝒱_∆(Ψ*/a)(y) = γ⁺Q(y), y ∈ S.  (12.6)

It is well known that the direct-value operator of the single-layer potential for the Laplace equation, 𝒱_∆ : H^{-1/2}(S) → H^{1/2}(S), is invertible (cf., e.g., [18, Corollary 8.13]). Hence, we obtain the following expression for Ψ*:

Ψ* = a 𝒱_∆^{-1}(γ⁺Q).  (12.7)

Relations (12.3) and (12.7) imply the uniqueness of the couple (f*, Ψ*). Now, we simply need to prove that the pair (f*, Ψ*) given by (12.7) and (12.3) satisfies (12.1). For this purpose, let us note that the single-layer potential V_∆(Ψ*/a), with Ψ* given by (12.7), as well as Q(y) given by (12.5), are harmonic functions. Since Q(y) and V_∆(Ψ*/a) are two harmonic functions that coincide on the boundary due to (12.6), they must be identical in the whole of Ω, due to the uniqueness of the solution to the Dirichlet problem for the Laplace equation.
For this purpose, let us note that the single layer potential operator, V ∆ (Ψ * /a) with Ψ * given by (12.7), as well as Q(y) given by (12.5) are both harmonic functions. Since Q(y) and V ∆ (Ψ * /a) are two harmonic functions that coincide on the boundary due to (12.6), then they must be identical in the whole Ω due to the uniqueness of solution to the Dirichlet problem for the Laplace equation, see [ Then there exists a unique triplet (f * , is a linear an bounded operator and (F 0 , F 1 ) are given by Proof. Taking Φ * = γ + F 0 −F 1 and applying the previous lemma to F * = F 0 +W Φ * we prove existence of the representation (13.1) and (13.2). The uniqueness follows from the homogenenous case when F 0 = F 1 = 0. Then, (13.2) implies Φ * = 0 and consequently, by (13.1) and Lemma 12.1, we get Ψ * = 0 and f * = 0. We are ready to prove one of the main results for the invertibility of the matrix operator of the BDIES (M12). Theorem 13.1. If Conditions (6.1) and (8.5) hold, then the following operator is continuous and continuously invertible: Proof. In order to prove the invertibility of the operator M 12 : H −→ F 12 , we apply the Corollary 13 to any right-hand side F 12 ∈ F 12 of the equation M 12 U = F 12 . Thus, F 12 can be uniquely represented as (f * , Ψ * , Φ * ) = C * F 12 as in (13.1)-(13.2) where C * : F 12 −→ L 2 (ω; Ω) × H −1/2 (S) × H 1/2 (S) is continuous. In virtue of the equivalence theorem for the system (M12), Theorem 11.1, and the invertibility theorem for the boundary value problem with mixed boundary conditions, Theorem 4.1, the matrix equation M 12 U = F 12 has a solution U = (M 12 ) −1 F 12 where the operator (M 12 ) −1 , is given by expressions where (f * , Ψ * , Φ * ) = C * F 12 . Consequently, the operator (M 12 ) −1 is a continuous right inverse to the operator (13.3). Moreover, the operator (M 12 ) −1 results to be a double sided inverse in virtue of the injectivity implied by Theorem 11.1. 14. Fredholm properties and invertibility. In this section, we are going to benefit from the compactness properties of the operator R to prove invertibility of the operator M 12 : X → Y 12 . This invertibility result is more general than the one presented in the previous section. The price to pay is imposing an additional condition on the variable coefficient. Unlike as in the bounded case, see similar to [9, Section 7.2], the Rellich compact embedding theorem, see e.g. [18,Theorem 3.27], cannot be applied as Ω is a bounded domain. Still, we can overcome this obstacle by decomposing the operator R into the sum of two operators: one which can be made arbitrarily small and the other one will be compact. Then, we shall simply make use of the Fredholm alternative to prove the invertibility of the matrix operator that defines the Proof. Let B(0, r) be the ball centered at 0 with radius r big enough such that S ⊂ B r . Furthermore, let χ ∈ D(R 3 ) be a cut-off function such that χ = 1 in S ⊂ B r , χ = 0 in R 3 B 2r and 0 ≤ χ(x) ≤ 1 in R 3 . Let us define by R c g := R(χg), R s g := R((1 − χ)g). We will prove first that the norm of R s can be made infinitely small. Let g ∈ H 1 (Ω), then Consequently, we have the following estimate: Using the previous estimate is easy to see that when r →+∞ the norm R s g H 1 (Ω) tends to 0 due to Conditions (14.1). Hence, the norm of the operator R s can be made arbitrarily small. To prove the compactness of the operator R c g := R(χg), we recall that supp(χ) ⊂ B(0, 2r). 
Then one can express R_c g = R_{Ω_r}([χg]|_{Ω_r}), where the operator R is now defined over Ω_r := Ω ∩ B_{2r}, which is a bounded domain. As the restriction operator |_{Ω_r} : H^1(Ω) → H^1(Ω_r) is continuous, by virtue of Theorem 9.1 the operator R_c : L^2(Ω_r) → H^1(Ω_r) is also continuous. Due to the boundedness of Ω_r, the weighted space on Ω_r coincides with the standard Sobolev space H^1(Ω_r), and thus the compactness of R_c follows from the Rellich theorem (see [18, Theorem 3.27]) applied to the compact embedding H^1(Ω_r) ⊂ L^2(Ω_r).

As a consequence, the operator I + R : H^1(Ω) → H^1(Ω) is Fredholm with zero index. Proof. Using the previous lemma, we write R = R_s + R_c with r chosen so that ‖R_s‖ < 1; hence I + R_s is invertible. On the other hand, R_c is compact, and hence I + R is a compact perturbation of the invertible operator I + R_s, from which the result follows.

Suppose now that U solves the BDIES (M12). Then U will also solve the following extended system (15.2); furthermore, every solution of the system (15.2) will solve the equation M^{12}_0 U = F. The system (15.2) can also be written in matrix form as M^{12}_0 U = F, where F denotes the right-hand side and M^{12}_0 is the corresponding matrix operator. We note that the three diagonal operators of M^{12}_0 are Fredholm with index zero, and that M^{12}_0 is a compact perturbation of M^{12}. Consequently, M^{12} is Fredholm with index zero. In addition, as the operator M^{12} is one-to-one, we conclude that it is also continuously invertible.

16. Conclusions. A new parametrix for the diffusion equation in non-homogeneous media (with variable coefficient) has been analysed in this paper. Mapping properties of the corresponding parametrix-based surface and volume potentials have been shown in weighted Sobolev spaces, under several regularity and decay conditions on the variable coefficient a(x). A BDIES for the original BVP has been obtained, the equivalence between the BDIES and the BVP has been shown, and the invertibility of the matrix operator defining the BDIES has been proved using Fredholm-alternative arguments that overcome the technicalities presented by unbounded domains. We have thus obtained an analogue of the BDIES (M12) of [9] for a new family of parametrices, and it is uniquely solvable; further investigation of the numerical advantages of using one family of parametrices over another will follow. Further generalisations can be obtained by relaxing the smoothness of the boundary to Lipschitz domains; in this case, one needs the generalised canonical conormal derivative operator defined in [19, 20]. Another possible generalisation could consider relaxing the smoothness of the coefficient, see [20].
Berkeley Supernova Ia Program III: Spectra Near Maximum Brightness Improve the Accuracy of Derived Distances to Type Ia Supernovae In this third paper in a series we compare spectral feature measurements to photometric properties of 108 low-redshift (z < 0.1) Type Ia supernovae (SNe Ia) with optical spectra within 5 d of maximum brightness. We find the pseudo-equivalent width (pEW) of the Si II 4000 line to be a good indicator of light-curve width, and the pEWs of the Mg II and Fe II complexes are relatively good proxies for SN colour. We also employ a combination of light-curve parameters (specifically the SALT2 stretch and colour parameters x_1 and c, respectively) and spectral measurements to calculate distance moduli. The residuals from these models are then compared to the standard model which uses only light-curve stretch and colour. Our investigations show that a distance model that uses x_1, c, and the velocity of the Si II 6355 feature does not lead to a decrease in the Hubble residuals. We also find that distance models with flux ratios alone or in conjunction with light-curve information rarely perform better than the standard (x_1, c) model. However, when adopting a distance model which combines the ratio of fluxes near ~3750 Ang. and ~4550 Ang. with both x_1 and c, the Hubble residuals are decreased by ~10 per cent, which is found to be significant at about the 2-sigma level. The weighted root-mean-square of the residuals using this model is 0.130 +/- 0.017 mag (as compared with 0.144 +/- 0.019 mag when using the same sample with the standard model). This Hubble diagram fit has one of the smallest scatters ever published, and the improvement is at the highest significance ever seen in such a study. Finally, these results are discussed with regard to how they can improve the cosmological accuracy of future, large-scale SN Ia surveys. [Abridged] INTRODUCTION Type Ia supernovae (SNe Ia) have been used in the recent past to discover the accelerating expansion of the Universe (Riess et al. 1998; Perlmutter et al. 1999), as well as to measure cosmological parameters with increasing accuracy and precision (e.g., Astier et al. 2006; Suzuki et al. 2012). In the most general terms, thermonuclear explosions of carbon/oxygen (C/O) white dwarfs (WDs) are thought to give rise to SNe Ia (e.g., Hoyle & Fowler 1960; Colgate & McKee 1969; Nomoto et al. 1984; see Hillebrandt & Niemeyer 2000 for a review). However, after decades of observations and theoretical work, a detailed understanding of both the SN progenitors and explosion mechanisms is still missing. In addition, very little is known about how differences in the initial conditions of SNe Ia give rise to the measured range of observables. A large, self-consistent dataset is needed in order to solve these problems. The ability to do precision cosmology using SNe Ia requires that one is able to calibrate or standardise their luminosity. Phillips (1993) showed a tight correlation between light-curve decline rate and luminosity at peak brightness for the majority of SNe Ia, the so-called "Phillips relation." However, the addition of spectral observations to the light-curve data complicates the picture far beyond the simple assumption underlying the Phillips relation. Many comparisons of spectral and photometric data of low-redshift SNe Ia have been performed in the past (e.g., Nugent et al. 1995; Benetti et al. 2005; Bongard et al. 2006; Hachinger et al. 2006; Arsenijevic et al. 2008; Walker et al. 2011; Nordin et al. 2011b; Blondin et al.
2011; Chotard et al. 2011). In addition, there has been similar work with SNe Ia at higher redshifts (e.g., Blondin et al. 2006; Altavilla et al. 2009; Nordin et al. 2011b; Walker et al. 2011). These studies were often aimed at finding a "second parameter" in SN Ia spectral or photometric data which would increase the accuracy of their distance measurements. Most of these previous studies utilised relatively small and heterogeneous datasets. The data studied here, in contrast, were self-consistently observed and reduced, and constitute one of the largest datasets to be analysed in this manner. Low-redshift (z ≲ 0.1, ⟨z⟩ ≈ 0.023) optical SN Ia spectra from the Berkeley Supernova Ia Program (BSNIP; Silverman et al. 2012) are used along with complementary photometric data, largely from Ganeshalingam et al. (2010). The spectral features have been accurately and robustly measured (BSNIP II; Silverman, Kong & Filippenko 2012), and the light curves have been fit using a variety of methods (Ganeshalingam et al., in preparation). We summarise both the spectral and photometric datasets used for this analysis in Section 2, and we describe our procedure for measuring spectral features, fitting light curves, and producing Hubble diagrams in Section 3. How these measured values correlate with each other and with previously determined classifications is presented in Section 4, along with our Hubble diagram results using various models for the distances to SNe Ia. We present our conclusions in Section 5, where the main results of our analysis are summarised and the most accurate and useful spectral indicators are discussed. Other, future BSNIP papers will expand on the analysis performed here with the addition of host-galaxy properties and late-time SN spectra. Spectral Data The same SN Ia spectral data are analysed in the current study as were used in BSNIP II. The spectra are all originally published in BSNIP I. Most of the spectra were obtained using the Shane 3 m telescope at Lick Observatory with the Kast double spectrograph (Miller & Stone 1993), and the typical wavelength coverage is 3300-10,400 Å with resolutions of ∼11 Å and ∼6 Å on the red and blue sides (crossover wavelength ∼5500 Å), respectively. See BSNIP I for more information regarding the observations and data reduction. In BSNIP II, we required that a spectrum be within 20 d (rest frame) of maximum brightness, and we a priori ignored the extremely peculiar SN 2000cx (e.g., Li et al. 2001), SN 2002cx (e.g., Li et al. 2003; Jha et al. 2006), SN 2005hk (e.g., Chornock et al. 2006; Phillips et al. 2007), and SN 2008ha (e.g., Foley et al. 2009; Valenti et al. 2009). BSNIP II contains 432 spectra of 261 SNe Ia with a "good" fit for at least one spectral feature. However, only a subset of these data are used in the current study since not all of them have reliable photometric observations and we are currently only considering spectra within 5 d of maximum brightness. It was shown in BSNIP II that the spectral measurements do not evolve significantly during these epochs. For the 11 SNe Ia that had more than one spectrum within 5 d of maximum brightness and photometric information, we only use the spectrum closest to the date of maximum in the current analysis. For our investigation using arbitrary ratios of fluxes (see Section 4.10) we also use spectra within 5 d of maximum brightness, even though previous studies only investigated spectra within 2.5 d of maximum (Bailey et al. 2009; Blondin et al. 2011).
When we employ spectra only in this narrower age range, our sample size for the flux-ratio analysis decreases from 62 to 38 objects. While our overall results are mostly unchanged when considering only spectra within 2.5 d of maximum brightness, their significance is weakened due to the smaller number of objects used. Adopting a larger age range than within 5 d of maximum increases the sample size only moderately and introduces much larger scatter in the spectral measurements (see BSNIP II). Photometric Data A majority of the SNe in our spectral sample were discovered as part of the Lick Observatory Supernova Search (LOSS). LOSS is a transient survey utilising the 0.76-m Katzman Automatic Imaging Telescope (KAIT) at Lick Observatory (Li et al. 2000; Filippenko et al. 2001; see also Filippenko, Li, & Treffers, in preparation). KAIT is a robotic telescope that monitors a sample of ∼15,000 galaxies in the nearby Universe (redshift z < 0.05) with the goal of finding transients within days of explosion. Fields are imaged every 3-10 d and compared to archived template images, after which potential new transients are flagged. These images are examined by human image checkers, and the best candidates are re-observed the following night. Candidates that are present on two consecutive nights are reported to the community using the International Astronomical Union Circulars (IAUCs) and the Central Bureau of Electronic Telegrams (CBETs). The statistical power of the LOSS sample is well demonstrated by the series of papers deriving nearby SN rates (Leaman et al. 2011; Li et al. 2011a,b). In addition to the SN search, KAIT monitors active SNe of all types in broad-band BVRI filters. The first data release of BVRI light curves for 165 SNe Ia, along with details about the reduction procedure, has been published by Ganeshalingam et al. (2010). In summary, point-spread function (PSF) fitting photometry is performed on images from which the host galaxy has been subtracted using templates obtained > 1 yr after explosion. Photometry is transformed to the Landolt system (Landolt 1983, 1992) using averaged colour terms determined over many photometric nights. Calibrations for each SN field are obtained on photometric nights, with an average of 5 calibrations per field. We also include SN Ia light curves obtained from the literature to maximise the overlap between our photometric and spectroscopic samples. We include 29 objects from the Calán-Tololo sample (Hamuy et al. 1996), 22 objects from the CfA1 sample (Riess et al. 1999), 44 objects from CfA2, and 185 objects from CfA3. In instances where we have data for the same SN from multiple samples, we use the light curve that is most densely sampled and best captures the light-curve evolution. We also include light curves for SNe 1999aw (Strolger et al. 2002), 1999ee, 2000bh, 2000ca, 2001ba (Krisciunas et al. 2004a), 2001bt, 2001cn, and 2001cz (Krisciunas et al. 2004b). Our final photometry sample consists of 335 multi-colour light curves, though we note that not all the objects in this sample have corresponding spectroscopy. Of the data within 5 d of maximum brightness investigated in BSNIP II, 115 SNe have light-curve width or colour information and are included in the present study. The redshift range spanned by this sample is 0 < z < 0.1. A complete list of these SNe Ia, their ages, spectral classifications, and spectral feature measurements can be found in BSNIP II. The photometric parameters of these objects are presented by Ganeshalingam et al.
(in preparation). Spectral Measurements The algorithm used to measure each of nine spectral features, and the features themselves, are described in detail in BSNIP II. Here we give a brief summary of the procedure. Each spectrum has its host-galaxy recession velocity removed and is corrected for Galactic reddening (according to the values presented in Table 1 of BSNIP I), and it is smoothed using a Savitzky-Golay smoothing filter (Savitzky & Golay 1964). If the signal-to-noise ratio (S/N) is larger than 6.5 pixel^-1 over the entire spectral range, we attempt to define a pseudo-continuum for each spectral feature. This is done by determining where the local slope changes sign on either side of the feature's minimum. Quadratic functions are fit to each of these endpoints, and the peaks of the parabolas (assuming that they are both concave downward) are used as the endpoints of the feature; they are then connected with a line to define the pseudo-continuum. We record the flux at the blue and red endpoints of the feature (F_b and F_r, respectively) as well as the pseudo-equivalent width (pEW; e.g., Garavini et al. 2007). Once a pseudo-continuum is calculated, a cubic spline is fit to the smoothed data between the endpoints of the spectral feature. From the wavelength at which the spline fit reaches its minimum (λ_min) the expansion velocity (v) is calculated. The flux is then normalised to the pseudo-continuum, and the relative depth of the feature (a) and its full-width at half-maximum (FWHM) are computed. Finally, every spectral feature in each spectrum is visually inspected by more than one person, and it is removed from the study if the spline fit and/or pseudo-continuum do not accurately reflect the spectral feature. Light-Curve Fitting A variety of methods have been developed to measure the photometric properties of SN Ia light curves. Here we describe the three different light-curve fitting methods adopted in this paper to characterise the SN Ia light curves. While the light-curve parameters derived from each of these methods are degenerate to some extent, it is useful to perform all three fitting techniques for the purpose of comparing our results to previous results in the literature. There are also cases, for certain spectroscopic subtypes, where one method is superior to the other two. Template and Polynomial Fitting Our most direct measurement of light-curve properties makes use of the template-fitting routine introduced by Prieto et al. (2006). For a given photometric bandpass, a set of template light curves is used to construct models which match the light-curve data. The model light curves are linear combinations of the template light curves using the weighting scheme described by Ganeshalingam et al. (2010). A χ²-minimisation fitting routine is used to determine the combination of templates that best fits the data. For band X, we measure the date of maximum brightness, the apparent peak magnitude (m_X), and the light-curve width, parametrised as the difference in magnitudes between maximum and fifteen days past maximum, ∆m15(X). We independently fit the B- and V-band light curves for the SNe in our sample. In instances where we have a well-sampled light curve but cannot achieve an acceptable fit with our template-fitting routine, we fit the data with a fourth-order polynomial. For both the template- and polynomial-fitting routines, we use a Monte Carlo routine to measure the uncertainty in our derived parameters.
We simulate realisations of our dataset by randomly perturbing each data point by its 1σ photometric error, assuming a Gaussian distribution centred at 0 mag. We fit each dataset realisation with our fitting routine and measure light-curve properties for that simulation. We estimate the uncertainty in our derived parameters to be the standard deviation over 50 dataset realisations. MLCS2k2 The Multi-colour Light Curve Shape (MLCS) distance-fitting software was first introduced by Riess et al. (1996) to simultaneously fit all light-curve data for a given SN Ia to produce a distance estimate. This method relies on the observation that more luminous SNe Ia have broader light curves (Phillips 1993) and also have bluer colours during the photospheric phase. MLCS parametrises light-curve width using the parameter ∆, which measures the difference in absolute magnitude of the SN with reference to a fiducial SN Ia. MLCS attempts to disentangle intrinsic colour variations from host-galaxy effects to also produce an estimate for host-galaxy extinction, A_V. MLCS2k2.v006 (referred to as simply MLCS2k2 for the rest of this work; Jha et al. 2007) is the most current publicly available implementation of this fitting routine. In comparison to the original version, it has an expanded set of training templates and improvements in the treatment of host reddening and K-corrections. For our analysis with MLCS2k2, we use the galactic line-of-sight prior, which models the distribution of host-galaxy extinction values as a decaying exponential with a peak value of 0 mag (Hatano et al. 1998). We also set the host-galaxy R_V = 1.7 based on the cosmological analysis of Hicken et al. (2009). Their analysis found that a lower host-galaxy R_V reduced the scatter in the Hubble diagram compared to a more typical Galactic value of R_V = 3.1. A fit using MLCS2k2 is considered reliable (and thus its parameters are used in the current analysis) only when the reduced χ² ≤ 1.6. SALT2 The Spectral Adaptive Light-curve Template (SALT) was first developed by the SuperNova Legacy Survey (SNLS; Guy et al. 2005). Calculating distances with SALT is a two-step process. SALT first measures light-curve parameters that are expected to correlate with the intrinsic brightness of individual SNe (i.e., light-curve width and colour). Then a model for the corrected apparent magnitude of a SN, m_B,corr, is adopted which applies linear corrections for the light-curve width and colour to the measured apparent magnitude, m_B. Thus, the corrected apparent magnitude has the form m_B,corr = m_B + α × (light-curve width term) − β × (colour term) (Equation 1). The constants α and β are found by minimising χ², using distance estimates from a large sample of SNe Ia compared to a cosmological model (see Section 3.3 for more details). SALT2 is an updated version of SALT, with an expanded training set of light-curve templates, and is the version implemented here. SALT2 is trained on light curves and spectra from low-z SNe compiled from the literature and high-z SNe from the first two years of the SNLS (Guy et al. 2007). SALT2 measures a parametrisation of the light-curve width (x1), the SN colour (c), and the apparent B-band magnitude at maximum light (m_B). In fitting our light curves, we exclude I-band data, which are not included in the SALT2 template set. We also exclude subluminous SNe Ia, often of the spectral subclass of SN 1991bg-like objects (e.g., Filippenko et al. 1992b; Leibundgut et al. 1993), since SALT2 was not developed to fit this particular subtype. This is achieved by using only SNe Ia with −3 ≤ x1 ≤ 2 (as in Blondin et al. 2011).
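As a concrete illustration of the Monte Carlo error procedure just described, the following sketch perturbs each photometric point by its own 1σ error and refits; the function names and the fourth-order polynomial fallback fitter are illustrative stand-ins for the actual BSNIP template-fitting pipeline, not the pipeline itself.

```python
import numpy as np

def poly4_fit(t, mag):
    # Fallback fitter mentioned above: a fourth-order polynomial fit
    # near maximum; returns the epoch and magnitude of peak brightness.
    coef = np.polyfit(t, mag, 4)
    tt = np.linspace(t.min(), t.max(), 2000)
    mm = np.polyval(coef, tt)
    i = np.argmin(mm)                      # brightest point = minimum mag
    return {"t_max": tt[i], "m_max": mm[i]}

def mc_uncertainties(t, mag, mag_err, fit_func=poly4_fit, n_real=50, seed=0):
    # Perturb every point by a Gaussian draw with its own 1-sigma error
    # (centred at 0 mag), refit each realisation, and report the standard
    # deviation of each derived parameter over the realisations.
    rng = np.random.default_rng(seed)
    params = [fit_func(t, mag + rng.normal(0.0, mag_err))
              for _ in range(n_real)]
    return {k: np.std([p[k] for p in params]) for k in params[0]}
```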
Finally, the results of SALT2 fits are utilised here only when the reduced χ² < 2. Hubble Diagrams In this section we present the methodology used to standardise SNe Ia for cosmological application. We use a model that applies linear corrections for light-curve width and colour. The width of a light curve correlates with the intrinsic luminosity, in the sense that SNe with broader light curves are also more luminous; this correlation has been well established (Phillips 1993). The colour parameter combines the effects of intrinsic colour variations and host-galaxy reddening. We use the SALT2 parameters x1 and c as the parametrisations of light-curve width and SN colour, respectively. We will also generalise this approach to allow for linear corrections using spectroscopic parameters. The distance modulus for each SN can be estimated from its redshift by µ(z) = 25 + 5 log10[D_L(z)], where D_L is the luminosity distance expressed in units of Mpc. The distance modulus including linear corrections for light-curve width and colour can be expressed as µ = m_B − M + α x1 − β c (Equation 2). The variables α, β, and M (the fiducial absolute magnitude of a SN Ia) are determined by using a custom version of cosfitter (A. Conley, 2011, private communication) based on the Minuit minimisation package (James & Roos 1975). The software minimises the χ² function (Equation 3), the sum over SNe of [µ_SN − µ(z)]²/(σ_m² + σ_pec² + σ_int²), where µ(z) is the distance modulus at the cosmic microwave background (CMB) rest-frame redshift z, σ_m is the measurement error in light-curve properties accounting for covariances between measured parameters, σ_pec is the uncertainty due to deviations from Hubble's law induced by gravitational interactions from neighbouring galaxies, and σ_int is a constant intrinsic scatter added to each SN to achieve a reduced χ² ≈ 1. We adopt 300 km s^-1 as the peculiar velocity for each SN. The intrinsic scatter, σ_int, can be considered as the uncertainty associated with a model that attempts to standardise SNe using the parameters x1 and c. Only objects with z_helio > 0.01 are used in the Hubble diagrams, in order to avoid including SNe with motions dominated by peculiar velocities. We also adopt the same Hubble diagram colour cut as Blondin et al. (2011), who exclude objects with c > 0.50. Finally, as mentioned above, all analysis using SALT2 fits is restricted to objects with a reduced χ² < 2. In this work we consider nearby SNe (median z_CMB ≈ 0.021) and are not attempting to find a best-fitting cosmology. A goal of this study is to combine photometric and spectroscopic properties such that SNe Ia become more accurate standardisable candles. We also aim to quantify the amount of improvement when using a variety of observed measurements. To that end, we adopt the standard ΛCDM cosmology with Ω_m = 0.27, Ω_Λ = 0.73, and w = −1 when calculating µ(z). Models for Predicting SN Ia Distances Here we describe the generalisation of our calculation of the distance modulus for each SN to allow for linear corrections using measured spectral parameters (such as velocity and pEW; see Sections 4.2 and 4.4) or the ratios of pEWs and fluxes (such as (Si II) and R; see Sections 4.5 and 4.10). We consider five models for predicting distances to SNe Ia using combinations of the SALT2-measured light-curve parameters (x1 and c) and spectral measurements: µ = m_B − M + γS (Equation 4); µ = m_B − M + α x1 + γS (Equation 5); µ = m_B − M − β c + γS (Equation 6); µ = m_B − M + α x1 − β c + γS (Equation 7); and µ = m_B − M + α x1 − β c (Equation 8). Here, S represents any spectral measurement: v, pEW, (Si II), R, etc.
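Written out in code, the five distance models reduce to one generic linear-correction formula plus the χ² of Equation 3; a minimal sketch, with all names illustrative and the minimisation itself (performed in the paper with cosfitter/Minuit) left abstract:

```python
import numpy as np

def mu_model(mB, M, x1=None, c=None, S=None,
             alpha=0.0, beta=0.0, gamma=0.0):
    # Generic corrected distance modulus. With S=None this is the
    # standard (x1, c) model (Equations 2/8); supplying S and gamma
    # gives the spectral variants (Equations 4-7), which keep or drop
    # the x1 and c terms as needed.
    mu = mB - M
    if x1 is not None:
        mu += alpha * x1
    if c is not None:
        mu -= beta * c
    if S is not None:
        mu += gamma * S
    return mu

def chi2(mu_corr, mu_z, sig_m, sig_pec, sig_int):
    # Equation-3-style chi-square: residuals against the LCDM distance
    # modulus mu(z), weighted by the measurement, peculiar-velocity and
    # intrinsic variances added in quadrature.
    return np.sum((mu_corr - mu_z)**2 /
                  (sig_m**2 + sig_pec**2 + sig_int**2))
```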
The last model included in our study was already mentioned above (Equation 2) and is the model usually adopted in cosmological studies of SNe Ia using only light-curve parameters (e.g., Astier et al. 2006; Kowalski et al. 2008; Hicken et al. 2009; Amanullah et al. 2010). In the following analysis the so-called (x1, c) model will be the one to which we compare the other cosmological models that include spectral information. Cross-Validation Ideally, with a sufficiently large sample, the predictive abilities of a model could be inferred by inspecting the dispersion of the residuals. However, for samples of limited size, the dispersion of the residuals is prone to statistical fluctuations and may not accurately reflect the true predictive nature of the model. Furthermore, for a fixed sample, one can always reduce the dispersion of the residuals by adding more variables to the model; however, it is not clear whether the added variables are actually improving the model itself or simply fitting the noise inherent in the observables. For analysing the predictive nature of a model, it is useful to perform some form of cross-validation (CV), in which a subset of the entire sample is used to train the model and another subset is used to validate the predictive ability of that model. Bailey et al. (2009) use a sample of 58 SNe, 28 of which are adopted as a training set to train a model while the other 30 are used as a validation set to assess the predictivity of the model. Using a smaller sample of 26 SNe, Blondin et al. (2011) use a K-fold CV method that allows all of the SNe to be used in the training and validation procedure. For this study, and following Blondin et al. (2011), we adopt K-fold CV with K = 10. We also tested CV with K = 2, 5, and 62 and find that our final results are mostly unchanged (however, see Section 4.10.2 for more on our K = 2 run). The basics of the procedure are best illuminated by an example, as follows. Let us begin with a sample of 60 SNe. 10-fold CV starts by randomly dividing the sample into 10 subgroups of 6 SNe each. The first subgroup is set aside; this will be our first validation set. We combine the remaining 9 subgroups and train our model on the 54 other SNe to determine the best-fitting parameters (i.e., α, β, γ, and M). Using the best-fitting parameters found with the training set, we apply our model to the validation set and calculate the Hubble residual for each of the 6 SNe in our validation set. We repeat this process using the second subgroup as the validation set and the union of the other 9 subgroups as the training set. This process is repeated a total of 10 times (once for each subgroup as a validation set) until we have calculated a residual for every SN in the sample. Comparing the Models As in Blondin et al. (2011), the dispersion in each model is estimated using the weighted root-mean-square of the residuals, WRMS = [Σ_s w_s ∆µ_s² / Σ_s w_s]^{1/2}, where the weights are given by w_s = 1/(σ_m,s² + σ_pec,s² + σ_int²). The variance in WRMS is estimated from the same weights, and the 1σ uncertainty is found by taking the square root of the variance. This is a more appropriate estimator of the dispersion in the model than simply taking the standard deviation of the residuals, since we are not guaranteed that the mean residual will be zero (Blondin et al. 2011). Following Blondin et al.
(2011), for each model which uses a spectral measurement (Equations 4-7) we also calculate the intrinsic prediction error (σ_pred), the intrinsic correlation (ρ_x1,c) of the residuals with the residuals from the (x1, c) model (Equation 8), and the difference (∆_x1,c) in intrinsic prediction error with respect to the (x1, c) model. An uncertainty can be computed for ∆_x1,c (Appendix B of Blondin et al. 2011), and thus the significance of the difference between a given model and the standard (x1, c) model can also be computed. This parameter is the most direct comparison of how much better (or worse) a model which utilises a spectral measurement is compared to the (x1, c) model, and of how significant the change is. Note that ∆_x1,c < 0 represents an improvement over the (x1, c) model. Benetti et al. (2005) defined the velocity gradient, v̇ = −∆v/∆t, as the "average daily rate of decrease of the expansion velocity" of the Si II λ6355 feature, and used this parameter to place each of their 26 SNe Ia into one of three categories. The high velocity gradient (HVG) group had the largest velocity gradients (v̇ ≳ 70 km s^-1 d^-1) and the low velocity gradient (LVG) group had the smallest velocity gradients. The third subclass (FAINT) had the lowest expansion velocities, yet moderately large velocity gradients, and consisted of subluminous SNe Ia with the narrowest light curves (∆m15(B) ≳ 1.6 mag). Velocity Gradients Even though the BSNIP sample is not well suited to velocity-gradient measurements (the average number of spectra per object is ∼2, as shown in BSNIP I), we are still able to calculate a v̇ value for many of our SNe Ia. Figure 1 shows 44 objects and their v̇ measurements plotted against their ∆m15(B) values. Figure 1. v̇ versus ∆m15(B) (top) and a close-up view of the low-v̇ objects (bottom). Blue points are high-velocity (HV) objects, red points are normal-velocity objects, and black points are objects for which we could not determine whether the SN was normal or high velocity (see BSNIP II for further details regarding how HV SNe are defined). Stars are FAINT objects, squares are low velocity gradient (LVG) objects, and triangles are high velocity gradient (HVG) objects (see BSNIP II for further details regarding how these subclasses are defined). The points are colour-coded by their near-maximum Si II λ6355 velocity, with red points representing normal-velocity objects and blue points representing high-velocity (HV) objects. These so-called "Wang types" were first presented by Wang et al. (2009), and in BSNIP II we discuss our definition of these subclasses in more detail. The data are also shape-coded by the aforementioned "Benetti types" (FAINT are stars, LVG are squares, and HVG are triangles). The top panel of Figure 1 shows all objects for which both v̇ and ∆m15(B) are measured, while the bottom panel shows a close-up view of the same data such that the axis ranges match those of Figure 3b of Benetti et al. (2005). As a result of the definitions of the different subclasses, the FAINT SNe are found to the right in Figure 1 (i.e., at large ∆m15(B) values) and the HVG SNe are found in the upper part of the figure (i.e., at large v̇ values), though there is no obvious break between the various classes. As pointed out in BSNIP II and confirmed in Figure 1, the HVG and LVG objects have similar average ∆m15(B) values and ranges of values. It was stated by Benetti et al. (2005) that v̇ is weakly correlated with ∆m15(B), though there is no evidence of such a correlation in the BSNIP data.
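A rough sketch of how v̇ and the Benetti classes can be computed from a velocity time series; the thresholds below are the approximate values quoted above, not the exact published criteria:

```python
import numpy as np

def velocity_gradient(t, v):
    # Average daily rate of decrease of the Si II 6355 velocity,
    # vdot = -dv/dt, estimated from the least-squares slope over all
    # epochs with a measured velocity (t in rest-frame days, v in km/s).
    return -np.polyfit(t, v, 1)[0]

def benetti_class(vdot, dm15):
    # Illustrative cuts only: FAINT objects have the narrowest light
    # curves (dm15 >~ 1.6 mag); of the rest, vdot >~ 70 km/s/day marks
    # HVG and lower gradients mark LVG. The published classification
    # also takes the expansion velocities themselves into account.
    if dm15 >= 1.6:
        return "FAINT"
    return "HVG" if vdot >= 70.0 else "LVG"
```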
Benetti et al. (2005) also claim that there are three distinct families of SNe Ia (LVG, HVG, and FAINT), based partially on their plot of v̇ versus ∆m15(B). With almost 70 per cent more objects, the BSNIP data fill in this parameter space and cast serious doubt on the existence of truly distinct families of SNe Ia based on velocity-gradient measurements (see Table 4 of BSNIP II for the median values of v̇ and ∆m15(B) for each of these three subclasses). Velocities at Maximum Brightness Expansion velocities of SNe Ia are calculated from the minima of various absorption features (see BSNIP II for more information on how this measurement is performed on the data presented here). These velocities have been compared to light-curve width measurements and photometric colours in a variety of ways (e.g., Hachinger et al. 2006; Blondin et al. 2011; Nordin et al. 2011b). As discussed in BSNIP II, Hachinger et al. (2006) interpolate/extrapolate their expansion velocities to the time of maximum brightness (i.e., t = 0 d), and v0 was defined as the expansion velocity of Si II λ6355 at maximum brightness. They then compare these velocities to the light-curve shape parameter ∆m15(B). Figure 2 presents the 44 SNe in the BSNIP data for which both v0 and ∆m15(B) are calculated. As above, the points are colour-coded by "Wang type" and shape-coded by Benetti type. As in Figure 1, the FAINT objects are (by definition) found at the right-hand edge of Figure 2. Similarly, the HV objects are all in the upper half of the figure. All of the objects, except for a few of the highest-velocity SNe and the object with the lowest velocity in the figure, are within ∼1500 km s^-1 of 11,000 km s^-1. This is remarkably similar to what was found by Hachinger et al. (2006), except that the BSNIP data have a slightly larger scatter around the typical velocity of ∼11,000 km s^-1. Hachinger et al. (2006) also note that the majority of the scatter in velocity comes from the HVG objects, which is also true in Figure 2. However, there is a major difference between the two results. The BSNIP data in Figure 2 show that HVG SNe have a huge range of v0, spanning from well above average to significantly below average values, whereas the data presented by Hachinger et al. (2006) exhibit evidence of the (oft-quoted) one-to-one relationship between HVG and HV SNe Ia. As mentioned in BSNIP II, while most HVG objects are found to have expansion velocities above those of the LVG objects, this is not an exclusive feature of HVG SNe Ia. A relationship between the calculated intrinsic "pseudo-colour" Bmax − Vmax (i.e., the B-band magnitude at B-band maximum minus the V-band magnitude at V-band maximum, corrected for host-galaxy reddening) and v0 has been seen previously. Figure 3 shows 39 BSNIP SNe for which we measure v0 and the difference between the observed B-band magnitude and the observed V-band magnitude at the time of B-band maximum brightness (what we refer to as (B − V)max in this work). We opt to use the observed (B − V)max since it is an actual, physical colour of the SN at a discrete period of time, though keep in mind that in this study we do not attempt to correct for host-galaxy reddening. Following previous work, we present only SNe Ia with (B − V)max < 0.319 mag in Figure 3. The linear least-squares fit to all of the points is shown by the solid line. We see effectively no evidence for any overall correlation (Pearson correlation coefficient of 0.17), and the linear fit seems to be driven solely by the two outliers with (B − V)max > 0.25 mag.
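The Pearson coefficients and least-squares fits quoted here and below can be reproduced with a few lines; in this sketch the clipping argument mimics the outlier removal discussed next (all names illustrative):

```python
import numpy as np

def pearson_r(x, y):
    # Pearson correlation coefficient used throughout this section.
    x, y = np.asarray(x), np.asarray(y)
    xm, ym = x - x.mean(), y - y.mean()
    return np.sum(xm * ym) / np.sqrt(np.sum(xm**2) * np.sum(ym**2))

def linear_fit_and_r(colour, v0, clip=None):
    # Unweighted least-squares line (the solid/dotted lines in Figure 3),
    # optionally dropping points redder than `clip` to test how strongly
    # the outliers drive the fit.
    colour, v0 = np.asarray(colour), np.asarray(v0)
    keep = np.ones(colour.size, bool) if clip is None else colour <= clip
    slope, intercept = np.polyfit(colour[keep], v0[keep], 1)
    return slope, intercept, pearson_r(colour[keep], v0[keep])
```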
If the two outliers with (B − V)max > 0.25 mag are removed, the correlation coefficient drops to 0.05 (the best-fitting line to the remaining data is shown by the dotted line). Finally, the dashed line is the relationship between v0 and Bmax − Vmax from the model spectra of Kasen & Plewa (2007), as shown by Foley & Kasen (2011). When including all of the data in Figure 3, the correlation between v0 and (B − V)max is weaker than what was seen previously, where correlation coefficients of 0.28 and 0.39 were derived for two different datasets. However, no correlation is present whatsoever if the two significant outliers are removed. The difference between the results shown here and those earlier results is likely due to the fact that we do not correct the BSNIP colours for any possible host-galaxy reddening, while those studies attempt to convert the observed colours into intrinsic colours. We will delve deeper into this colour conversion in future BSNIP studies (Ganeshalingam et al., in preparation). Velocities Near Maximum Brightness If, instead of v0, we plot the actual measured velocity of the Si II λ6355 feature for each object having a spectrum within 5 d of maximum brightness versus ∆m15(B), the same basic trends are seen (but with nearly twice as many data points). A comparison of the velocity of the Si II λ5972 feature (within 5 d of maximum brightness) with ∆m15(B) yields nearly identical results. The biggest difference is that the velocities are clustered around 10,300 km s^-1, lower than that of the Si II λ6355 feature. This difference between these features has been seen in previous studies as well (Hachinger et al. 2006). The same analysis using the velocity of the S II "W" once again shows the same behaviour, but with an even lower typical velocity (∼9000 km s^-1). This has also been pointed out in earlier work (Hachinger et al. 2006). The velocity of the S II "W" feature is further discussed below. Blondin et al. (2011) present a Hubble diagram that is corrected by the SALT2 light-curve width parameter x1 and colour parameter c in addition to the velocity of the Si II λ6355 feature. This yielded approximately a 10 per cent decrease in the scatter of their Hubble diagram. It has also been shown that the Si II λ6355 velocity is uncorrelated with both x1 and c, and thus it gives information beyond light-curve width and colour. However, the anticorrelation of this velocity with Hubble residuals (corrected for light-curve width and colour) is relatively small (Blondin et al. 2011).
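Distance models such as this one are evaluated throughout this paper with the 10-fold cross-validation and WRMS machinery of Section 3.3; a compact sketch of that machinery, in which the train and predict callables stand in for the cosfitter-based χ² minimisation:

```python
import numpy as np

def wrms(res, sig_tot):
    # Weighted RMS of Hubble residuals with weights w = 1/sig_tot**2,
    # where sig_tot combines the measurement, peculiar-velocity and
    # intrinsic terms in quadrature.
    w = 1.0 / sig_tot**2
    return np.sqrt(np.sum(w * res**2) / np.sum(w))

def kfold_residuals(data, train, predict, K=10, seed=0):
    # K-fold cross-validation as described in Section 3.3: train the
    # distance model on K-1 folds, compute out-of-sample residuals on
    # the held-out fold, and repeat so every SN receives a residual.
    # `train(subset)` should return best-fitting (alpha, beta, gamma, M);
    # `predict(subset, pars)` should return mu_model - mu(z). `data`
    # is assumed to be an indexable NumPy array of SN records.
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(data)), K)
    res = np.empty(len(data))
    for k in range(K):
        valid = folds[k]
        tr = np.concatenate([folds[j] for j in range(K) if j != k])
        res[valid] = predict(data[valid], train(data[tr]))
    return res
```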
The velocity of the Si II λ6355 feature as measured from the BSNIP data is uncorrelated with x1 (correlation coefficient −0.21). This velocity is also uncorrelated with c, as seen in Blondin et al. (2011). Removing SNe with c > 0.5 from the BSNIP data (which is done when Hubble diagrams are produced using this parameter) yields a correlation coefficient of 0.13 between Si II λ6355 velocity and c. The bottom panel of Figure 4 shows the velocity of the Si II λ6355 feature versus the (x1, c)-corrected Hubble residuals. If the correction term in a given model (here, the velocity of Si II λ6355) is well correlated with the SALT2corrected Hubble residuals, then the extra term is likely providing new information that is actually in the data. Thus, the model is improving the fit not by fitting to noise, but to physical information contained in the data. However, the correction term here is uncorrelated with the residuals (cor- No improvement is found (i.e., the WRMS increases) when adding the Si II λ6355 velocity to the standard (x1, c) model (∆x 1 ,c = 0.0083±0.0084). Blondin et al. (2011) found that there was a 10 per cent decrease in the WRMS when using the x1, c, and Si II λ6355 velocity, but their ∆x 1 ,c is consistent with 0 (as is ours). They also find only a "modest" correlation between Si II λ6355 velocity and (x1, c)-corrected residuals (correlation coefficient 0.4; Blondin et al. 2011). Therefore, it seems that adding the Si II λ6355 velocity to the standard (x1, c) model does not significantly improve the precision of SN Ia distance calculations. In Figure 5 we plot all measured velocities of the Si II λ6355 feature (within 5 d of maximum brightness) against (B − V ) max . This differs from Figure 3 since the velocities in that plot were interpolated/extrapolated to t = 0 d (i.e., v0) and this plot shows the actual velocities measured from the Figure 4. The velocity of the Si II λ6355 feature versus SALT2 light-curve width parameter x 1 (top), SALT2 colour parameter c (middle), and Hubble residuals corrected for light-curve width and colour (bottom). Blue points are HV objects, red points are normal-velocity objects, and black points are objects for which we could not determine whether the SN was normal or high velocity. Squares are Ia-norm, upward-pointing triangles are Ia-91T, downward-pointing triangles are Ia-99aa, and circles are objects which do not have a SNID subtype (see BSNIP I for further details regarding how these subclasses are defined). spectra. The top panel displays all 77 SNe from the BSNIP data for which both of these values have been measured and the bottom panel provides a close-up view of objects with (B − V ) max < 0.319 mag (in order to match the sample fit by . The linear least-squares fit to all of the points is shown by the solid line and the fit to SNe with (B − V ) max < 0.319 mag is shown by the dotted line; the correlation coefficients are 0.23 and 0.30, respectively, and slightly lower than the value found by , 0.39 (though the linear fit shown here matches well to what was found in their study). As with Figure 3, there is only marginal evidence for a correlation and there is a large amount of scatter around the linear fit. Again, this is unsurprising since we are measuring observed (B − V ) max and are plotting pseudo-colours that have been corrected for hostgalaxy reddening. 
The typical (B − V)max for HV objects is larger than for Ia-norm and Ia-91T/99aa objects, but smaller than that of the Ia-91bg objects (which appear in the figure as highly reddened, significant outliers). However, the range of (B − V)max spanned by Ia-norm and HV objects is similar, with a significant amount of overlap. The scatter in (B − V)max of the HV objects (0.098 mag) is effectively equal to that of the other objects in the bottom panel of Figure 5 (0.101 mag). While this matches the scatter in the HV objects found previously (0.095 mag), it differs from the SNe with velocities < 11,800 km s^-1, which were found to have a smaller intrinsic colour scatter (0.072 mag). The correction for host-galaxy reddening applied in that study is likely responsible for the decrease in colour scatter of the normal-velocity objects. The dashed line in Figure 5 is the relationship between v0 and Bmax − Vmax from the model spectra of Kasen & Plewa (2007), as shown in Figure 8 of Foley & Kasen (2011). Interestingly, even though that study plots intrinsic colours and we plot observed colours, both studies match these predictions very well. One difference between Figure 5 and Figure 8 of Foley & Kasen (2011) is that the BSNIP data contain a handful of objects that are extremely reddened (i.e., they have relatively large values of (B − V)max). However, this can easily be explained. All of the Ia-norm and HV SNe in Figure 5 with (B − V)max > 0.31 mag have been observed to have significant reddening from their host galaxies (which is not taken into account in the models of Kasen & Plewa 2007). The other objects with (B − V)max > 0.31 mag are Ia-91bg, which were also not discussed in the models of Kasen & Plewa (2007). Figure 8 of Foley & Kasen (2011) also presents the theoretical relationship between the velocity of the Ca II H&K feature and intrinsic (B − V)max. A comparison of this velocity and the observed (B − V)max measured from the BSNIP data shows effectively no correlation, with a correlation coefficient of only 0.14. Again, the scatter in (B − V)max is similar for the HV and normal-velocity objects. However, this is unsurprising since, as pointed out in BSNIP II, the Ca II H&K velocities of HV and normal-velocity objects (determined using the Si II λ6355 velocity) are highly overlapping. A distance model involving x1, c, and the velocity of the Ca II H&K feature was calculated, and while the WRMS technically decreased with the addition of this velocity, the change was not found to be significant (∆_x1,c = −0.0085 ± 0.0141). Figure 5. The velocity of the Si II λ6355 feature versus (B − V)max. Blue points are HV objects, red points are normal-velocity objects, and black points are objects for which we could not determine whether the SN was normal or high velocity. Squares are Ia-norm, stars are Ia-91bg, upward-pointing triangles are Ia-91T, downward-pointing triangles are Ia-99aa, and circles are objects which do not have a SNID subtype (see BSNIP I for further details regarding how these subclasses are defined). The solid line is the fit to all of the data, while the dotted line is the fit only to objects with (B − V)max < 0.319 mag. The dashed line is the relationship between v0 and Bmax − Vmax from the model spectra of Kasen & Plewa (2007), as shown by Foley & Kasen (2011). Other models involving the Ca II H&K velocity were all found to degrade the accuracy of distance measurements when compared to the standard (x1, c) model. Furthermore, we used Equations 4-7 along with the velocities of all seven spectral features for which velocities were measured, and compared the results to the (x1, c) model.
The vast majority of these models predicted a larger scatter than the standard model corrected for light-curve width and colour. However, both the O I λ7773 triplet and the Ca II near-IR triplet, when combined with x1 and c, were found to perform as well as the model using just x1 and c. Thus, adding either of these velocities did not degrade the distances calculated, but it did not significantly improve them either. On the other hand, the velocity of the S II "W," when used in conjunction with x1 and c, decreased the WRMS by ∼3 per cent and σ_pred by ∼14 per cent, at the 1.8σ level (∆_x1,c = −0.0119 ± 0.0066). Figure 6 shows the 64 SNe Ia in the BSNIP sample which have SALT2 fits and measured S II "W" velocities within 5 d of maximum brightness. The velocities are again plotted against x1, c, and Hubble residuals corrected for light-curve width and colour (only for SNe that are part of the Hubble diagram). Neither x1 nor c shows any correlation with the velocity of the S II "W" (correlation coefficients of 0.13 and 0.17, respectively). Even when removing SNe with c > 0.5 from the BSNIP data (as done for the Hubble diagrams), the correlation coefficient becomes only −0.15. The bottom panel of Figure 6 shows the velocity of the S II "W" versus the (x1, c)-corrected Hubble residuals. The correction term is uncorrelated with the residuals (correlation coefficient 0.003). Also shown, as the grey band, is the WRMS for both models. While the relative depth of this feature has been seen to improve Hubble diagrams (Blondin et al. 2011, and Section 4.3 of this work), the velocity of this feature has not previously been shown to do so. When adding the velocity of the S II "W" feature to the standard (x1, c) model, the overall decrease in WRMS is relatively small, but the effect appears to be fairly significant. This distance model should be explored further using future, larger datasets. Relative Depths The depth of the bluer absorption of the S II "W" feature relative to the pseudo-continuum has been shown to decrease the scatter of Hubble residuals by about 10 per cent (Blondin et al. 2011). The relative depth (a) of this feature was found to be uncorrelated with both x1 and c by Blondin et al. (2011), and its correlation with Hubble residuals (corrected for light-curve width and colour) is relatively small. Figure 8 presents the 64 BSNIP SNe Ia which have SALT2 fits and measured relative depths of the redder absorption of the S II "W" feature within 5 d of maximum brightness. The depths are plotted against x1, c, and Hubble residuals corrected for light-curve width and colour (for objects that are in the Hubble diagram). It should be noted that whereas Blondin et al. (2011) measure the bluer absorption of this feature (λ5454), we measure only the redder absorption (λ5624) in BSNIP II. While these absorptions are separated by < 200 Å, there may be differences between the relative depths of the two. Nevertheless, we will compare the results presented here to those of Blondin et al. (2011), with the caveat that this may be analogous to comparing "Red Delicious apples" to "Granny Smith apples." The BSNIP data show a fairly significant correlation between the relative depth of the S II "W" and x1 (a correlation coefficient of −0.47, which is significant at the > 3σ level). This is in stark contrast to Blondin et al. (2011), who found no evidence of such a correlation (correlation coefficient −0.04).
As opposed to x1, both studies agree that c is uncorrelated with the relative depth a of the S II "W" feature (the BSNIP data having a correlation coefficient of 0.09). The Hubble residuals corrected for x1 and c show weak evidence of a correlation with the relative depth of the S II "W" (correlation coefficient 0.29). Blondin et al. (2011) found that a model which includes x1, c, and a of the S II "W" will decrease the WRMS by 10 per cent, while we find the WRMS to be effectively unchanged whether or not one adds in the relative depth of the S II "W" (∆_x1,c = 0.0012 ± 0.0047). All other spectral features' relative depths were used, along with Equations 4-7, to create Hubble diagrams. No model significantly decreased the residuals over the standard (x1, c) model. Models involving the relative depth of Ca II H&K, Si II λ6355, and the O I triplet, each in combination with x1 and c, were found to be as accurate as the (x1, c) model. Pseudo-Equivalent Widths Nordin et al. (2011b) fit the temporal evolution of their pEW measurements to attempt to "remove" the age dependence of the pEW values. To do this, an epoch-independent quantity called the "pEW difference" (∆pEW) was defined; it is simply the measured pEW minus the expected pEW at the same epoch using the linear or quadratic fit. In BSNIP II we calculated ∆pEW for the BSNIP sample. However, the relationships seen in BSNIP II involving ∆pEW values were also seen when simply using the pEW values (within 5 d of maximum brightness). This is due to the fact that pEWs do not evolve much within a few days of maximum. Furthermore, the ∆pEW values rely on defining a fit to the measurements, which adds another assumption to the analysis. Thus, the current study will focus solely on pEW values within 5 d of maximum brightness and will not further investigate ∆pEW values. Note that comparisons to Nordin et al. (2011b) will still be made, despite the fact that their study uses ∆pEW values almost exclusively. Si II λ4000 The pEW of the Si II λ4000 feature has recently been found to be an indicator of light-curve width due to its relatively tight anticorrelation with the SALT2 x1 parameter (Arsenijevic et al. 2008; Walker et al. 2011; Blondin et al. 2011; Nordin et al. 2011b; Chotard et al. 2011). Curiously, in BSNIP II, only a weak correlation was found between this pEW and another often-used SN Ia luminosity indicator, the so-called "Si II ratio," (Si II) (originally defined by Nugent et al. 1995; see also BSNIP II and Section 4.5 for more information on this spectral parameter). Here the pEW of Si II λ4000 is compared directly to photometric parameters. In Figure 9 we present the 57 BSNIP SNe which have SALT2 fits and measured pEW values for the Si II λ4000 feature within 5 d of maximum brightness. The pEWs are plotted against x1, c, and Hubble residuals corrected only for colour (for SNe Ia that are used when constructing the Hubble diagram). The pEW of Si II λ4000 is highly correlated with x1 (Figure 9, top plot), with a correlation coefficient of −0.86 (which is significant at > 3σ). This is in agreement with many previous studies, and it is actually a stronger correlation than has been seen before (e.g., Arsenijevic et al. 2008; Walker et al. 2011; Blondin et al. 2011; Nordin et al. 2011b; Chotard et al. 2011).
The least-squares linear fit to the data is shown as the solid line in the top plot of Figure 9. Nordin et al. (2011b) plotted light-curve width against the pEW of Si II λ4000 and coded each low-redshift point based on its Benetti type; they found that the FAINT objects fell below the linear relationship (i.e., they had smaller than expected pEW values). In BSNIP II it was shown that FAINT (and, similarly, Ia-91bg) objects have, if anything, larger than average pEW values (especially for the Si II λ4000 feature). Figure 9 shows no Ia-91bg objects since SALT2 is unable to fit that spectral subtype. However, when the BSNIP values of x1 are plotted against the pEW of Si II λ4000 and coded by Benetti type, the two FAINT objects fall at the upper-left end of the linear correlation. The cause of this discrepancy between the two studies is unclear. The middle plot of Figure 9 shows no real evidence that the pEW of the Si II λ4000 feature is correlated with c. The correlation coefficient we find for all of the objects is 0.095, and when removing objects with c > 0.5 the coefficient only increases to 0.20. This is slightly smaller than what was found by Blondin et al. (2011), and significantly smaller than what was found by Nordin et al. (2011a). While the former claim no observed correlation, the latter do claim that the pEW of Si II λ4000 is correlated with c. The bottom panel of Figure 9 shows the Hubble residuals, corrected only for SALT2 colour, versus the pEW of Si II λ4000. Blondin et al. (2011) saw a relatively weak correlation between these parameters and found that a distance model involving c and the pEW of the Si II λ4000 feature led to a "marginal improvement" over the standard (x1, c) model. We find a strong correlation (coefficient of 0.81, significant at > 3σ), and the (c, Si II λ4000 pEW) model performs nearly as well as the (x1, c) model (∆_x1,c = 0.012 ± 0.036). Thus, the BSNIP data are in agreement with the finding of Blondin et al. (2011) that the pEW of the Si II λ4000 feature is essentially a replacement for the x1 parameter and is an accurate measurement of light-curve width. Figure 10 shows the Hubble diagram residuals for the (c, Si II λ4000 pEW) model and the colour-corrected-only model versus redshift, with the WRMS for each model shown as the grey band. Interestingly, we found that a model involving c, x1, and the pEW of the Si II λ4000 feature actually leads to a ∼10 per cent decrease in WRMS and a ∼28 per cent decrease in σ_pred. For this model, ∆_x1,c = −0.026 ± 0.015, which implies that the improvement has a significance of about 1.8σ. The correlation between the pEW of Si II λ4000 and the Hubble residuals corrected for colour and light-curve width is only 0.20, and thus perhaps the combination of c, x1, and the pEW of the Si II λ4000 feature is not actually adding much new information. However, the utility of a model including all three of these parameters should be investigated with other SN samples. Figure 11 contains Hubble residuals for the (x1, c, Si II λ4000 pEW) model as well as the standard (x1, c) model (using the same set of objects) versus redshift. Also shown, as the grey band, is the WRMS for each model. While the above investigation focused on the SALT2 light-curve fitter, we can also investigate correlations between the pEW of the Si II λ4000 feature and photometric parameters from MLCS2k2. Figure 12 shows the 63 BSNIP SNe which have MLCS2k2 fits and measured pEW values for the Si II λ4000 feature within 5 d of maximum brightness. The pEWs are plotted against ∆ and A_V.
The correlation coefficient between the pEW of Si II λ4000 and ∆ is 0.74, which is larger than previously observed (Nordin et al. 2011b) and implies that these parameters are highly correlated (at a significance > 3σ). This is expected based on the high degree of correlation between the pEW of Si II λ4000 and x1, since both ∆ and x1 are measurements of the width of SN Ia light curves. Figure 10. Hubble diagram residuals versus z_CMB for the (c, Si II λ4000 pEW) model (top) and the colour-corrected-only model (bottom). The grey band is the WRMS for each model. Colours and shapes of data points are the same as in Figure 5. In Figures 12 and 9 the Ia-99aa objects lie at the extreme low-pEW end of the relationship, though perhaps they are slightly systematically below the trend in the top panel of Figure 12. On the other hand, the Ia-91bg objects in the top panel of Figure 12 fall significantly below the linear trend. As in Nordin et al. (2011b), there is no significant correlation between the pEW of Si II λ4000 and A_V, even when objects with A_V > 0.5 mag are removed (correlation coefficients of < 0.13 in both cases using the BSNIP data). In all of the plots presented in this section, HV and Ia-norm objects overlap significantly. This is expected since in BSNIP II it was shown that the pEW of the Si II λ4000 feature is extremely similar for these two subclasses. We also note that the plot of ∆m15(B) versus the pEW of Si II λ4000 looks nearly identical to the top panel of Figure 12 and has a larger correlation coefficient of 0.85 (again, significant at > 3σ). Fe II and Mg II Nordin et al. (2011b) found that the ∆pEW of Fe II within 3 d of maximum brightness is well correlated with SALT colour. In Figure 13 we present the 63 SNe in the BSNIP sample which have SALT2 fits and a measurement of the pEW of the Fe II complex within 5 d of maximum brightness. The two observables plotted have a correlation coefficient of 0.49 (with significance > 3σ), increasing slightly for objects with c < 0.5. If, however, only spectra within 3 d of maximum brightness are used, the correlation increases slightly but the sample size decreases by nearly one-quarter. The strength of this correlation is slightly higher than that found by Nordin et al. (2011b), though we point out that they used SALT colour while we use SALT2 colour, and they used ∆pEW (i.e., the difference between the measured pEW and the average pEW evolution) while we use the actual measured pEW. Ia-99aa objects appear to have typical values of both the pEW of Fe II and c, though there are very few objects of this spectral subtype. On the other hand, HV SNe seem to be both redder and to have larger pEWs, but, as before, there is significant overlap with Ia-norm objects as well. The pEW of the Mg II complex is also correlated with c, as seen in Figure 14. In that figure, 64 SNe within 5 d of maximum brightness are plotted, and they have a correlation coefficient of 0.44 (for objects with c < 0.5), with the significance of the correlation being ∼3σ. Once again, when using only spectra within 3 d of maximum brightness the correlation becomes slightly stronger, at the expense of a significant decrease in the sample size. This correlation has been observed previously, though at slightly lower significance and with about one-third the number of low-z SNe (Walker et al. 2011). The data presented in Figure 14 span a large range of Mg II pEW values.
The significant correlations between c and both the pEWs of Mg II and Fe II are promising. Measuring the pEW of a broad feature in a single spectrum near maximum brightness is much simpler than obtaining photometric data for a full light curve that then needs to be modelled by a light-curve fitter (such as SALT2). Furthermore, since interstellar reddening cannot affect pEWs significantly, it seems that it is the intrinsic colour of the SN that is correlated with both the pEWs of Mg II and Fe II. As with all other spectral measurements discussed in this work, we constructed Hubble diagrams using the pEWs of Mg II and Fe II and models of the form shown in Equations 4-7. All but one of these models performed worse than the standard (x1, c) model. The model including x1, c, and the pEW of Mg II was only as accurate as the standard model (∆x1,c = −0.004 ± 0.004). However, using the pEW of Mg II or Fe II as a replacement for c, or in addition to c (along with x1), is a tantalising possibility that should be explored using future datasets. Furthermore, we attempt two Hubble diagrams using no light-curve information whatsoever. One uses only the pEWs of the Si II λ4000 feature and the Mg II complex, and the other uses only the pEWs of the Si II λ4000 feature and the Fe II complex. The idea is that the pEW of Si II λ4000 is a good proxy for x1, and the pEWs of the Mg II and Fe II features are reasonably good proxies for c. These Hubble diagrams included only a subset of the data used in the flux-ratio study (see Section 4.10) since they needed to have well-measured pEWs. While both the Mg II and Fe II models had quite low WRMS values (0.274 and 0.297, respectively), they were not as low as the WRMS values using the standard (x1, c) model (0.118 and 0.121, respectively).

S II "W"

As discussed in Section 4.3, the depth of the bluer absorption of the S II "W" feature relative to the pseudo-continuum was shown by Blondin et al. (2011) to decrease the scatter of Hubble residuals by about 10 per cent. In that section we showed that the relative depths of the redder absorption of the S II "W" in the BSNIP data were marginally correlated with x1, opposite to what was seen by Blondin et al. (2011). However, both studies agree that c and the colour- and width-corrected Hubble residual are uncorrelated with the relative depth of the S II "W." As discussed in BSNIP II, the relative depth of a spectral feature relies on a spline fit to the spectra and can fairly easily be contaminated by local noise. The pEW, however, is less prone to this type of contamination, relies only on the definition of the pseudo-continuum (and not any additional fit to the data), and often contains the same information as the relative depth. For these reasons the pEW values were used in favour of the a values in the analysis performed in BSNIP II. Furthermore, both Blondin et al. (2011) and BSNIP II measure the pEW of the entire S II "W" feature, and thus pEW values are a fairer comparison between the two studies than are the a values. We find that c is uncorrelated with the pEW of the S II "W" (correlation coefficient of 0.15 for objects with c < 0.5). In contrast with the relative depth of the S II "W," however, x1 is also uncorrelated with the pEW of the S II "W" (correlation coefficient −0.16). This is significantly weaker than the correlation between x1 and the relative depth of this feature found in Section 4.3.
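Equations 4-7 themselves are not reproduced in this excerpt; generically, each such model corrects the Hubble residuals with a linear combination of x1, c, and/or a spectral indicator. A hedged sketch of a weighted least-squares fit of that kind (all variable values below are synthetic, and the function names are ours):

```python
import numpy as np

def fit_distance_model(resid, err, *predictors):
    """Weighted least-squares fit of uncorrected Hubble residuals to a
    constant plus any number of predictors (e.g., x1, c, a pEW).
    Returns the coefficients and the corrected residuals."""
    A = np.column_stack([np.ones_like(resid)] + list(predictors))
    w = 1.0 / err  # weight each row by 1/sigma
    coeffs, *_ = np.linalg.lstsq(A * w[:, None], resid * w, rcond=None)
    return coeffs, resid - A @ coeffs

# Synthetic inputs standing in for SALT2 x1, c, and a pEW measurement.
rng = np.random.default_rng(1)
n = 50
x1 = rng.normal(0.0, 1.0, n)
c = rng.normal(0.0, 0.1, n)
pew = rng.uniform(5.0, 25.0, n)
err = np.full(n, 0.15)
resid = -0.14 * x1 + 3.1 * c + rng.normal(0.0, 0.12, n)

coeffs, corrected = fit_distance_model(resid, err, x1, c, pew)
print("fitted coefficients (offset, x1, c, pEW):", np.round(coeffs, 3))
```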
Equations 4-7 were used to create Hubble diagrams involving the pEW of the S II "W," but none of these models led to an improvement in the WRMS.

Si II λ5972

In BSNIP II it was shown that the pEW of the Si II λ5972 feature correlated well with the spectral luminosity indicator (Si II) (see Section 4.5 for more information on this parameter). Thus, one might expect this pEW to be an accurate luminosity indicator as well, and in fact evidence for a correlation between the pEW of Si II λ5972 and both x1 and ∆m15(B) has been seen in previous work (Nordin et al. 2011b; Hachinger et al. 2006). Figure 15 shows the 55 SNe which have a SALT2 fit as well as a measured pEW for the Si II λ5972 feature. We find a correlation coefficient of −0.66 with a significance of > 3σ, which is stronger than what was found by Nordin et al. (2011b). As in the relationship between x1 and the pEW of Si II λ4000, the Ia-99aa objects appear to follow the relation. Similarly, a strong linear correlation has been observed between the pEW of Si II λ5972 and ∆m15(B) (Hachinger et al. 2006). This relationship is also found in the BSNIP data, as shown in Figure 16: there are 62 SNe, the parameters have a correlation coefficient of 0.76 (significant at > 3σ), and as with the relationship with x1, Ia-99aa objects occupy the bottom of the correlation while the Ia-norm and HV objects are highly overlapping. Hachinger et al. (2006) denote the Benetti type of each object on their plot of the pEW of Si II λ5972 versus ∆m15(B) and note that FAINT objects are found at the top of the correlation while LVG SNe are found at the bottom (with most HVG objects occupying the middle of the trend). The BSNIP data show a similar behaviour for the FAINT objects (again, much like the Ia-91bg objects in Figure 16); however, our data show no differentiation between the LVG and HVG objects in this parameter space. Finally, we note that when plotting the pEW of the Si II λ5972 feature against the MLCS2k2 ∆ parameter, nearly the exact same results are seen as those in Figure 16. As with the S II "W," no distance model utilising the pEW of Si II λ5972 led to an improvement in the Hubble residuals.

Si II λ6355

Much like the pEW of Si II λ5972, the pEW of Si II λ6355 has been seen to correlate marginally well with x1 and to separate various spectral subtypes when compared to ∆m15(B). In Figure 17 the 66 SNe with SALT2 fits and pEW values for the Si II λ6355 feature are shown. The pEW values are plotted against x1, c, and Hubble residuals corrected for light-curve width and colour (for objects that are part of the Hubble diagram). A correlation coefficient of −0.58 is calculated for the top panel, which is significant at > 3σ. This is consistent with what was observed in the data studied by Nordin et al. (2011b). As in the previous two relationships between x1 and the pEW of Si II features, the Ia-99aa objects follow the linear relation. We find that c is somewhat correlated with the pEW of Si II λ6355 (correlation coefficient of 0.43 and significance of ∼3σ) for objects with c < 0.5. This pEW is even less correlated with Hubble residuals corrected for x1 and c (correlation coefficient 0.23). A distance model which includes x1, c, and the pEW of the Si II λ6355 feature leads to a 4 per cent decrease in WRMS, a 6 per cent decrease in σpred, and is significant at the 1.2σ level. So while this is technically an improvement over the standard (x1, c) model, it may not actually be very helpful.
Plotting the pEW of Si II λ6355 versus ∆m15(B), Hachinger et al. (2006) are able to separate FAINT, LVG, and HVG objects relatively accurately. This is also seen, though at a lower significance, in the BSNIP data. In Figure 18 we plot 80 SNe; the parameters have a correlation coefficient of 0.45 (again with a significance of ∼3σ), but the Ia-91bg, Ia-99aa, and the lone Ia-91T objects are all reasonably well separated from the bulk of the SNe. There is even some evidence for a difference between HV and Ia-norm objects in this parameter space. As mentioned above, Hachinger et al. (2006) denote the Benetti type of each object on their plot of the pEW of Si II λ6355 versus ∆m15(B) and state that the three subtypes are well separated. Again the BSNIP data support this conclusion, but at a weaker significance. FAINT objects are found in the same part of parameter space as the Ia-91bg objects, while HVG and HV SNe tend to occupy a different part of parameter space compared with the LVG and Ia-norm/91T/99aa objects. The correlations between the pEW of Si II λ6355 and observed (B − V)max for the full sample and the less-reddened sample are weak (coefficients 0.12 and 0.16, respectively). Qualitatively, this matches earlier work, even though those studies plot intrinsic colours and we plot observed colours. The pEW of the Si II λ6355 feature is less correlated with observed (B − V)max than is the velocity near maximum brightness of that same spectral feature.

Ca II and O I

Much like Hachinger et al. (2006), we searched for possible correlations between the pEW of each spectral feature investigated and various photometric parameters. Many of the strongest and most interesting of these possible correlations have been discussed in the preceding sections. For Ca II H&K as well as the O I triplet, no pairs of pEWs and photometric parameters were found to have correlation coefficients > 0.4. However, the pEW of the Ca II near-IR triplet is found to correlate with ∆ (with a correlation coefficient of 0.66 and significance of > 3σ) and with c (with a correlation coefficient of 0.50 for SNe with c < 0.5, though the significance of this correlation is only at the ∼2σ level). One of these correlations is marginal; however, this spectral region has been studied very little in the past. A strength of the BSNIP data is that the average wavelength coverage (3300-10,400 Å; BSNIP I) is significantly wider than that of most other SN Ia spectral datasets. For example, one of the largest previously published SN Ia spectral datasets had an average wavelength coverage of 3700-7400 Å (Matheson et al. 2008). Thus, the Ca II H&K feature, the O I triplet, and the Ca II near-IR triplet have been ignored almost entirely in past spectral analyses like the one presented here. We once again constructed Hubble diagrams from Equations 4-7 using the pEW values of Ca II H&K, the O I triplet, and the Ca II near-IR triplet. All but two models were significantly worse at measuring distances than the standard (x1, c) model. The (x1, c, Ca II H&K pEW) and (x1, c, O I triplet pEW) models both slightly decreased the WRMS, but at almost imperceptible levels (∆x1,c = −0.0023 ± 0.0096 and ∆x1,c = −0.0063 ± 0.0148, respectively).
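For reference, a pEW is typically measured by drawing a straight-line pseudo-continuum between the feature's endpoints and integrating the fractional flux deficit; the exact endpoint definitions used here are given in BSNIP II. A minimal sketch (the toy spectrum and endpoint wavelengths below are hypothetical):

```python
import numpy as np

def pseudo_ew(wave, flux, lam_blue, lam_red):
    """pEW = integral of (1 - F/F_cont) dlambda across the feature,
    with a linear pseudo-continuum anchored at the two endpoints."""
    sel = (wave >= lam_blue) & (wave <= lam_red)
    w, f = wave[sel], flux[sel]
    cont = np.interp(w, [w[0], w[-1]], [f[0], f[-1]])  # straight line
    depth = 1.0 - f / cont
    # Trapezoidal integration over wavelength.
    return np.sum(0.5 * (depth[1:] + depth[:-1]) * np.diff(w))

# Toy spectrum: flat continuum with a Gaussian absorption trough at 4000 A.
wave = np.linspace(3800.0, 4200.0, 401)
flux = 1.0 - 0.4 * np.exp(-0.5 * ((wave - 4000.0) / 40.0) ** 2)
print(f"pEW ~ {pseudo_ew(wave, flux, 3880.0, 4120.0):.1f} Angstroms")
```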
The Si II Ratio

Historically, one of the first spectral luminosity indicators investigated was the Si II ratio, (Si II), defined by Nugent et al. (1995) as the ratio of the depth of the Si II λ5972 feature to the depth of the Si II λ6355 feature. Hachinger et al. (2006) redefined the Si II ratio as the pEW of Si II λ5972 divided by the pEW of Si II λ6355. In BSNIP II it was shown that these are nearly equivalent definitions, so in order to be consistent with that work we define the Si II ratio for the present study to be

(Si II) ≡ pEW(Si II λ5972) / pEW(Si II λ6355).

The Si II ratio has been shown to correlate with maximum absolute B-band magnitude and ∆m15(B), which is why it has been used as a spectral luminosity indicator (e.g., Nugent et al. 1995; Benetti et al. 2005; Hachinger et al. 2006). Figure 20 shows 62 SNe Ia with both ∆m15(B) and (Si II). The data are correlated with a correlation coefficient of 0.62, which is significant at the > 3σ level. Ia-91bg objects appear at the upper right of the plot and form a continuous relationship with the Ia-norm objects. Ia-99aa objects appear to lie above the main trend (though there are only a handful of these SNe in the figure), while HV objects lie below the main trend. When removing the Ia-99aa objects, the correlation increases slightly. If we instead tag each data point in Figure 20 by its Benetti type, we find that the FAINT objects are found in the upper right of the main trend (similar to the Ia-91bg objects, and as seen in previous studies, e.g., Benetti et al. 2005; Hachinger et al. 2006). The LVG and HVG objects are found in the lower-left portion of the plot with a significant level of overlap between the two subclasses. This differs from previous work, where there have been claims that LVG objects have larger Si II ratios and lie above the main trend (Benetti et al. 2005; Hachinger et al. 2006), though these studies and the BSNIP data both observe larger scatter in (Si II) values for the lower-∆m15(B) objects. When removing the LVG objects, the correlation is effectively unchanged. Comparing (Si II) to the MLCS2k2 ∆ parameter yields results similar to those seen in Figure 20. The BSNIP distribution of ∆m15(B) values, while not evenly distributed, is more continuous than in previous studies similar to the present one. For example, the data in Hachinger et al. (2006) contained only one object with ∆m15(B) between 1.5 and 1.7 mag, while the BSNIP data have 6 objects in that range. A more continuous distribution of ∆m15(B) values, combined with the spectroscopic subclasses presented in Figure 20, complicates the relatively simplistic view that underpins the basic Phillips relation. For 0.95 ≲ ∆m15(B) ≲ 1.0 mag there are Ia-99aa, Ia-norm, and HV objects. On the other end of the ∆m15(B) distribution, between ∼1.75 and ∼1.95 mag, there are Ia-91bg, Ia-norm, and again HV objects. Thus, for a given light-curve width (or decline rate), there exist SNe Ia of significantly different subclasses. As discussed in BSNIP II, objects tagged by SNID as Ia-91bg or Ia-99aa are the most spectroscopically peculiar objects and probably only represent the extreme ends of a continuous distribution of spectra. If true, this means that the most spectroscopically peculiar objects may not have the most extreme light curves. The reverse may also be true, namely that the SNe with the most extreme light curves may not be the most spectroscopically peculiar. This is further supported by the relatively wide scatter in the main trend of Figure 20. At any value of ∆m15(B) (with a significant number of objects) there is a broad range in (Si II) values. In BSNIP II, it was pointed out that the relative strength of the two Si II features that go into calculating (Si II) is fairly robust at differentiating between the various "SNID types" and "Wang types."
Thus, from a spectrum, either using SNID or the pEWs of Si II features, one may declare an object to be Ia-91bg or Ia-99aa, whereas based on the light curve of the same object it might be considered relatively normal. The significant amount of scatter in the correlation between the Si II ratio and ∆m15(B) also cautions one against simply measuring (Si II) from a single spectrum and then using that value and a fit to the data in Figure 20 to calculate a ∆m15(B) value. In Figure 21 we show the 51 BSNIP SNe which have SALT2 fits and Si II ratios within 5 d of maximum brightness. (Si II) is plotted against x1, c, and Hubble residuals corrected for colour only (for SNe which are used in the Hubble diagram). From the BSNIP data we find that the Si II ratio is only marginally correlated with x1 (top plot of Figure 21), with a correlation coefficient of −0.40 and significance of ∼2σ. This is a weaker relationship than what has been found before (Blondin et al. 2011). The Ia-99aa objects appear to be above the main trend, and most of the HV objects seem to be below it. The middle plot of Figure 21 shows no evidence for a correlation between (Si II) and c, with a correlation coefficient of 0.14 for objects with c < 0.5. The bottom plot of Figure 21 shows a low-significance correlation between the Si II ratio and Hubble residuals corrected for colour only (coefficient of 0.34), and it is again significantly weaker than what has been found before (Blondin et al. 2011). In fact, Blondin et al. (2011) go so far as to say that (Si II) acts as a replacement for x1, but the BSNIP data do not support such a claim. Using the current sample, the best model which includes the Si II ratio also includes both x1 and c, but it is only about as accurate as the standard (x1, c) model (∆x1,c = 0.0156 ± 0.0114). Interestingly, Blondin et al. (2011) found that the subluminous (but Ia-norm) SN 2000dk is a 2σ outlier in their plot of (Si II) versus c, but part of the main correlation of (Si II) versus x1. This object is in the BSNIP dataset and, while we agree that it is subluminous and spectroscopically normal, it is not a significant outlier in any of the three plots in Figure 21.

The Ca II Ratio

The Ca II ratio was defined by Nugent et al. (1995) as the ratio of the flux at the red edge of the Ca II H&K feature to the flux at the blue edge of that feature. In the notation from BSNIP II this is

(Ca II) ≡ F(red edge of Ca II H&K) / F(blue edge of Ca II H&K).

Like the Si II ratio, it has been found to correlate with maximum absolute B-band magnitude (Nugent et al. 1995). In BSNIP II it was shown that the Ca II ratio and the Si II ratio are uncorrelated, even though both of them have been used as spectral luminosity indicators. In Section 4.5 it was shown that (Si II) is correlated with ∆m15(B). Figure 22 illustrates that (Ca II) is correlated with ∆m15(B) as well (correlation coefficient 0.70, significance > 3σ). The plot contains 65 SNe; the least-squares linear fit to the data is shown as the solid line in Figure 22, and the dashed lines are the standard error. Interestingly, the HV objects seem to occupy a relatively narrow region of parameter space that is surrounded on all sides by mainly Ia-norm SNe. Ia-99aa objects mostly make up the lowest end of the linear trend, while one of the two Ia-91bg objects in the plot perhaps does not follow the main relationship. Unsurprisingly, comparing ∆ to (Ca II) results in the same trends seen in Figure 22.
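The edge fluxes entering (Ca II) can be read directly off a spectrum once the feature boundaries are fixed; BSNIP II gives the actual wavelength windows. A sketch under assumed edge wavelengths and window width (all numbers below are placeholders):

```python
import numpy as np

def edge_flux_ratio(wave, flux, lam_blue, lam_red, half_width=10.0):
    """Flux at the red edge of a feature divided by the flux at its blue
    edge, averaging over a small window around each edge wavelength."""
    def mean_flux(lam0):
        sel = np.abs(wave - lam0) <= half_width
        return flux[sel].mean()
    return mean_flux(lam_red) / mean_flux(lam_blue)

# Toy spectrum with a Ca II H&K-like absorption trough near 3800 A.
wave = np.linspace(3500.0, 4100.0, 600)
flux = 1.0 - 0.6 * np.exp(-0.5 * ((wave - 3800.0) / 60.0) ** 2)
print(f"(Ca II) ~ {edge_flux_ratio(wave, flux, 3620.0, 3980.0):.2f}")
```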
Figure 23 displays the 64 BSNIP SNe which have SALT2 fits as well as Ca II ratios within 5 d of maximum brightness. (Ca II) is plotted against x1, c, and Hubble residuals corrected for light-curve width and colour (for objects used when constructing the Hubble diagram). The Ca II ratio appears to be well anticorrelated with x1 (correlation coefficient −0.53 with significance > 3σ) and correlated with c (coefficient of 0.46, again with significance > 3σ). The bottom plot of Figure 23 shows that the x1- and c-corrected Hubble residuals and (Ca II) are not well correlated with each other (coefficient of 0.24). However, a model that uses x1, c, and the Ca II ratio decreases the WRMS by ∼6 per cent and σpred by ∼33 per cent, although the significance of this improvement is only at the 1.1σ level (∆x1,c = −0.0207 ± 0.0191).

The "SiS" Ratio

Analogous to the Ca II ratio, the "SiS ratio" was introduced by Bongard et al. (2006) as the ratio of the flux at the red edge of the S II "W" feature to the flux at the red edge of the Si II λ6355 feature. In the notation used in BSNIP II this is

(SiS) ≡ F(red edge of S II "W") / F(red edge of Si II λ6355).

In a sample of 8 SNe, (SiS) has been seen to correlate with maximum absolute B-band magnitude in the same way as (Ca II) (Bongard et al. 2006). The SiS ratio and the Si II ratio were found to be only marginally correlated in BSNIP II, and Figure 24 (which contains 72 SNe) shows that (SiS) is anticorrelated with ∆m15(B) (correlation coefficient −0.50 and significance ∼3σ). Note that this relationship is in the opposite sense of the one between the Ca II ratio and ∆m15(B). Here Ia-99aa/91T objects lie above the main relationship while the Ia-norm and HV SNe are well mixed. Once again, comparing ∆ or x1 to (SiS) yields similar results to what is seen in Figure 24. The SiS ratio appears to be well correlated with c when including the most reddened objects. In Figure 25 there are 71 SNe, and the correlation coefficient is −0.61 (significant at the > 3σ level). However, if one removes the most highly reddened objects with c > 0.5, the correlation weakens slightly to −0.56 (still with significance > 3σ). No distance model involving the SiS ratio is more accurate than the (x1, c) model. However, when (SiS) is combined with just c, or with both x1 and c, the accuracy is on par with the standard (x1, c) model (∆x1,c = 0.0209 ± 0.0223 and ∆x1,c = 0.0074 ± 0.0105, respectively). We also find (in Section 4.10) that out of 17,822 flux ratios combined with c, the most accurate distances are calculated using flux ratios that are effectively the SiS ratio, and that this is nearly as accurate as using the standard (x1, c) model. Finally, we note that Blondin et al. (2011) found that (SiS) performs significantly worse than the usual (x1, c) model.

The "SSi" Ratio

Yet another possible spectroscopic luminosity indicator is the ratio of the pEW of the S II "W" to that of the Si II λ5972 feature (Hachinger et al. 2006). This SSi ratio is defined in BSNIP II as

(S,Si) ≡ pEW(S II "W") / pEW(Si II λ5972).

Hachinger et al. (2006) found that the SSi ratio is linearly anticorrelated with ∆m15 (which is opposite to the relationship between (Si II) and ∆m15). The analysis in BSNIP II seemed to confirm this observation by showing that the SSi ratio was strongly anticorrelated (nonlinearly) with the Si II ratio. (S,Si) is plotted against ∆m15(B) for 59 SNe in Figure 26. The results of Hachinger et al. (2006) and the speculation in BSNIP II are confirmed: the SSi ratio is strongly anticorrelated with ∆m15(B) (correlation coefficient of −0.67 with significance > 3σ). Here, the Ia-91bg and Ia-99aa objects follow the main trend and are found at the lower and upper ends of the correlation, respectively.
There are only a few HV objects in Figure 26, but there is some evidence that they have larger than average (S,Si) values (which was also seen in Fig. 16 of BSNIP II). Plots of ∆ and x1 versus (S,Si) display trends like that of Figure 26. However, the Ia-99aa objects fall off of the main correlation in both of these parameter spaces. In both cases these SNe have lower (S,Si) values than one would expect from the main correlation.

The "SiFe" Ratio

Analogous to the SSi ratio, the "SiFe ratio" was defined as the ratio of the pEW of the Si II λ5972 feature to that of the Fe II complex, and it was shown to be an accurate spectroscopic luminosity indicator (Hachinger et al. 2006). In BSNIP II, (Si,Fe) was defined as

(Si,Fe) ≡ pEW(Si II λ5972) / pEW(Fe II),

and found to be strongly correlated with the Si II ratio. We plot (Si,Fe) versus ∆m15(B) for 53 SNe in Figure 27. The results of Hachinger et al. (2006) and the speculation in BSNIP II are again confirmed: the SiFe ratio is strongly (linearly) correlated with ∆m15(B), with a correlation coefficient of 0.68 (and with significance > 3σ). The solid line in the figure is the linear least-squares fit and the dashed lines are the standard error of the fit. Similar to (Si II), Ia-99aa objects are found at the lowest end of the linear trend while the Ia-91bg objects in the plot appear to be above the main relationship. In Figure 27 there are only a few HV SNe, but they appear to have smaller than average (Si,Fe) values (which was also seen in Fig. 17 of BSNIP II). When comparing ∆ and x1 to (Si,Fe), the basic trend seen in Figure 27 is recovered, but with larger scatter (even though the correlation coefficients are similar).

Bailey et al. (2009) found that by using ratios of fluxes from a single, binned SN Ia spectrum they could decrease the scatter in their Hubble diagrams. These ratios are defined as R(λy/λx) ≡ F(λy)/F(λx), where λy and λx are the rest-frame central wavelengths of given bins. (Note that this definition is the reciprocal of the one used by Blondin et al. 2011; however, this only really matters for the plots of λy versus λx in Figure 29. Thus, each panel in Figure 7 of Blondin et al. 2011 is the transpose of the panels in Figure 29. When using either definition, the first wavelength listed for a given R is the numerator in the actual ratio of fluxes.) The spectra are forced to cover a wavelength range of exactly 3500-8500 Å and are binned into 134 equal-sized (in ln λ) bins (corresponding to 2000 km s−1 per bin). The data are also deredshifted and dereddened using the redshift and reddening values presented in Table 1 of BSNIP I and assuming that the extinction follows the Cardelli et al. (1989) extinction law modified by O'Donnell (1994). As in Bailey et al. (2009) and Blondin et al. (2011), a colour-corrected version of this flux ratio (Rc(λy/λx)) is also used; it is defined as the ratio of fluxes as measured from a spectrum that has been corrected for SALT2 c using the colour law from Guy et al. (2007). We use these colour-corrected flux ratios when testing models that also adopt the SALT2 colour parameter (i.e., Equations 6 and 7).

Arbitrary Flux Ratios

As with the rest of the current study, we only investigate spectra within 5 d of maximum brightness, since it was shown in BSNIP II that the spectra do not evolve significantly during these epochs. Also, as mentioned above, we do not use only spectra within 2.5 d of maximum (as has been done previously; Bailey et al. 2009; Blondin et al. 2011) because the significance of our results would be weakened due to the smaller number of objects. Since the average spectrum in BSNIP extends to 3300 Å (BSNIP I), we perform the current flux-ratio analysis with the requirement that all spectra cover a wavelength range of 3300-8500 Å. No ratios involving wavelengths below 3500 Å are found to decrease the WRMS significantly.
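A sketch of the binning and ratio computation described above (function and variable names are ours; real use would start from a deredshifted, dereddened spectrum). Note that 134 equal ln λ bins over 3500-8500 Å give ln(8500/3500)/134 ≈ 0.0066 in ln λ, i.e., roughly 2000 km s−1 per bin:

```python
import numpy as np

def bin_spectrum_lnlambda(wave, flux, lam_min=3500.0, lam_max=8500.0, nbins=134):
    """Rebin a spectrum onto equal-sized bins in ln(lambda).
    Assumes the input spectrum samples every bin at least once."""
    edges = np.exp(np.linspace(np.log(lam_min), np.log(lam_max), nbins + 1))
    centres = np.sqrt(edges[:-1] * edges[1:])          # geometric centres
    idx = np.digitize(wave, edges) - 1                 # bin index per pixel
    binned = np.array([flux[idx == i].mean() for i in range(nbins)])
    return centres, binned

def flux_ratio(centres, binned, lam_y, lam_x):
    """R(lam_y/lam_x): binned flux nearest lam_y over that nearest lam_x."""
    fy = binned[np.argmin(np.abs(centres - lam_y))]
    fx = binned[np.argmin(np.abs(centres - lam_x))]
    return fy / fx

# Toy usage with a synthetic spectrum.
wave = np.linspace(3400.0, 8600.0, 5000)
flux = 1.0 + 0.1 * np.sin(wave / 300.0)
centres, binned = bin_spectrum_lnlambda(wave, flux)
print(f"R(6630/4400) ~ {flux_ratio(centres, binned, 6630.0, 4400.0):.3f}")
```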
We also vary the binning of the spectra used in the flux-ratio analysis and investigate data with bin sizes of 4000, 8000, and 10,000 km s−1. We find that as the spectra are binned more coarsely, the WRMS values increase for all ratios. This can be explained by the idea that larger bins will "blend" wavelength bins of flux ratios that decrease the Hubble residuals with ones that do not, and will thus add "noise" into the flux ratios. Since we are utilising SALT2 fits and Hubble diagrams, we again require that SNe have zhelio > 0.01, c < 0.50, and reduced χ² < 2. Blondin et al. (2011) also require that the absolute difference between the B − V colour at maximum brightness derived from the spectrum and that derived from the photometry be less than 0.1 mag. This is used as a proxy for their relative spectrophotometric accuracy. In BSNIP I it was shown (in Table 3) that the relative spectrophotometric accuracy is often < 0.1 mag for the BSNIP data. In fact, the B − V colour is only inaccurate at the 0.1 mag level for the oldest (t > 20 d) and noisiest (S/N < 20) BSNIP spectra. Therefore, the spectra investigated here should all be spectrophotometrically accurate enough for the flux-ratio analysis. Of the data studied here, 62 objects have flux ratios calculated for the entire wavelength range mentioned above and reliable SALT2 fits that pass our Hubble diagram criteria (see Section 3.3). We randomly divide our sample into 9 groups of 7 SNe and 1 group with 6 SNe when doing 10-fold CV.
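The cross-validated ranking can be sketched as follows (a simplified stand-in for the actual procedure; the fold assignment, weighting, and synthetic usage values are assumptions):

```python
import numpy as np

def kfold_wrms(resid, err, predictors, k=10, seed=0):
    """Cross-validated WRMS for a linear residual-correction model:
    fit the coefficients on k-1 folds, correct the held-out fold,
    and pool the out-of-fold residuals before computing the WRMS."""
    n = len(resid)
    order = np.random.default_rng(seed).permutation(n)
    A = np.column_stack([np.ones(n)] + list(predictors))
    out = np.empty(n)
    for hold in np.array_split(order, k):
        train = np.setdiff1d(order, hold)
        w = 1.0 / err[train]
        coef, *_ = np.linalg.lstsq(A[train] * w[:, None],
                                   resid[train] * w, rcond=None)
        out[hold] = resid[hold] - A[hold] @ coef
    wgt = 1.0 / err**2
    return np.sqrt(np.sum(wgt * out**2) / np.sum(wgt))

# Example with synthetic data (x1, c, and one flux ratio as predictors).
rng = np.random.default_rng(2)
n = 62
x1, c, R = rng.normal(0, 1, n), rng.normal(0, 0.1, n), rng.normal(1, 0.2, n)
err = np.full(n, 0.15)
resid = -0.14 * x1 + 3.1 * c + rng.normal(0, 0.12, n)
print(f"CV WRMS = {kfold_wrms(resid, err, [x1, c, R]):.3f} mag")
```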
Flux-Ratio Results

The "best" flux ratios for each model are chosen to be the ones with the lowest WRMS values. Ranking by other parameters, such as the intrinsic prediction error (as used in Blondin et al. 2011), yields different values for the best-performing flux ratios. However, since our main goal is to minimise the scatter in the Hubble diagram, and since the WRMS has a relatively straightforward interpretation, we rank the best flux ratios by their WRMS values. As discussed in Section 3.3, for each model involving a flux ratio (Equations 4-7) we calculate, in addition to the WRMS, the intrinsic prediction error (σpred), the intrinsic correlation (ρx1,c) of the residuals with residuals using the (x1, c) model (Equation 8), and the difference (∆x1,c) in intrinsic prediction error with respect to the (x1, c) model and its significance. These parameters, along with the wavelengths of the top 10 ratios for each model which includes a flux ratio (Equations 4-7), are shown in Table 1. Also displayed in Table 1 are the WRMS and σpred of our benchmark (x1, c) model. Figure 28 shows the Hubble diagram residuals for this model versus redshift for the 62 SNe Ia mentioned above. The grey band indicates the WRMS for the model. Figure 29 shows the WRMS (left column) and the absolute Pearson correlation coefficient of the correction term (either γR or γRc) with the uncorrected Hubble residuals (right column) for all 17,822 (= 134 × 133) flux ratios in all four models involving a flux ratio: R, (x1, R), (c, Rc), and (x1, c, Rc) (top to bottom, respectively). All ratios with WRMS values 2σ above the mean are displayed using the same colour. The left column of Figure 29 is a proxy for the overall scatter in the model, while the right column indicates how much "new" information is gained by adding in the correction term (γR or γRc). In general, a model may have a low WRMS (or σpred), meaning that the model is fitting the data well; but since the data have uncertainties associated with them, the model might be overfitting the data and actually end up fitting noise. One way to discern whether this is the case is to see how well the correction terms correlate with the uncorrected Hubble residuals. As described in Section 4.2.2, if the terms that do not contain x1 or c (i.e., γR or γRc) are well correlated with the uncorrected Hubble residuals, then the measured observable is fitting information that actually exists in the data (as opposed to noise). However, a large correlation does not necessarily imply a good model. This is obvious (for example) in the top row of Figure 29, where most flux ratios have large WRMS values, including ones that also have quite high correlations between the correction term and the uncorrected Hubble residuals.

Model 1: R Only

Using only a flux ratio (Equation 4) leads to no improvement over the usual (x1, c) model (Equation 8). In fact, this model seems to perform significantly worse, as can be seen by the relatively large ∆x1,c values. The WRMS and σpred of the "best" ratios are quite a bit larger than those of the (x1, c) model. This differs from the conclusion of previous work, which found that models using a flux ratio alone could perform as well as the (x1, c) model (Bailey et al. 2009; Blondin et al. 2011). The best-performing ratio in the R-only model, R(7770/3750), is not correlated with x1, but is highly correlated with c (correlation coefficient 0.81). This implies that R(7770/3750) is effectively a colour indicator. The best ratio using this model seen by Blondin et al. (2011), R(6630/4400), was found in their study to be similarly correlated with SALT2 colour and to improve the Hubble diagram residuals over using the (x1, c) model (albeit with a low significance). We tested all of our flux-ratio models using a randomly selected subset of 26 SNe from the BSNIP sample in order to match the number of objects used by Blondin et al. (2011). These models all yielded WRMS values similar to what was found by Blondin et al. (2011).

Model 2: x1 and R

Blondin et al. (2011) found that the (x1, R) model did better than the (x1, c) model. The best flux ratio using the (x1, R) model, R(6990/3750), is (like that in the R-only model) not correlated with x1, but strongly correlated with c (coefficient of 0.83). The ratio R(6990/3750) is therefore a proxy for c, and so it is unsurprising that this is the top-ranked ratio in a model employing only a ratio and x1. Using this model, Blondin et al. (2011) again found that the ratio R(6630/4400) was best and, as mentioned above, it was similarly correlated with SALT2 c. Nearly all of the top ten ratios for the R-only and (x1, R) models have very similar numerator and denominator wavelengths, and the difference in wavelength between the two fluxes is significant (3000-4000 Å). This again supports the idea presented above that the top-ranked flux ratios for these two models are effectively proxies for colour.

Model 3: Rc and c

Some of the top-ranked flux ratios with the SALT2 colour parameter c (Equation 6) are consistent with the results when using the (x1, c) model.
This lack of improvement is once again at odds with what was seen by Blondin et al. (2011). However, as mentioned in Section 4.10.2, the apparent improvement in Blondin et al. (2011) is likely due to their smaller sample size and the relatively poor performance of the (x1, c) model. Many ratios appear to have large correlations between the correction terms and uncorrected residuals, but the lowest WRMS values are tightly clustered in wavelength space. This is also apparent in Table 1, where all but one of the top ten ratios for the (c, Rc) model involve wavelengths near 5600 Å and 6300 Å. Flux ratios which include these wavelengths are effectively the same as (SiS), the SiS ratio (Section 4.7). The SiS ratio was shown above to be anticorrelated with ∆m15(B) (as well as correlated with x1). The best flux ratio using the (c, Rc) model, Rc(5580/6330), is strongly correlated with x1 (correlation coefficient 0.83) and effectively uncorrelated with c (correlation coefficient −0.18). Therefore, the top-ranked ratio for this model can be thought of as equivalent to (SiS) and/or x1. Furthermore, the lack of correlation between Rc(5580/6330) and c implies that dereddening the data using the SALT2 c and colour law is working as intended. With the (c, Rc) model, Blondin et al. (2011) showed that their top-ranked ratio was Rc(6420/5290). This is similar to the reciprocal of the best ratio found with the BSNIP data; thus, it is not surprising that they find a strong anticorrelation between Rc(6420/5290) and x1. Their ratio also leads to a larger decrease in WRMS (∼15 per cent; Blondin et al. 2011). Bailey et al. (2009) again showed an improvement when using their top ratio for this model, Rc(6420/5190), which is also very close to the best ratio of Blondin et al. (2011), as well as the reciprocal of the best ratio presented here. Despite this similarity in wavelength space, the top-ranked ratios for this model from Bailey et al. (2009) and Blondin et al. (2011) do not appear among the top ten ratios found here.

Model 4: Rc, x1, and c

The most accurate distances are obtained when combining a flux ratio with both x1 and c (Equation 7). In fact, the top ten ratios lead to improvements at about the 1-2σ level (see Table 1). Furthermore, a good fraction of the ratios lead to some improvement over the standard model. Figure 30 shows a histogram of the WRMS values for the (x1, c, Rc) model (solid line) and the expected Gaussian distribution (dotted line). The short-dashed and long-dashed lines are the peak of the WRMS distribution and the WRMS value of the standard (x1, c) model, respectively. The distribution is roughly Gaussian when one ignores the high-WRMS tail (i.e., WRMS ≳ 0.17, or ratios with WRMS values that are > 5σ away from the peak of the distribution). The bottom panel of Figure 30 shows a close-up view of the smallest WRMS values. The fact that the best-performing flux ratios (i.e., the ones yielding the smallest WRMS values) lie quite a bit above the Gaussian expectation seems to indicate that these are in fact statistically significant decreases in the WRMS, as indicated by the last column of Table 1. The (x1, c, Rc) model and its best ratio, Rc(3780/4580), decrease the WRMS by ∼10 per cent and σpred by ∼34 per cent from the (x1, c) model. The Hubble diagram for the top-ranked flux ratio using the (x1, c, Rc) model is shown in Figure 31, along with the WRMS for the best model (the grey band). The decrease in WRMS using this model is smaller than previously seen, but the decrease in σpred is larger and the overall improvement is at a higher significance; Blondin et al. (2011) found a 20 per cent decrease in WRMS with a 1.6σ significance. The fact that the current dataset has nearly three times as many objects as the sample studied by Blondin et al. (2011) is likely the reason why the (x1, c, Rc) model yields more significant improvements in WRMS values.
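Ranking all 17,822 ratios amounts to a brute-force loop over ordered bin pairs. A compressed sketch (ranking here by in-sample WRMS for brevity, whereas the study pools cross-validated residuals; names and structure are ours):

```python
import numpy as np
from itertools import permutations

def search_flux_ratios(resid, err, x1, c, binned_fluxes, centres):
    """Brute-force search over all ordered bin pairs (134 x 133 ratios),
    ranking each (x1, c, Rc) model by WRMS and returning the best pair.
    `binned_fluxes` is an (n_sne, n_bins) array of colour-corrected,
    log-lambda-binned fluxes."""
    n_bins = binned_fluxes.shape[1]
    wgt = 1.0 / err**2
    best_wrms, best_pair = np.inf, None
    for iy, ix in permutations(range(n_bins), 2):
        R = binned_fluxes[:, iy] / binned_fluxes[:, ix]
        A = np.column_stack([np.ones_like(resid), x1, c, R])
        coef, *_ = np.linalg.lstsq(A / err[:, None], resid / err, rcond=None)
        out = resid - A @ coef
        w = np.sqrt(np.sum(wgt * out**2) / np.sum(wgt))
        if w < best_wrms:
            best_wrms, best_pair = w, (centres[iy], centres[ix])
    return best_wrms, best_pair
```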
Most ratios have low values of WRMS and extremely large correlations between the correction terms and uncorrected residuals, implying that the (x1, c, Rc) model is performing better overall than any of the other models investigated. This is consistent with Blondin et al. (2011), although they found no flux ratio to have a strong correlation between the correction terms and the uncorrected Hubble residuals. The wavelengths of six of the top ten ratios for this model are near ∼3750 Å and ∼4550 Å, with wavelength baselines of ∼800 Å. These approximately correspond to the midpoint of the Ca II H&K feature and the border between the Mg II and Fe II complexes. Figure 32 shows the best flux ratio using the (x1, c, Rc) model, Rc(3780/4580), versus the SALT2 parameters x1 and c. There is a slight correlation of this ratio with x1 (correlation coefficient 0.46) and effectively no correlation with c. Since this ratio is essentially uncorrelated with both SALT2 parameters and it decreases the WRMS at the ∼2σ level, it yields useful information about each SN beyond light-curve stretch and colour. It is intriguing that this new information is found at the blue end of the optical range, since there is evidence that spectral features in this region do not correlate with light-curve parameters, yet contain information related to SN Ia luminosity (e.g., Foley et al. 2008). Note that Blondin et al. (2011) found that flux ratios with wavelengths near ∼5300 Å and baselines of < 400 Å gave the best results for the (x1, c, Rc) model. Using the best ratio from their study, Rc(5690/5360), the BSNIP data yield a WRMS of ∼0.15 mag, which is not as good as our top ten ratios.

CONCLUSIONS

This is the third paper in the BSNIP series; it presents a comparison between spectral feature measurements and photometric properties of 108 low-redshift (z < 0.1) SNe Ia within 5 d of maximum brightness. The spectral data all come from BSNIP I, and the photometric data come mainly from the LOSS sample and are published by Ganeshalingam et al. (2010). The details of the spectral measurements can be found in BSNIP II, and the light-curve fits and photometric parameters are in Ganeshalingam et al. (in preparation). A combination of light-curve parameters (specifically the SALT2 stretch and colour parameters x1 and c) and spectral measurements is used to calculate distances to SNe Ia. We then compare the residuals from these models to the standard model which only uses light-curve stretch and colour. Future BSNIP papers will incorporate host-galaxy properties and SN spectra at later epochs into the analysis presented here.

Summary of Investigated Correlations

The velocity gradient (Benetti et al. 2005) is compared to the light-curve width, and it is shown that, as in BSNIP II, the various classifications based on the value of v̇ overlap significantly. Similarly, velocities at maximum brightness (v0) are compared to photometric observables and classifications based on velocity gradient, and there is a large amount of overlap in all of these parameters as well. In earlier work, HV and HVG objects have been used almost interchangeably, as have normal-velocity and LVG objects (e.g., Hachinger et al.
2006; Pignata et al. 2008; Wang et al. 2009). However, the analyses of BSNIP II and this work show that these associations are not as distinct as previously thought. The measured velocities of the Si II λ6355 and Si II λ5972 features near maximum brightness are uncorrelated with observed (B − V)max. Furthermore, the HV objects and normal-velocity objects have similar distributions of observed (B − V)max values, though the HV objects tend to have slightly larger observed (B − V)max. When distances to SNe Ia are computed using light-curve width (x1) and colour (c) parameters, no significant improvement in the accuracy of the distances is found when the velocity of the Si II λ6355 feature is added. This is contrary to what was seen by Blondin et al. (2011). Furthermore, models involving x1, c, and the velocity of the Ca II H&K feature fare as poorly. However, the velocity of the S II "W," when used in conjunction with x1 and c, leads to a decrease in the WRMS of the distances at the ∼1σ level; otherwise, none of the velocity measurements of the features analysed here leads to an improvement in the Hubble residuals. The pEW of the Si II λ4000 feature is strongly anticorrelated with x1, which confirms many previous studies (Arsenijevic et al. 2008; Walker et al. 2011; Blondin et al. 2011; Nordin et al. 2011b; Chotard et al. 2011). Furthermore, when using a model that includes x1, c, and the pEW of this feature, the residuals are as low as when using the standard (x1, c) model. The pEWs of the Mg II and Fe II complexes are both correlated with c, and since interstellar reddening cannot affect pEWs significantly, it seems that it is the intrinsic colour of the SN which is correlated with both of these pEWs. However, when using either of these pEWs (as a proxy for c) combined with the pEW of the Si II λ4000 feature (as a proxy for x1), the Hubble diagram residuals are significantly larger than when simply using x1 and c. The pEWs of both Si II λ5972 and Si II λ6355 are well correlated with x1 and correlated with c, but the use of the Si II λ5972 pEW does not improve distance calculations. However, using the Si II λ6355 pEW (along with x1 and c) leads to an improvement in the WRMS residuals at the 1.2σ level. Finally, the pEW of the Ca II near-IR triplet is correlated with c and with the MLCS2k2 light-curve width parameter ∆. This feature and the O I triplet have not been investigated thoroughly in studies similar to this one, since other large SN Ia spectral datasets often do not include these spectral regions. The Si II ratio, used as a luminosity indicator previously (e.g., Nugent et al. 1995; Benetti et al. 2005; Hachinger et al. 2006), is found to be well correlated with ∆m15. However, we caution that at a given value of ∆m15 there can exist various spectroscopically classified subtypes of SNe Ia. This ratio is also found to be correlated with observed (B − V)max, which has been seen in other work (Altavilla et al. 2009). We also show that the Si II ratio is not an accurate proxy for x1 when calculating distance moduli. A model using c and (Si II) performs significantly worse than the usual (x1, c) model, contrary to the conclusion of Blondin et al. (2011). On the other hand, the Ca II ratio is found to be a good indicator of light-curve width, as it is well correlated with the MLCS2k2 ∆ parameter and with ∆m15. The BSNIP data also indicate that the SiS ratio is correlated with both x1 and c, and distance models using (SiS) with just c or with both c and x1 perform as well as the standard (x1, c) model. Finally, we confirm the results of Hachinger et al.
(2006) that the SSi and SiFe ratios are both accurate luminosity indicators, as they are both well correlated with ∆m15. Following Bailey et al. (2009) and Blondin et al. (2011), we calculate Hubble diagram residuals using models which include combinations of the usual light-curve parameters (width and colour) and arbitrary sets of flux ratios. A total of 17,822 different ratios of fluxes are used alone, with x1, with c, and with both x1 and c, to investigate whether any of these models might improve the accuracy of SN Ia distance measurements. No models utilising only a flux ratio, or a flux ratio and x1, are found to decrease the Hubble residuals. A handful of models using a flux ratio and c are seen to perform as well as the standard (x1, c) model. Interestingly, most of these best ratios are extremely close to the SiS ratio mentioned above. These results differ from those of the previous studies of Bailey et al. (2009) and Blondin et al. (2011), both of which found that flux ratios alone or in conjunction with light-curve information would usually perform better than the (x1, c) model. This may be due to the fact that our "standard model" (without a spectral indicator) already performs significantly better than that of Bailey et al. (2009) or Blondin et al. (2011). The differences may also be caused by the larger number of spectroscopically peculiar SNe Ia in the BSNIP sample. Finally, when combining a flux ratio with both x1 and c, our top-performing ratio, Rc(3780/4580), decreases the Hubble residuals by 10 per cent, which is significant at the 2σ level. The WRMS of the residuals using this model is 0.130 ± 0.017 mag, as compared to 0.144 ± 0.019 mag when using the same sample with the standard (x1, c) model. This Hubble diagram has one of the smallest scatters ever published, and at the highest significance ever seen in such a study. The wavelengths involved in most of the best-performing ratios in the (x1, c, Rc) model approximately correspond to the midpoint of the Ca II H&K feature and the border between the Mg II and Fe II complexes. This supports previous work which has shown that near-UV spectra of SNe Ia contain information related to SN Ia luminosity which is not necessarily captured in the photometry (e.g., Foley et al. 2008).

The Future

New large-scale surveys are already obtaining SN Ia data, and they are observing to higher redshifts and gathering larger amounts of data than what is in the BSNIP sample (e.g., Rau et al. 2009; Law et al. 2009; Kaiser et al. 2002). Even larger surveys at even higher redshifts are also planned (e.g., LSST, WFIRST). Many more SNe Ia will be discovered than can be rigorously observed; we are quickly entering the age of SN research where we are limited by the follow-up observations. Thus, there must be significant effort put forth to determine the most efficient way to monitor and utilise such vast quantities of objects. That is one of the major goals of BSNIP. Soon, for the vast majority of objects, there will only be (at best) a handful of photometric observations near maximum brightness. Those, combined with a relatively low-S/N spectrum near maximum, will likely be all the follow-up observations we get. From the work presented here (and in BSNIP II) we have shown that there still is hope. The pEW of the Si II λ4000 feature is a good indicator of light-curve width, and the pEWs of the Mg II and Fe II complexes are relatively good proxies for colour.
Unfortunately, the correlations between these spectral measurements and the corresponding photometric properties are not perfect, and distance calculations that employ only these spectroscopic measurements do not perform as well as the standard model which uses light-curve width and colour. However, this is still a promising avenue for further investigation using new datasets that are even larger than BSNIP. Other correlations that appear marginal in the BSNIP dataset, or models tested here that performed only as well as the usual (x1, c) model, should also be reexamined in the future. Occasionally, one will be fortunate enough to have sufficient photometric observations to produce a light curve for which SALT2 (or another light-curve fitter) is able to determine a width and colour. In these cases it appears that the light-curve parameters can be combined with a flux ratio from a spectrum near maximum brightness to improve the accuracy of SN Ia distances. The best ratios for this, as determined from the BSNIP data, are all near Rc(∼3750/∼4550). This is all somewhat heartening for surveys that will discover and monitor SNe Ia at higher redshifts. Si II λ4000, the Mg II and Fe II complexes, and Rc(∼3750/∼4550) all involve spectral features which are toward the blue end of the optical range. This is critical for higher-z surveys since, as pointed out in BSNIP II, the red wing of the typical near-maximum Si II λ6355 feature becomes redshifted beyond ∼1 µm for z ≳ 0.6. Furthermore, as discussed multiple times in BSNIP II, measuring fluxes and pEWs directly from a spectrum is much easier and less reliant on smoothing models or functional-form assumptions than measuring velocities, for example. To quote the concluding paragraph of Blondin et al. (2011), "Do spectra improve distance measurements of SN Ia? Yes, but not as much as we had hoped." We have to agree with the authors of this quote both on the objective part (i.e., spectra do improve distance measurements) and on the subjective part (i.e., we hoped they would improve things even more).
Design of porous Eudragit® L beads for floating drug delivery by wax removal technique

Graphical Abstract

The floating beads were fabricated by a novel wax removal technique. Metronidazole was successfully loaded into the Eudragit® L beads. Adding wax into the beads improved floating properties and drug release behavior. Release kinetics were revealed for a better understanding of the drug release mechanism.

Introduction

Oral delivery is the preferred route for drug administration due to its ease of use, low cost, and high patient compliance. Most conventional oral drug delivery systems have shown some limitations related to fast gastric emptying time and poor bioavailability of certain drugs due to incomplete absorption and degradation in the gastrointestinal tract [1]. Controlled-release drug delivery systems have therefore been developed to provide predetermined drug release at a predictable and controlled rate [2,3]. Nevertheless, differences in gastrointestinal (GI) physiology, such as pH and motility, result in subject variability, demonstrating significant effects on drug delivery behavior. To overcome this obstacle, gastroretentive drug delivery systems have been developed to prolong the overall GI transit time, thereby resulting in improved oral bioavailability of poorly water-soluble drugs [4]. Furthermore, gastric retention combined with local drug release may be an advantageous strategy for Helicobacter pylori eradication in the stomach mucosa [3]. Various GI targeting and retaining dosage forms, such as intragastric floating systems [5], mucoadhesive systems, swelling or expanding systems [6], magnetic systems and unfoldable systems, have been developed to overcome these limitations [5]. One of the thriving trends in enhancing drug residence in the stomach is the floating drug delivery system (FDDS). Several approaches have been used to encourage buoyancy of the dosage form in the stomach. The principal rule is to provide a density lower than that of the gastric fluid, so that the dosage form is capable of floating on the gastric juice in the stomach. Based on the buoyancy mechanism, FDDS may be roughly grouped into: hydrodynamically balanced systems, gas-generating systems, raft-forming systems and low-density systems [7,8]. Numerous polymers, such as polycarbonate, Eudragit® S, cellulose acetate, calcium alginate, agar and low-methoxylated pectin, are commonly used as drug carriers in FDDS [9]. Among the several types of FDDS, the low-density system (density < 1 g/cm³) offers immediate floating on the stomach contents. This can eliminate the problem of premature evacuation of the FDDS through the pyloric sphincter. However, one disadvantage of this technique is the high initial burst release associated with this type of system [10]. Moreover, the efficacy of the system is dependent on the presence of enough liquid in the stomach, requiring frequent drinking of water [11]. In order to overcome the drawbacks mentioned above, floating beads have been developed via many techniques, including solvent evaporation and the incorporation of a gas-forming agent (such as CaCO3) or porous structural elements into the system [5]. In this study, poly(methacrylic acid-co-methyl methacrylate) or Eudragit® L (referred to as EL) was used as a drug carrier in the form of spherical EL beads. The EL beads are a multiple-unit system, which may be more beneficial than single-unit systems by circumventing all-or-none emptying from the stomach during housekeeper waves.
This study aimed to fabricate porous EL beads containing metronidazole (MTZ), an antibiotic used for eradication of H. pylori [12]. The pores were produced using a wax removal technique after dispersing a wax, either cetyl alcohol or white petrolatum, into the EL beads during the bead formation process. The effects of various amounts of cetyl alcohol and white petrolatum, as well as curing time, on floating behavior and drug release in gastric fluid were also investigated.

Materials

MTZ, cetyl alcohol and white petrolatum were obtained from P.C. Drug Center Co., Ltd. (Bangkok, Thailand). Eudragit® L (EL) was received from JJ-Degussa Chemical (Thailand) Ltd. (Bangkok, Thailand). Acetone and dichloromethane were purchased from RCI Labscan Ltd. (Bangkok, Thailand). All other chemicals were of standard pharmaceutical grade and were used as received without further purification.

Floating bead preparation

The drug-loaded floating beads were prepared by dissolving a mixture of EL and MTZ (at a ratio of 4:1) in acetone. Different amounts of the waxes (i.e., cetyl alcohol or white petrolatum) were added to the mixture of EL and MTZ, and then homogeneously mixed with a magnetic stirrer. The dispersion containing wax was placed into a glass syringe and then extruded into dichloromethane. The beads formed were cured by gentle stirring for 5 or 20 min at room temperature, then filtered through filter paper and dried at 37°C for 12 h. The formulations of the drug-loaded floating beads are presented in Table 1.

Determination of bead size

The mean diameter of 20 dried beads was determined by optical microscopy (model BH-2, Olympus, Japan). The microscope eyepiece was fitted with a micrometer by which the size of the beads could be determined.

Morphology of beads

The surface and internal morphology of the bead samples were observed using a scanning electron microscope (SEM; model Maxim-2000, CamScan Analytical, England) under an accelerating voltage of 15 keV. The samples were fixed onto a SEM stub with double-sided adhesive tape and then coated in a vacuum with a thin gold layer before investigation. To study the internal structure of the beads, the beads were cut with a razor blade before being fixed onto the SEM stub.

Floating properties of the beads

The floating properties of the beads, such as floating time and time-to-float, were monitored by placing the bead samples (n = 20) into an Erlenmeyer flask filled with 50 mL of simulated gastric fluid (SGF, pH 1.2).

Drug loading and drug encapsulation efficiency

The drug loading in the EL beads was determined by weighing 35 mg of the beads and then dissolving them in 100 mL phosphate buffer solution (pH 7.4). The MTZ content in the beads was analyzed using a UV-visible spectrophotometer (model U-2000, Hitachi, Japan) at a maximum wavelength of 277 nm (n = 3). The percentage of drug loading was calculated using Equation (1):

Drug loading (%) = (Total amount of drug in beads / Weight of beads taken) × 100    (1)

The drug encapsulation efficiency of the EL beads is defined here as the percentage of determined drug loading relative to the nominal (theoretical) loading. The percentage of drug encapsulation efficiency was calculated using Equation (2):

Encapsulation efficiency (%) = (Determined drug loading / Theoretical drug loading) × 100    (2)
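Equations (1) and (2) are straightforward to apply. As a numerical illustration (the 3.5 mg figure is invented; the 20% theoretical loading follows from the 4:1 EL:MTZ ratio):

```python
def drug_loading_pct(drug_mass_mg, bead_mass_mg):
    """Equation (1): per cent drug loading."""
    return drug_mass_mg / bead_mass_mg * 100.0

def encapsulation_efficiency_pct(measured_loading, theoretical_loading):
    """Equation (2): measured loading relative to the nominal loading."""
    return measured_loading / theoretical_loading * 100.0

# Hypothetical example: 35 mg of beads found to contain 3.5 mg MTZ;
# the 4:1 EL:MTZ ratio gives a theoretical loading of 20%.
loading = drug_loading_pct(3.5, 35.0)              # 10.0 %
ee = encapsulation_efficiency_pct(loading, 20.0)   # 50.0 %
print(f"drug loading = {loading:.1f}%, encapsulation efficiency = {ee:.1f}%")
```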
In vitro drug release studies

MTZ release from the different formulations of the beads was investigated using a USP dissolution apparatus I (Erweka, Germany) equipped with baskets, which were operated at a speed of 100 rpm. Nine hundred milliliters of SGF (pH 1.2), as the dissolution medium, was placed into the glass vessel, the apparatus was assembled, and the dissolution medium was equilibrated to 37 ± 0.5°C. Test fluid (5 mL) was taken at various time intervals, i.e., 15, 30, 60, 90, 120, 150, 180, 210, 240, 300, 360, 480 and 600 min. The amount of MTZ released was then analyzed using a UV-visible spectrophotometer at 277 nm. Each in vitro release study was conducted in triplicate.

Drug release kinetics

The kinetics of drug release were computed by fitting the dissolution curve to standard empirical equations, that is, the Korsmeyer-Peppas, Higuchi, zero-order and first-order kinetics equations [14,15]. The Korsmeyer-Peppas model is

Mt/M∞ = k tⁿ,

where Mt/M∞ is the fraction of drug released, k is a constant incorporating structural and geometric characteristics of the dosage form, and n is the diffusional exponent. The equation was treated logarithmically to determine the value of the release exponent, n [14-16].
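A sketch of this fitting procedure, linearizing each model and comparing R² values (the time points echo the sampling schedule above, but the release fractions are invented):

```python
import numpy as np

def fit_release_models(t, frac):
    """Linear least-squares fits of a dissolution curve (fraction
    released vs time) to the four standard models, returning R^2 each:
    zero order: F = k t; first order: ln(1 - F) = -k t;
    Higuchi: F = k sqrt(t); Korsmeyer-Peppas: ln F = ln k + n ln t."""
    def r2(y, yhat):
        return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

    results = {}
    # Zero order: F against t (fit through the origin).
    k = np.sum(frac * t) / np.sum(t * t)
    results["zero-order"] = r2(frac, k * t)
    # First order: ln(1 - F) against t.
    y = np.log(1.0 - frac)
    k = np.sum(y * t) / np.sum(t * t)
    results["first-order"] = r2(y, k * t)
    # Higuchi: F against sqrt(t).
    s = np.sqrt(t)
    k = np.sum(frac * s) / np.sum(s * s)
    results["Higuchi"] = r2(frac, k * s)
    # Korsmeyer-Peppas: log-log linear fit yields n and ln k.
    n, lnk = np.polyfit(np.log(t), np.log(frac), 1)
    results["Korsmeyer-Peppas"] = r2(np.log(frac), n * np.log(t) + lnk)
    results["n (release exponent)"] = n
    return results

t = np.array([15, 30, 60, 120, 240, 480], dtype=float)   # minutes
frac = np.array([0.10, 0.18, 0.30, 0.48, 0.72, 0.93])    # Mt / M-infinity
for name, val in fit_release_models(t, frac).items():
    print(f"{name}: {val:.3f}")
```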
Statistical analysis
Analysis of variance (ANOVA) and Levene's test for homogeneity of variance were performed using SPSS version 10.0 for Windows (SPSS Inc., USA). Post hoc testing (P < 0.05) of the multiple comparisons was performed with either the Scheffé or the Games-Howell test, depending on whether Levene's test was insignificant or significant, respectively [17].

Formation of porous EL beads
The EL beads containing MTZ and wax (either cetyl alcohol or white petrolatum) could be prepared using the solvent diffusion technique as described in previous studies [18]. The wax incorporated into the EL beads was removed by using dichloromethane as a displacement solvent. Dichloromethane was chosen because it is chemically inert in relation to the desired EL beads (a poor solvent for the EL) but a relatively good solvent for the wax under the contacting conditions, and is miscible with the dilution solvent. After wax removal, pores were created inside the EL bead structure. In the meantime, the EL beads solidified in the dichloromethane [1]. The curing time of the beads in dichloromethane therefore played a crucial role in the solidification process and porous structure formation. In this study, the effect of curing time, i.e., 5 and 20 min, on the properties of the EL beads was investigated.

Size of EL beads
The mean diameter of the drug-loaded EL beads was determined microscopically. The size of the beads ranged from 2.4 to 2.8 mm. The amounts of cetyl alcohol and white petrolatum did not significantly affect the mean size of the prepared beads. The results found here are consistent with a previous report [1]; the proportion of wax used had an insignificant influence on the mean diameter of the beads. During bead formation, a droplet of the solution of EL, MTZ and wax in acetone continuously grew at the needle tip until its mass reached a critical value, at which moment the droplet detached from the tip of the needle and fell into the dichloromethane. This suggests that the size of the obtained beads principally resulted from the diameter of the extruding needle used in the study [1].

Morphology of beads
The SEM images of the surface and internal structures of the EL beads and the EL beads containing 1% w/w cetyl alcohol or white petrolatum are presented in Fig. 1. All EL bead formulations showed somewhat spherical beads with a fairly smooth surface. The internal or cross-sectional structure of the beads demonstrated numerous micropores. This is because the acetone evaporated and diffused from the beads during bead formation in dichloromethane, which contributed to the porosity of the matrix beads, as discussed above. The pore size of the beads using the waxes, cetyl alcohol or white petrolatum, as pore formers was around 3-5 μm, greater than that of the beads containing no wax (around 2-3 μm). This is because the added waxes gradually dissolved from the beads into the displacement solvent (dichloromethane), resulting in larger internal pores compared with the formulations containing no wax. A higher concentration of wax also increased the porosity of the matrix beads (data not shown).

Floating properties of the beads
The MTZ-loaded EL beads with or without the different percentages of waxes instantaneously floated in SGF and remained floating for at least 24 h, as illustrated in Fig. 2 (photo images showing the floating EL beads). Incorporation of various amounts of cetyl alcohol or white petrolatum did not influence the floating behavior of the beads. Good in vitro floating behavior in SGF was observed in all formulations. The floating properties of the EL beads with or without wax may be attributed to the low apparent density of the porous structured beads, as confirmed by the SEM images. Even though the interior pore size of the EL beads containing wax was greater than that of the EL beads without wax, the fine porous structure generated by acetone diffusion and evaporation could also maintain the buoyancy of the beads as soon as they were immersed in a liquid medium [19]. Moreover, the beads could float for a long period (more than 24 h) because EL does not dissolve in an acidic medium; therefore, the porous structure of the beads remained intact [20].

Table 2 demonstrates the percentages of drug loading and drug encapsulation efficiency of the prepared EL beads at curing times of 5 and 20 min. MTZ loading in the beads ranged from 7.3% to 12.2% and encapsulation efficiency was between 29.0% and 48.7%. The type of wax, amount of wax and curing time had an insignificant influence on the drug loading and drug encapsulation efficiency. This may have been due to the insolubility of MTZ in both cetyl alcohol and white petrolatum. For this reason, the drug loading and encapsulation efficiency of this system did not hinge on the type and amount of the waxes used. On the other hand, MTZ is soluble in acetone and freely soluble in dichloromethane [21]; therefore, some amount of MTZ could have diffused out of the beads, resulting in a decrease in drug encapsulation.

In vitro drug release studies
The in vitro drug release study was performed in SGF in order to mimic gastric conditions and investigate the suitability of the beads as an intragastric floating drug delivery system. The in vitro drug release profiles of the EL matrix beads containing different percentages of cetyl alcohol are shown in Fig. 3. MTZ exhibited an initial burst of drug release, followed by a lag phase of slow release [22]. The initial burst of drug release has been attributed to the drug's tendency to move to the bead surface during the preparation or drying processes. From Fig. 3, it can be seen that drug release from the EL beads without wax was the lowest; the addition of wax during bead preparation and its subsequent removal from the beads could enhance the drug release. This is probably due to the formation of internal pores after wax removal from the beads, as indicated in the bead morphology results.
This allows the medium to diffuse into the beads faster, resulting in more rapid drug release [4]. Among the different wax-added beads, the drug release did not depend on the proportion of wax. The curing time of the beads in dichloromethane played a vital role in the drug release: the longer curing period (20 min) yielded stronger beads and, consequently, resulted in slower drug release [23]. Fig. 4 presents the percentage of drug release after 2 h from the different EL bead formulations. The drug release from the beads using cetyl alcohol as a pore former was faster than that from the beads using white petrolatum. This might be because cetyl alcohol dissolves in dichloromethane faster than white petrolatum, resulting in higher porosity and faster drug release [1].

Drug release kinetics
Dissolution data were processed using linear regression analysis to estimate the drug release mechanism or kinetics, testing the goodness of fit with the zero order, first order, Higuchi and Korsmeyer-Peppas release models. The correlation coefficient (R2) was chosen to define the approximation accuracy of each model (Table 3). Acceptable correlation was achieved when R2 values were equal to 0.970 or higher [24]. The correlation coefficients of the Korsmeyer-Peppas model, also known as the "power law" model, for the obtained release data of almost all formulations were greater than 0.970, as demonstrated in Table 3. Only F1 (5 min curing time) and F7 (5 min curing time) showed R2 values less than 0.970. The Korsmeyer-Peppas model has very often been used to describe drug release from several different pharmaceutical modified-release dosage forms. Several simultaneous processes are considered in this model, for example, diffusion of water into the beads, swelling of the beads as water enters, formation of gel, diffusion of drug out of the beads, and dissolution of the polymer matrix. In this model, the mechanism of drug release is characterized using the release exponent ("n" value). For a spherical particle, an "n" value of 0.85 corresponds to zero-order release kinetics (case II transport); 0.43 < n < 0.85 indicates an anomalous (non-Fickian) diffusion release model; n = 0.43 indicates Fickian diffusion; and n > 0.85 indicates super case II transport relaxational release [14]. The results revealed that most of the release profiles obeyed super case II transport relaxational release, since they fitted well with the Korsmeyer-Peppas model (R2 values in the range of 0.902-0.985 and exponent values (n) greater than 0.85) [25]. Super case II transport refers to drug release by two mechanisms, diffusion and relaxation of the polymer chains [26]. This might be because EL did not dissolve in the SGF; consequently, MTZ gradually diffused through the relaxed polymer layer. As for the approximation of the experimental results with the Higuchi model, the correlation coefficient ranged from 0.665 to 0.993. This model fitted the MTZ release data of a few formulations well, i.e., F1 (5 and 20 min curing time), F8 (20 min curing time) and F11 (5 min curing time), indicating that for these formulations the release of MTZ followed Higuchi release kinetics and diffusion was the dominant release mechanism. The formulations clearly did not follow the zero-order and first-order release models, because the regression values for all formulations did not show high linearity.
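The exponent thresholds for spherical particles quoted above can be written as a small helper function. This is a sketch of the classification rule cited from reference [14]; the tolerance used to treat a fitted exponent as equal to a boundary value is an assumption, since the source gives only exact thresholds.

```python
def release_mechanism_sphere(n: float, tol: float = 0.02) -> str:
    """Classify a Korsmeyer-Peppas release exponent for a spherical particle [14].
    `tol` is an assumed tolerance for matching the boundary values exactly."""
    if abs(n - 0.43) <= tol:
        return "Fickian diffusion"
    if abs(n - 0.85) <= tol:
        return "case II transport (zero-order release)"
    if 0.43 < n < 0.85:
        return "anomalous (non-Fickian) transport"
    if n > 0.85:
        return "super case II transport (diffusion plus polymer chain relaxation)"
    return "below the Fickian limit (n < 0.43)"

# Most formulations here had n > 0.85, i.e. super case II transport.
print(release_mechanism_sphere(1.02))
```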
Conclusion
The porous EL beads containing cetyl alcohol or white petrolatum were spherical in shape and floated in SGF for more than 24 h. The curing time in dichloromethane and the amount and type of wax played a vital role in the drug release. A short curing time and the presence of wax during bead preparation could enhance the drug release. Most of the drug release kinetics from the EL beads followed super case II kinetics. The results suggest that beads fabricated by the wax removal technique are promising for the development of a floating drug delivery system.
Oxygen production potential of trees in India

This study deals with the oxygen production potential of India, taking baseline data from ISFR 2019. The Indian forests have an oxygen production potential of 7896.14 million tonnes (mt), and the annual potential was 28.04 mt yr -1 for 2019. Considering the oxygen production potential of the top 10 tree species from forests and those outside forests, Shorea robusta (Sal) and Mangifera indica (Mango) ranked first, i.e. 657.87 and 214.39 mt respectively. The fast-growing agroforestry tree species exhibit a net oxygen production rate in the range of 1.03-34.15 tonnes ha -1 yr -1. Bamboo, being a fast-growing and high biomass-producing species, showed an oxygen production of 27.38 mt yr -1. Overall this provides huge scope for establishing oxyparks in India.

OXYGEN is one of the important elements necessary for the survival of every species on this planet. Forests and trees are the major source of oxygen and an important reservoir of carbon dioxide. They meet half of the oxygen demand, producing 26 billion tonnes per year, and are thus referred to as 'oxygen factories' 1. Among the different types of forest, tropical forests and savannas account for 34% and 26% of global photosynthesis, and the Amazon rainforests hold one-half of the world's tropical rainforests 2. Since 1990, the area of naturally regenerating forests has been decreasing due to deforestation, but the area of planted forests has increased by 123 million ha 3. However, the Indian scenario shows an increasing trend in terms of forest and tree cover (80.73 million hectares), which is 24.56% of the total geographical area of the country 4. A decrease in the number of trees/plants can result in a decrease in oxygen production 5. Therefore, in this study we estimate the oxygen production potential of India under the following sub-headings: (a) annual production potential of oxygen based on forest carbon, (b) oxygen production potential of Indian forests (state-wise), (c) top ten tree species of Indian forests and trees outside forests (TOF), (d) agroforestry tree species, (e) bamboo species.

The baseline data were collected from the India State of Forest Report (ISFR) by the Forest Survey of India (FSI), Dehradun 4,5, and net oxygen release was calculated from the organic carbon produced by trees or local plants 6-8 as

Net O2 release = carbon sequestered × (32/12).

(Note: 32 is the molecular weight of oxygen and 12 is the atomic weight of carbon.) Based on the wood density of different species according to FAO estimates (http://www.fao.org/3/w4095e/w4095e0c.htm), the mass of each species was calculated. In simple terms, wood density = biomass/volume, so biomass = volume × wood density. The biomass of trees obtained from volume in this way is the aboveground biomass. To calculate the belowground biomass, the aboveground biomass is multiplied by the IPCC-recommended universal conversion factor of 0.26. The total dry biomass is then multiplied by the carbon content (50% of wood is carbon) to obtain the carbon sequestration of woody species.

The results indicate that the net oxygen production potential was 28.04 million tonnes per year, of which aboveground oxygen production (25.37 mt yr -1) was greater than belowground oxygen production (2.67 mt yr -1) (Figure 1). The total oxygen production potential of Indian forests is 7896.14 million tonnes (mt).
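The conversion chain described above (stem volume → biomass → carbon → oxygen) can be summarized in a few lines of Python. This is a minimal sketch of the article's method as stated; the function name and the example volume and density are illustrative and are not values from the paper.

```python
# Constants taken from the text: IPCC root-to-shoot factor, carbon fraction
# of dry wood, and the O2:C mass ratio (32/12).
BELOWGROUND_FACTOR = 0.26   # belowground biomass = 0.26 * aboveground biomass
CARBON_FRACTION = 0.50      # 50% of dry wood biomass is carbon
O2_PER_C = 32.0 / 12.0      # tonnes of O2 released per tonne of carbon fixed

def oxygen_potential_tonnes(stem_volume_m3: float, wood_density_t_per_m3: float) -> float:
    """Net O2 production (tonnes) for a growing-stock volume and an FAO wood density."""
    aboveground = stem_volume_m3 * wood_density_t_per_m3     # biomass = volume * density
    total_biomass = aboveground * (1.0 + BELOWGROUND_FACTOR)  # add belowground share
    carbon = total_biomass * CARBON_FRACTION                  # carbon sequestered
    return carbon * O2_PER_C

# Hypothetical example: 1000 m3 of growing stock at a density of 0.7 t/m3.
print(f"{oxygen_potential_tonnes(1000.0, 0.7):.1f} t O2")  # -> 1176.0 t
```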
State-wise, Arunachal Pradesh (1151.40 mt) ranks first, followed by Madhya Pradesh (613.29 mt) and Jammu and Kashmir (582.13 mt), whereas the lowest oxygen production potentials are from the Union Territories of Daman and Diu (0.12 mt), Chandigarh (0.22 mt) and Puducherry (0.20 mt). The production potential of oxygen is linked to the greenery, growing season, stems per unit area, age, geographical area and forest cover of a state. Moreover, the areas of very dense forest (21,095 km2) and medium dense forest (30,557 km2) in Arunachal Pradesh are larger than in the other states 4. Therefore, if forest canopy is increased and sustained over a period, net carbon dioxide will be removed and more oxygen will be produced 9. Figure 2 shows the state-wise oxygen production potential in India.

The oxygen production potential of the top ten tree species from forests ranged from 63.48 mt (Picea smithiana) to 657.87 mt (Shorea robusta). For trees outside forests (TOF), Mangifera indica (214.39 mt) had the highest oxygen potential, followed by Azadirachta indica (154.46 mt), Borassus flabellifer and Madhuca latifolia (Table 1). M. indica (mango) is considered the 'King of fruits' and is commercially cultivated in the tropical regions of the world. A. indica (neem) is considered a versatile tree species, distributed throughout India; it is both naturally grown and cultivated on roadsides and field boundaries, and is associated with rituals.

Agroforestry is gaining importance for expanding greenery and increasing tree cover outside the forests. It is considered a 'low hanging fruit' 10 due to its various tangible and intangible benefits. Therefore, based on the net carbon sequestration rates reported by various researchers, the most prominent agroforestry tree species were chosen for calculating the oxygen production potential (Table 2) 11-21. In India, Populus deltoides and Eucalyptus tereticornis are widely cultivated due to their importance in pulp and paper production and sustainable wood supply. Both these fast-growing trees have a high oxygen production potential of around 33 tonnes ha -1 yr -1. The net oxygen production rate ranges from 1.04 to 34.15 tonnes ha -1 yr -1. Again, this depends on location, number of trees, diameter distribution, annual timber increment, tree health, age and management techniques.

Bamboo is one of the fast-growing multipurpose species widely adapted to different climatic conditions, comprising 125 indigenous species and 11 exotic species 21. It also releases 35% more oxygen than an equivalent volume of other trees 22. Nath et al. 23 compiled extensive published information and reported an average bamboo biomass of 124 tonnes ha -1 (range 60-242 tonnes ha -1). They also found that mean carbon storage and sequestration rates ranged from 30 to 121 Mg ha -1 and 6 to 16 Mg ha -1 yr -1 respectively. This highlights that bamboo has a huge potential to capture CO2 and produce more oxygen than other tree species. It is considered a major oxygen source, and an 'Oxygen Park' of bamboo has been established at Tamil Nadu Agricultural University (TNAU), Coimbatore. A fully grown bamboo generates 300 kg of oxygen per person every year 24. The calculated oxygen production potential of bamboo was 178.61 mt from Reserve Forests and TOF combined in India, whereas the oxygen production potential per year was 53.36 mt from Reserve Forests and 1.39 mt from TOF respectively.
However, the annual production of oxygen from bamboo was 27.38 mt yr -1 (Table 3). Holistically, the emerging oxygen crisis and increasing CO2 concentration are a common phenomenon all over the world. In order to mitigate this, the focus must shift to increasing the proportion of urban vegetation coverage. Moreover, since the initiation of the Millennium Development Goals, several efforts have also been made to quantify the services provided by tree species.
The Role of Endothelial Progenitor Cells in Atherosclerosis and Impact of Anti-Lipemic Treatments on Endothelial Repair

Cardiovascular complications are associated with advanced atherosclerosis. Although atherosclerosis is still regarded as an incurable disease, at least in its more advanced stages, the discovery of endothelial progenitor cells (EPCs), with their ability to replace old and injured cells and differentiate into healthy and functional mature endothelial cells, has challenged this view of atherosclerosis as an incurable disease, and merged traditional theories of atherosclerosis pathogenesis with evolving concepts of vascular biology. EPC alterations are involved in the pathogenesis of vascular abnormalities in atherosclerosis, but many questions remain unanswered. Many currently available drugs that impact cardiovascular morbidity and mortality have shown a positive effect on EPC biology. This review examines the role of endothelial progenitor cells in atherosclerosis development, and the impact standard antilipemic drugs, including statins, fibrates, and ezetimibe, as well as more novel treatments such as proprotein convertase subtilisin/kexin type 9 (PCSK9) modulating agents and angiopoietin-like protein 3 (Angptl3) inhibitors, have on EPC biology.

Introduction
Atherosclerosis is a vascular disease caused by the build-up of plaques in the innermost layers of arteries, leading to arterial wall thickening and hardening, and narrowing of the arterial lumina [1]. Non-modifiable risk factors of atherosclerosis include advanced age and male sex, while modifiable risk factors, including arterial hypertension, diabetes, obesity, physical inactivity, smoking, and hypercholesterolemia, are globally addressed in cardiovascular prevention programs. Other important risk factors, including chronic kidney disease, familial hypercholesterolemia, various endocrine disorders, or a previous history of cardiovascular events, may require a more specific intervention [2,3]. The initial event in the pathogenesis of atherosclerosis is endothelial injury and dysfunction, followed by mononuclear cell adhesion and migration into the arterial subendothelial space [1,4]. Furthermore, oxidative stress and insulin resistance stimulate the overproduction of proinflammatory cytokines and other inflammatory mediators, resulting in a state of virtually permanent low-grade inflammation. Smooth muscle cells migrate to the arterial intima, differentiate into fibroblasts, and produce matrix molecules like elastin and collagen, eventually leading to plaque growth and formation of the fibrous caps. Fibrous caps may rupture, exposing the underlying, extremely thrombogenic core. Such events prompt further thrombus formation and the release of more inflammatory mediators, resulting in arterial stenosis or occlusion [1]. This leads to end organ damage and, depending on the anatomic site of the injured vessel, presents as myocardial infarction, ischemic stroke, or critical limb ischemia [5-7]. In general, atherosclerosis is still viewed as an irreversible process, strongly linked to aging. Advanced atherosclerotic lesions both in humans and animals consist of necrosis, calcification, and fibrosis, making lesion regression and possible dissolution unlikely [8], despite various conservative and invasive treatment options. The discovery of endothelial progenitor cells (EPCs) at the end of the twentieth century provided new insights into the pathophysiology of atherosclerosis and offered new prospects in treatment.
Due to the unique characteristics of these cells, the idea emerged that atherosclerosis regression was possible. These myeloid-derived cells are capable of virtually endless division and differentiation into healthy and functional endothelial cells at the site of vessel injury, repairing the vessel wall and promoting neovascularization [9,10]. This review focuses on the role of endothelial progenitor cells in atherosclerosis, and the impact standard as well as novel lipid-lowering treatments have on EPC-related vascular repair.

Endothelium Biology and Function
The endothelium was once seen as a simple barrier between the blood and the vascular wall to prevent blood extravasation. It is now perceived as a metabolically active tissue important for healthy vessel function. The human endothelium exceeds the size of many organs, with a surface area of more than 800 m2 and a weight of approximately 1500 g. More than 250 biologically active substances are produced and secreted by endothelial cells, some responsible for vascular tone regulation, cell adhesion, thromboresistance, smooth muscle cell proliferation, and inflammation. In addition, anticoagulant, antiplatelet and fibrinolytic factors are synthesized in these cells. Thus, the endothelium is a large endocrine and paracrine organ interplaying with virtually all other tissues and organs [11,12]. Vasoconstriction and vasorelaxation are important features of the endothelium, responsible for optimal blood supply to end organs and tissues, maintaining cell respiration and other metabolic demands. The endothelium itself produces vasoactive molecules affecting vascular tone locally, interplaying with circulating vasoactive mediators [13,14]. One of the most important vasoactive substances released by the endothelium is nitric oxide (NO). NO was first identified and named after its vasorelaxant properties as endothelium-derived relaxing factor. It is synthesized in endothelial cells from L-arginine in the presence of endothelial NO synthase and several cofactors. Once released, NO acts locally by penetrating smooth muscle cells in the vascular wall, leading to guanylate cyclase activation and cyclic guanosine monophosphate (cGMP)-mediated vasodilatation [15]. NO also inhibits the synthesis of vascular cell adhesion molecule-1 (VCAM-1) and monocyte chemoattractant protein-1 (MCP-1), resulting in decreased expression of nuclear factor κB (NF-κB), whose action is further impeded by oxidative phosphorylation in mitochondria and S-nitrosylation of cysteine residues of NF-κB. NF-κB target genes in endothelial cells include VCAM-1, intercellular adhesion molecule-1 and E-selectin. These substances are responsible for extravasation and are linked to inflammation, thus aggravating endothelial dysfunction [16]. Guanylate cyclase activation leading to vasodilation can also be achieved by other distinct signaling molecules like bradykinin, adenosine, vascular endothelial growth factor (during hypoxia), serotonin (during platelet aggregation), prostaglandins, C-type natriuretic peptide, and gaseous molecules like carbon monoxide (CO) and hydrogen sulfide (H2S) [16,17]. Other important molecules involved in vasomotion that are synthesized in endothelial cells are endothelin, vasoconstricting prostanoids, prostacyclin, and angiotensin convertase at the endothelial cell surface. Prostacyclin, in addition, blocks platelet thrombus formation and hinders clotting [16,18].
Vasomotion is also mediated by intercellular potassium ion accumulation due to endothelium-derived hyperpolarizing factor, which changes vascular electric conductivity and enhances the propagation of electrical signals along blood vessels, affecting gap junctions and smooth muscle cells in the arterial wall [14]. The anticoagulant properties of the endothelium are linked to the synthesis of thrombomodulin and heparan sulfate proteoglycan, and the secretion of tissue factor pathway inhibitor. All these substances interfere with the coagulation cascade, inhibiting thrombus formation and/or enhancing fibrinolysis [19].

Pathophysiology of Atherosclerosis
The hallmark of atherosclerosis is the progressive build-up of atherosclerotic plaques. Plaques develop slowly through a complex series of cellular events within the arterial wall, influenced by a variety of local vascular and circulating factors. Plaques typically develop in the arterial intima, although atherosclerotic changes may involve all three layers of the arterial wall [1,20]. There are two basic theories of atherosclerosis development, the response-to-injury theory and the response-to-retention theory. Both theories see endothelial injury as an early event preceding other developments [1]. In the response-to-injury hypothesis, the initial injury due to mechanical or chemical factors leads to endothelial dysfunction and allows inflammatory cells to penetrate the arterial wall, followed by fat accumulation and proliferation of smooth muscle cells [21]. Not only are major arteries involved; the vasa vasorum, the blood vessels supplying oxygen and nutrients to the walls of the mother arteries, may also be affected [21]. In the response-to-retention theory, lipoprotein retention in the vessels' extracellular matrix, in response to predisposing mechanical strain and cytokines, is the triggering event that results in atherogenesis [22]. Following endothelial injury, fat accumulates in deeper regions of the subendothelial layer, forming lipid pools, rich in proteoglycans and hyaluronan, covered with a layer of vascular smooth muscle cells [1,23]. Notably, plaques develop in regions with non-laminar blood flow, such as artery arches, branches, or curvatures. Blood flow alterations are detected by the endothelial cells' blood-flow-sensing organelles, primary cilia, leading to a myriad of functional and structural alterations. In these microlocations the endothelial cell lining exhibits a different, cuboidal morphology compared to regions with laminar blood flow, where the cells are commonly aligned in the direction of blood flow [24]. On the molecular level, this phenotype of endothelial cells shows epigenetic changes due to altered DNA methylation, with activated pro-inflammatory NF-κB pathways, suppressed protective factors like Kruppel-like factor 4 (KLF4), and impaired production of NO [24,25]. At the same time, the endothelial barrier is more permeable to lipoproteins, making it vulnerable to low density lipoprotein (LDL) accumulation and cell migration [1]. Endothelial cells in nonaffected regions express a more anti-inflammatory and anti-thrombotic phenotype and are organized differently [26]. Inflammatory cells and fatty deposits interplay in a vicious way, with fat causing further mononuclear cell migration and activation into monocyte-derived macrophages in the vessels' intima, while pro-inflammatory cells aggravate endothelial dysfunction, allowing easier fat accumulation [1].
The most important contributing factor is oxidized lipoprotein particles in the vessel wall, originating from LDL particles in blood plasma. A complex set of biochemical reactions regulates the oxidation of LDL, involving enzymes such as lipoprotein-associated phospholipase A2 (Lp-PLA2) and free radicals produced through oxidative stress in the endothelium. The entire process is accelerated by low levels of high-density lipoprotein (HDL), which removes excess cholesterol from peripheral tissues and carries it back to the liver. Activated macrophages secrete pro-inflammatory mediators like tumor necrosis factor alpha (TNFα), interleukin-1 (IL-1), and interleukin-6 (IL-6) [20,27]. At this stage, endothelial cells secrete less NO and more vasoconstrictive cytokines like endothelin 1, and molecules like VCAM, ICAM, and monocyte chemotactic protein 1 (MCP-1). These vasoconstrictive and adhesive molecules further enhance the adherence and migration of monocytes to the injured vessel wall [1,20]. Monocyte-derived macrophages then ingest cholesterol, resulting in foam cell formation. As the process progresses, these cells create fatty streaks, visible on the artery wall in early atherosclerosis. This process is still reversible, as fatty streaks can disappear under certain conditions. Foam cells may become apoptotic, enhancing inflammation locally [28]. Another event is the migration of smooth muscle cells from the artery's muscle layer into the vascular intima due to cytokines secreted by the damaged endothelial lining and the foam cells present. Smooth muscle cells further proliferate and ingest lipids in an already pro-inflammatory area, where locally acting growth factors, oxidized low density lipoprotein, and homocysteine contribute to plaque build-up [1,29]. Furthermore, smooth muscle cells can transform into chondrocytes, osteocytes, adipocytes, or macrophage-foam cells, depending on the local environment [30]. The bulk of these lesions is made of excess fat, collagen, and elastin. The fibrous cap contains smooth muscle cells, providing stability to the whole structure. As more fat accumulates, the plaque size increases, progressively changing the vessel's architecture. No apparent narrowing is present at this stage, but blood flow may become more turbulent, increasing shear stress on the vessel wall, causing further endothelial microlesions, and perpetuating the entire process of atherogenesis [20,24]. Later, the proliferation rates of smooth muscle cells slow down, possibly due to increased expression of cell cycle inhibitors like p16 and p21, and an impaired response to growth factors. In addition, their phenotype changes from a contractile to a more synthetic one. Dysregulated production of pro-inflammatory cytokines, growth factors, and extracellular matrix modifiers occurs, accelerating the process of vascular remodeling [31]. These mature smooth muscle cells release pro-inflammatory cytokines and matrix metalloproteinases affecting the collagen fraction of the plaque, making the plaque more vulnerable to rupture. Pro-inflammatory cytokines are able to change the phenotype of activated macrophages, promoting the M1 and M4 subtypes, which are linked to plaque instability [32,33]. Advanced atherosclerotic plaques may undergo necrotic changes in their core and fibrinoid tissue. As cells die, calcifications may occur. The exact mechanisms of plaque calcification are still unclear [34].
Stenosis due to plaque enlargement is a late event, which may even never occur, or it may be clinically asymptomatic if the blood flow is not significantly compromised. Dramatic and life-threatening complications like infarction are linked to plaque erosion or rupture, sometimes without previous symptomatic stenosis. This triggers injured endothelial cells to secrete excess thrombotic factors (e.g., von Willebrand factor [VWF] and thromboxane A2 [TXA2]) and decreased amounts of antithrombotic factors (e.g., heparin), leading to clot formation and enlargement [35]. Complete vessel obstruction within a short time may occur, resulting in tissue ischemia and necrosis [36,37]. A schematic summary of atherosclerosis pathophysiology and its clinical correlates is shown in Figure 1.

Endothelial Repair
Exposure to various cardiovascular risk factors may result in functional and structural endothelial damage, ranging from delicate metabolic alterations in endothelial cells to cell loss by apoptosis. Damaged vessels lose their vasoactive ability due to impaired synthesis of vasoactive substances, but also because of increased rigidity due to structural changes of the vessel wall. Endothelial dysfunction may be the first step leading to more serious conditions like accelerated atherosclerosis and associated vascular complications [16,38]. Basically, vessel integrity can be restored if the inherent reparatory mechanisms are functional [9,10,39]. These mechanisms involve cell replication and replacement of unfunctional endothelial cells [40]. First, already existing mature endothelial cells may undergo mitotic processes, but because they are mostly terminally differentiated cells, their ability to proliferate is rather low [41,42]. The second mechanism leading to repair of the damaged endothelial lining is mediated through circulating EPCs. These immature cells have the capacity to proliferate and differentiate into mature endothelial cells. They originate from the bone marrow and some other tissues like the spleen, liver, or fat. They circulate in the blood stream and may adhere to the damaged endothelium. Once they are embedded in the damaged endothelium, they proliferate and differentiate into mature, functional and structural endothelial cells [9,10,41]. Circulating EPCs are in various stages of differentiation. There are at least two distinct types of cells: the early (less differentiated) and late (better differentiated) EPCs [9]. Distinct features of early EPCs include a spindle-shaped cell phenotype, their colony forming units (CFU), and the presence of several surface markers (CD31, CD34, CD45, CD133, Tie2). Late EPCs in addition express vascular endothelial growth factor receptor-2 (VEGF-R2), vascular endothelial (VE)-cadherin and VWF, and have a cobblestone shape. Late EPCs can produce nitric oxide. The maturation level of these cells influences their role in vascular repair. The role of less differentiated cells is mostly restricted to their paracrine function, providing growth factors, while more differentiated cells are able to provide the more mature cells needed for actual vascular cellular repair [9,10,43]. The entire process of vascular repair mediated through EPCs consists of several distinct events: progenitor cell mobilization from their organ of origin, circulation in the blood stream, homing to the site of damaged endothelium, and finally, further differentiation and cell maturation [10].
Various substances are involved in the mobilization of progenitor cells from their place of origin into the blood stream, such as NO, as well as growth factors and cytokines including vascular endothelial growth factor (VEGF) and stromal-cell-derived factor-1α (SDF-1α), erythropoietin, thyroid hormones and estrogens; impaired glucose metabolism also affects mobilization [44-46]. Once in the blood stream, EPCs migrate towards damaged endothelial regions, where they adhere to the damaged vessel surface. This process is significantly enhanced by certain molecules like stromal derived factor (SDF)-1α. The concentration of SDF-1α is upregulated in the damaged endothelium due to tissue hypoxia. SDF-1α interacts with the CXC chemokine receptor 4 (CXCR4) on the endothelial surface [46]. After being embedded in the injured endothelium, progenitor cells proliferate and mature and thus physically replace damaged mature endothelial cells. In addition, they synthesize and secrete vasculogenic cytokines and growth factors, enhancing the replication of the mature endothelial cells already present [47].

Endothelial Repair in Patients with Lipid Disorders
Decreased EPC numbers and impaired EPC replicatory and migratory properties are seen with several cardiovascular risk factors, including diabetes mellitus, arterial hypertension, lipid disorders, smoking, physical inactivity, and unhealthy eating habits [18,47]. Hypercholesterolemia is linked to mechanical endothelial injury and dysfunction. In the context of endothelial repair, there is accumulating evidence that hypercholesterolemia may reduce the availability and function of EPCs, thus limiting vascular repair [50]. Delivery of cholesterol-rich lipoproteins to the endothelium is an important process in the pathogenesis of atherosclerosis. It is influenced by lipoprotein type and concentration, and by the integrity of the endothelium. Importantly, LDL cholesterol may induce vascular endothelial cell apoptosis, due to its increased toxicity after being oxidized to oxidized LDL (ox-LDL) within macrophages, and may change the permeability of the endothelial barrier by inducing inflammation. A vicious cycle involving the interplay of LDL arterial wall retention, inflammation, smooth muscle cell proliferation, macrophage activation, and coagulation irregularities is at work [1,20,28]. Ox-LDL has been shown to impair the proliferation, migration, and adhesion capacity of EPCs. This has been explained by the activation of the transcriptional regulator NF-κB [53]. Interestingly, sexual dimorphism of ox-LDL concentrations has been reported [54], affecting EPCs differently [55], at least in mice. In contrast to LDL cholesterol's deleterious effects in the pathogenesis of atherosclerosis, high density lipoprotein (HDL) cholesterol has been shown to be protective [56]. There is far less data about its impact on EPC health. So far, HDL has been shown to improve the viability of early EPCs, and to a lesser extent their functionality, in terms of adhesion properties [57]. Dysfunctional HDL does not benefit EPC biology [58,59]. Triglycerides have raised less interest in this field in comparison to hypercholesterolemia. However, elevated triglycerides contribute to overall cardiovascular risk. Hypertriglyceridemia has also been shown to negatively affect EPC biology. Hypertriglyceridemia leads to endothelial dysfunction and injury by interfering with SDF-1/CXCR-4 binding and NO pathways, thus affecting the mobilization, migration, homing, and vasculogenic properties of EPCs [60,61].
Changes in EPC biology in dyslipidemic states may help us better understand how well-established and newer therapeutic strategies can prevent, delay and possibly reverse atherosclerosis. For that reason, some contemporary trials have correlated treatment effects on EPC biology with endothelial function tests. To date, there is a myriad of invasive and non-invasive methods to assess endothelial function, in addition to cell cultures and specific endothelial biomarkers. Some of them are used to evaluate vascular tone modulation and tissue perfusion, others to assess dynamic permeability, or anticoagulation and fibrinolysis [62]. Flow-mediated dilation (FMD) is an imaging technique used for endothelial vasomotion assessment. Other methods used to assess vascular dilation include laser-based techniques, venous occlusion plethysmography and finger plethysmography [62-65]. In addition to endothelium-dependent vasodilation, arterial stiffness (compliance) determination and pulse wave analysis can provide more insight into vessel health, but they are not recommended for routine clinical use [66]. Furthermore, the anticoagulant and fibrinolytic properties of the endothelium can be determined. Tissue plasminogen activator inhibitor, factor X and thrombin blood levels in basal circumstances, after stimulation with various substances (e.g., substance P for tissue plasminogen activator inhibitor) or in cell cultures can give valuable information on endothelial status [62,67]. Biomarkers of endothelial function include different molecules like the angiopoietins (angiopoietin 1 and 2), selectins, adhesion molecules like intercellular adhesion molecule-1 (ICAM-1), vascular cell adhesion molecule-1 (VCAM-1) and platelet endothelial cell adhesion molecule-1 (PECAM-1), and growth factors like VEGF and its soluble receptor VEGFR-1 and platelet derived growth factor (PDGF), which together affect angioneogenesis, inflammation and endothelial permeability. Additionally, endothelial breakdown products such as syndecan-1, chondroitin sulfate, dermatan sulfate, serum hyaluronic acid, and heparan sulfate are markers of endothelial injury and dysfunction [62]. Similarly, increased counts of circulating endothelial cells (CECs) originating from the mature endothelium have been observed in people with cardiovascular risk factors and acute myocardial infarction. Thus, CECs can be considered a marker of endothelial injury and dysfunction [68,69]. Furthermore, small membranous particles released from endothelial cells, named endothelial microvesicles, correlate with endothelial dysfunction in different states like infections, cancer, and autoimmune diseases [70].

Standard Treatment for Lipid Disorders: Statins, Ezetimibe and Fibrates
Statins have been used for decades in the treatment of hypercholesterolemia and have become the fundamental therapy for atherosclerosis and its complications. Statins improve outcomes in primary and secondary cardiovascular prevention and have a central place in modern guidelines for atherosclerosis and cardiovascular disease treatment [71-73]. Their primary mode of action is inhibition of 3-hydroxy-3-methylglutaryl-coenzyme A (HMG-CoA) reductase, the key enzyme responsible for cholesterol synthesis, but their beneficial effects reach beyond cholesterol lowering.
Even before any notable changes in lipid concentrations are observed, a number of pleiotropic effects, including anti-inflammatory, anti-oxidative, anti-thrombotic and profibrinolytic actions, increased endothelial NO production, and antiapoptotic actions, alleviate atherosclerotic progression [74,75]. In addition, favorable effects on EPC biology have been reported. Statin therapy has been associated with increased circulating EPCs due to enhanced mobilization, differentiation, and increased longevity, as well as enhanced homing to sites of vascular injury and re-endothelialization via enhanced expression of adhesion molecules on the EPC cell surface [75]. Substance-specific effects on EPC biology are shown in Table 1. Clinical correlates are shown where appropriate. Statins were shown to increase the endothelial progenitor cell blood count as early as 1 week after treatment initiation, reaching a plateau within 3 to 4 weeks. This effect and further differentiation are mediated through NO synthesis pathways, which increase CXCR4 expression on the surface of circulating EPCs [98]. In addition, statins decrease the levels of the micro non-coding RNAs miR-221 and miR-222, leading to up-regulated EPC differentiation and mobilization [99]. Statins also interact with the phosphoinositide 3-kinase (PI3K)/protein kinase B (Akt)/mammalian target of rapamycin (mTOR) pathway, resulting in increased VEGF levels affecting angiogenesis directly, and indirectly through increased NO levels [74,75]. Finally, statins modulate oxidative stress by diminishing ox-LDL production in vessel walls, thus enhancing EPC numbers, mobilization, function and the ability to migrate and integrate into the vasculature [75]. Studies have shown favorable results for different statins, like simvastatin, pravastatin, pitavastatin, atorvastatin and rosuvastatin, indicating the effect on EPCs could be a class effect [75,91,93,95,97]. Interestingly, even herbal remedies containing lovastatin, later produced as a first generation statin, positively impact EPC biology [40]. The increased EPC count induced by statin therapy remains stable [100]. The favorable statin-mediated effects on EPC count may be dose-dependent, at least for atorvastatin. Higher EPC counts were observed in patients receiving 80 mg of atorvastatin as compared to lower doses, in various populations. In addition, atorvastatin reloading in patients receiving moderate dose statin therapy and undergoing percutaneous coronary intervention triggered an acute increase in EPC count and benefited their functionality, while decreasing inflammatory markers like high sensitivity C reactive protein (hCRP) [81,101]. Furthermore, a decreased 30-day adverse event rate in NSTEMI and unstable angina patients was observed [102]. In some studies there was a favourable effect on FMD and endothelial biomarkers [79-84,90,92,96,97]. It must be noted that statin side effects may hamper their use and limit potential therapeutic benefits in patients with atherosclerosis. The most important side effects include myopathy, liver damage, drug-induced diabetes, and neurological disturbances, and may lead to therapy discontinuation. Caution should be used in elderly people or in patients with chronic kidney disease, since the pharmacodynamic and pharmacokinetic properties of the drugs may differ from those in the general population [103].
Ezetimibe is usually prescribed when cholesterol levels are not well controlled with statin monotherapy, or as monotherapy in specific patients when statins are contraindicated or not well tolerated. Ezetimibe impairs cholesterol absorption in the gut, targeting the Niemann-Pick C1-Like1 (NPC1L1) sterol transporter, which is responsible for the intestinal absorption of cholesterol and phytosterols [104]. Data suggest that ezetimibe does not benefit EPC biology. Ezetimibe even showed negative effects on the endothelium, increasing circulating endothelial microparticles [105], indicating enhanced apoptosis of endothelial cells. In addition, when used in combination with simvastatin, no further improvement in EPC count was found in patients with coronary heart disease, suggesting that the treatment benefits are not related to EPC biology [93]. Fibrates are another class of widely used antilipemic drugs. They stimulate peroxisome proliferator activated receptor (PPAR) alpha, affecting the expression of genes involved in triglyceride and cholesterol metabolism. They have been shown to reduce triglyceride levels and, to a lesser extent, LDL levels, while increasing HDL concentration. A single published study showed beneficial effects of fenofibrate on EPCs in cell cultures obtained from patients with chronic heart failure [106]. In summary, evidence suggests that statins have a pronounced beneficial effect on EPC count and function, which is probably independent of their lipid-lowering effect; ezetimibe is ineffective, and data on fibrates are still limited.

PCSK9 Modulating Agents
PCSK9 acts via a canonical pathway to reduce LDL-receptor (LDL-R) recycling in the liver, thus decreasing LDL-R bioavailability. PCSK9 binds to LDL-Rs on the cell surface, resulting in receptor degradation and lowering the number of disposable LDL-Rs. Consequently, circulating LDL cannot be properly removed from the blood, and LDL concentrations rise. PCSK9 is also expressed in other tissues and organs, like the intestine, kidneys, and blood vessels. PCSK9 of kidney and blood vessel origin is secreted into the blood and downregulates LDL-R levels on other cells, including hepatocytes and macrophages, decreasing LDL clearance. Furthermore, it seems that PCSK9 enhances the migratory capacity of monocytes and inhibits reverse cholesterol transport in macrophages, favoring foam cell formation in atherosclerotic plaques. PCSK9 expressed in smooth muscle cells of vessel walls was shown to promote inflammation and contribute to endothelial cell apoptosis through the Bcl-2/Bax-Caspase9-Caspase3 mitochondrial pathway and the p38/Jun N-terminal kinases/mitogen-activated protein kinases (p38/JNK/MAPK) signaling pathway, disrupting endothelial integrity and resulting in endothelial dysfunction and atherosclerosis development [108]. PCSK9 inhibition is highly effective in reducing LDL cholesterol levels, with a decrease of 60% from baseline seen within days. PCSK9 inhibitors reduce plaque size, as measured by intravascular ultrasound and serial magnetic resonance imaging [110,111]. Furthermore, a significant reduction in cardiovascular risk was demonstrated for alirocumab and evolocumab in large international blinded randomized trials [112,113]. Considering EPC biology, a recent cross-sectional clinical study in humans indicated beneficial effects. Namely, endogenous PCSK9 levels were inversely correlated with circulating EPC count in patients with type 2 diabetes mellitus on statin therapy, as well as in the entire cohort of patients.
No correlation was found in patients not taking statins [114]. Furthermore, a small clinical study demonstrated favorable effects of the PCSK9 inhibitors alirocumab and evolocumab on EPC biology in patients with coronary artery disease. There were a significantly higher EPC count and proliferative capacity in patients treated with PCSK9 inhibitors, detected as early as one month after therapy initiation. Increased VEGF levels accompanied the effect of the PCSK9 inhibitors. The study was too small to examine the roles of evolocumab and alirocumab separately [115]. Possible side effects of alirocumab and evolocumab include flu-like symptoms and local injection site reactions, giving a more favourable safety profile in comparison to statin therapy [116]. The impact of inclisiran, a small interfering RNA (siRNA) molecule inhibiting the translation of PCSK9, on EPC biology has not been investigated so far.

Angiopoietin-Like Protein 3 Inhibitors
Angiopoietins are growth factors with a prominent role in embryonal and adult vasculogenesis. There are some molecules closely related to angiopoietin with a different impact on blood vessels [117-119]. Angiopoietin-like protein 3 (Angptl3), a protein secreted by the liver, has raised interest as a potential target for lipid lowering drugs, because loss-of-function mutations were shown to be protective in terms of lipid derangements, atherosclerosis, and cardiovascular risk. There are several gene variants; carriers of loss-of-function variants develop the phenotype of familial hypolipidemia, and those with complete Angptl3 deficiency have lower triglycerides and LDL cholesterol and raised HDL cholesterol, and are prone to longevity [120,121]. Considering lipid metabolism, Angptl3 enhances the cleavage of lipoprotein lipase (LPL) by proprotein convertases in target tissues, leading to LPL dissociation from the cell surface. In addition, Angptl3 inhibits endothelial lipase (EL) activity in vitro, increasing HDL catabolism. The mechanism of action involves a complex of Angptl3 with the related protein Angptl8, which promotes the Angptl3 effects [120,122,123]. Beyond lipid lowering, inhibition of Angptl3 was shown to improve endothelial function [124]. This effect may be mediated indirectly through improved lipid levels, but also directly by binding endothelial integrin αvβ3 and by stimulating Wnt/β-catenin signaling [12]. Evinacumab, a recombinant monoclonal Angptl3 antibody, was recently approved for the treatment of familial hypercholesterolemia. It is highly effective, leading to a reduction of LDL cholesterol by 50% in patients with refractory familial hypercholesterolemia, independent of the LDL receptor, with overall mild side effects like flu-like symptoms and injection site reactions [125]. Other Angptl3 inhibitors, including an antisense oligonucleotide (ASO), are at different stages of clinical trials [12]. A single study of Angptl3 inhibition and neoangiogenesis demonstrated that Angptl3 enhances cell-to-cell adhesion via integrin αvβ3 and the migration of endothelial cells [126]. This research was done in the early era of stem cell research, on human microvascular venous endothelial cells rather than EPCs, but activation of the same pathways was later confirmed to be important for improving EPC biology [127]. Recently, a hypothesis pointing to beneficial effects of Angptl3 on EPC biology was published, but it has not been confirmed [128]. In short, evidence regarding the impact of Angptl3 inhibition on EPCs is lacking, and future high-quality research is needed.
Plasma Apheresis
LDL apheresis acutely removes circulating LDL particles by extracorporeal filtration. It was first introduced in the 1970s and has been used for patients with familial hypercholesterolemia unresponsive to statin therapy, with and without documented atherosclerotic vascular disease [129,130]. There is still no consensus regarding the frequency of this procedure, with reported intervals ranging from once weekly to once in two months [131]. LDL apheresis lowers circulating PCSK9 and Lp(a) levels and upregulates LDL-Rs in tissues, enhancing statin sensitivity [132]. Patients with familial hypercholesterolemia treated with LDL apheresis and statins for over one year show coronary plaque area reduction and an increase in arterial luminal diameter [133]. A recent study demonstrated additional beneficial effects of LDL apheresis on blood lipids and EPC count in patients undergoing percutaneous coronary interventions for acute coronary syndrome. All patients were assigned to moderate- to high-intensity statin therapy with either atorvastatin ≥40 mg or rosuvastatin ≥20 mg, regardless of being in the study or control group [134]. There was an additional robust acute decline in LDL levels at hospital discharge in patients treated with apheresis, but no significant difference between groups was noted after 30 days. Interestingly, a more sustained mobilization of endothelial progenitor cells was noted up to three months after randomization, peaking 30 days after the coronary procedure. There were marginal changes in coronary artery architecture in terms of reduced non-culprit coronary plaque size in patients treated with LDL apheresis. This trial had a relatively small sample size and a short follow-up, and was not powered to assess clinical outcomes. However, the changes in non-culprit coronary plaque size were comparable to similar trials in patients with acute coronary syndrome [134]. In short, LDL apheresis results in fast and profound LDL reduction and increases EPC count. LDL apheresis may mobilize EPCs in a durable fashion, but the method is accompanied by adverse effects related to the procedure itself, like access-related complications (bleeding, fistula infection), device-related complications, hypotension and, rarely, arrhythmia [135].

Conclusions
Endothelial repair mediated by endothelial progenitor cells (EPCs) has raised substantial interest in the scientific community. Small clinical trials have shown the beneficial effect several standard and novel lipid-lowering treatments have on EPC biology. Statins are widely studied, and clinical data suggest that their pleiotropic, rather than their lipid-lowering, properties impact EPC count and function. At least for some statins this effect is dose-dependent. However, side effects and statin intolerance, and in some cases insufficient efficacy, may limit their benefits in real life, thus requiring add-on therapies. So far, ezetimibe has not shown any beneficial impact on EPC biology, and data for fibrates are still scarce. Considering newer anti-lipemic drugs and treatments like PCSK9 inhibitors, Angptl3 antibodies and LDL apheresis, the initial results are promising, but further research is needed to determine their mode of action and their effectiveness on EPC count and migratory and proliferative potential.

Author Contributions: V.A. and L.S.K.B. were equally involved in the acquisition, analysis, or interpretation of data, and in drafting the work and revising it.
Both authors have approved the submitted version and agree to be personally accountable for their own contributions and for ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated, resolved, and documented in the literature. All authors have read and agreed to the published version of the manuscript. Funding: The article processing charges will be covered by a grant of Amgen d.o.o. Croatia. No other specific funding was received for preparing this article.
An assessment of ventilator-associated pneumonias and risk factors identified in the Intensive Care Unit

Objectives: Ventilator-associated pneumonia (VAP) is a significant cause of hospital-related infections, one that must be prevented due to its high morbidity and mortality. The purpose of this study was to evaluate the incidence of and risk factors for VAP in patients in our intensive care units (ICUs).

Methods: This retrospective cohort study involved mechanically ventilated patients hospitalized for more than 48 hours. Patients were divided into two groups, those developing pneumonia (VAP(+)) and those not (VAP(-)).

Results: We evaluated 1560 patients in adult ICUs, 1152 (73.8%) of whom were mechanically ventilated. The MV use rate was 52%. VAP developed in 15.4% of patients. The VAP rate was calculated as 15.7/1000 ventilator days. Mean lengths of stay in the ICU for VAP(+) and VAP(-) patients were 26.7±16.3 and 18.1±12.7 days, respectively (p<0.001), and mean durations of MV use were 23.5±10.3 and 12.6±7.4 days (p<0.001). High APACHE II and Charlson co-morbidity index scores, extended length of hospitalization and MV time, a previous history of hospitalization and antibiotherapy, reintubation, enteral nutrition, chronic obstructive pulmonary disease, cerebrovascular disease, diabetes mellitus and organ failure were determined to be significant risk factors for VAP. The mortality rate in the VAP(+) group was 65.2%, with 23.6% being attributed to VAP.

Conclusion: VAPs are prominent nosocomial infections that can cause considerable morbidity and mortality in ICUs. Patient care procedures for the early diagnosis of patients at high risk of VAP and for the reduction of risk factors must be implemented by providing ICU personnel with training on VAP-related risk factors, and preventable risk factors must be reduced to a minimum.

INTRODUCTION
Intensive care units (ICUs) are life support units intended to care for patients requiring intensive care due to organ failure; they are equipped with advanced technology, and in them vital signs are monitored and treatment is administered. 1 The majority of patients monitored in these units receive mechanical ventilation (MV) support and invasive procedures such as central venous catheterization. However, patients develop a disposition to infections as a result of these procedures. 2 Ventilator-associated pneumonia (VAP) is the most common infection in intensive care patients, and can lead to prolongation of intensive care and an increased risk of mortality. 2 Compromise of patient defense mechanisms, colonization by pathogenic micro-organisms and the presence of micro-organisms with high virulence all occupy an important place in the pathogenesis of VAP. The purpose of this study was to determine the incidence of and risk factors for VAP in our ICUs.

METHODS
We received permission for the present study from the local ethics committee of Kanuni Education and Research Hospital, Turkey, and the study was performed retrospectively at the same hospital, which has a 605-bed capacity, including 46 adult ICU beds. Patients hospitalized in the ICU for longer than 48 hours and administered MV between January 1, 2011 and December 31, 2013 were included in the study. Our hospital contains four adult ICUs (Anesthesia and Reanimation, Surgical, Medical and Neurology). Due to nurse shortages, the nurse-patient ratio in our ICUs ranges between 1:3 and 1:4, and may even rise to 1:6 on some nights.
Patients' demographic and clinical characteristics were recorded onto study forms by examination of medical files, infection control committee surveillance data, ICU records, pharmacy records and processing data. The Acute Physiology and Chronic Health Evaluation (APACHE) II scores used were those calculated in the first 24 hours of hospitalization. 3 Charlson co-morbidity index scores were obtained by examining all patients' medical records. 4 Identification of microorganisms and testing for antimicrobial susceptibility were conducted using the Phoenix system (Becton Dickinson), the disk diffusion test, and classic methods. Patients' demographic and clinical characteristics (APACHE II score, Charlson co-morbidity index, length of hospitalization, treatments administered and invasive procedures performed) and prognoses were recorded. VAP was diagnosed on the basis of CDC criteria. 5 Patients were divided into two groups, those developing pneumonia (VAP(+)) and those not developing pneumonia (VAP(-)).

Statistical Analysis: Descriptive statistical analysis was performed for all parameters. The Kolmogorov-Smirnov test was used to assess the normality of variables. Data conforming to a normal distribution were analyzed using Student's t-test, and those not conforming were analyzed using the Mann-Whitney U test. Data obtained by measurements are given as mean ± standard deviation. Data obtained by counting are given as numbers (%); analyses were performed using the chi-square test. P<0.05 was regarded as significant.

RESULTS

MV was administered to 1152 (73.8%) of the 1560 patients with an ICU stay exceeding 48 hours. The MV use rate was 0.52. Two hundred fourteen VAP episodes were recorded. Mean APACHE II score in the VAP(+) patients was 21.5±5.4 and mean APACHE II score in the VAP(-) patients was 19.2±4.9. APACHE II score elevation was statistically significantly correlated with VAP development (p<0.001). Charlson co-morbidity index in the VAP(+) patients was 3.9±1.6, compared to 2.7±3.0 in the VAP(-) patients. A statistically significant correlation was observed between Charlson co-morbidity index elevation and VAP development (p<0.001). In terms of underlying diseases, chronic obstructive pulmonary disease (COPD), diabetes mellitus (DM), cerebrovascular disease and organ failure levels differed significantly between the two groups (p<0.001, p=0.003, p=0.007, p=0.007). Previous hospitalization, a history of antibiotherapy, reintubation and enteral nutrition were assessed as significant risk factors for VAP (p<0.001).

DISCUSSION

MV in ICUs is a life-saving medical procedure in the event of respiratory failure. More than 300,000 patients in the USA receive MV every year. 1 According to an American Thoracic Society (ATS) report, the prevalence of VAP ranges between 9% and 27%. 2 A study from France reported a level of 14.5-27.6%. 6 In our study, VAP developed in 15.4% of patients administered MV, which is compatible with the literature. Centers for Disease Control and Prevention data report an incidence of VAP of 0.0-5.8/1000 ventilator days in the ICUs of various hospitals. 5 However, the incidence of VAP reported in studies in the literature is as high as 58/1000 ventilator days. 7,8 The incidence of VAP in our study was 15.7/1000 ventilator days. Although our findings are higher than the CDC data, they are better than those of other studies.
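The Statistical Analysis paragraph above describes a normality-gated choice of test. A sketch of that decision flow with SciPy follows; `vap_pos` and `vap_neg` stand in for any continuous variable measured in the two groups and are illustrative arrays, not the study data.

```python
# Sketch of the analysis pipeline: Kolmogorov-Smirnov normality screen,
# then Student's t-test or Mann-Whitney U; chi-square for count data.
from scipy import stats
import numpy as np

def compare_continuous(vap_pos, vap_neg, alpha=0.05):
    # KS test on standardized data as a simple normality screen
    normal = all(
        stats.kstest(stats.zscore(g), "norm").pvalue > alpha
        for g in (vap_pos, vap_neg)
    )
    if normal:
        return stats.ttest_ind(vap_pos, vap_neg)   # parametric
    return stats.mannwhitneyu(vap_pos, vap_neg)    # non-parametric

# Categorical risk factor: chi-square on a 2x2 table (hypothetical counts)
table = np.array([[120, 94], [350, 588]])
chi2, p, dof, _ = stats.chi2_contingency(table)
```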
The presence of various negative factors in terms of infection, such as the fact that our hospital data were obtained from ICUs in four different branches, the high number of patients per nurse in the ICU, the lack of isolation rooms, the low square meter area per bed and the distance between beds being less than two meters, may be reasons for the incidence of VAP differing from the CDC figures. VAP prolongs length of hospitalization and duration of MV. 2 Mean duration of MV and length of stay in the ICU in this study were higher in patients with VAP than in VAP(-) patients (p<0.001). Every day that patients spend in the ICU and on MV increases the risk of infection. Factors facilitating infection include underlying diseases, comorbid factors, malnutrition, nasogastric tube use, gastroesophageal reflux, sedation, invasive procedures to the respiratory system and aspiration of contaminated secretions accumulating on the endotracheal cuff. 9,10 MV indications in patients hospitalized in the ICU must therefore be assessed daily, and patients must be removed from MV and the ICU as quickly as possible. APACHE II scoring is a system used to measure the severity of diseases in ICUs. 3 Though many studies have identified severity of underlying diseases as a potential risk factor, contradictory results have also been reported. While some studies have reported that APACHE II score is associated with mortality but not with infection, other studies have suggested that a high APACHE II score is a risk factor for VAP. 3,11 Apostolopoulou reported that a score of 18 or higher is an independent risk factor for VAP, and Meric et al. reported that APACHE II score is not a risk factor for hospital-acquired infection but that it is a risk factor for mortality. 12,13 In our study, a high APACHE II score emerged as a risk factor for VAP (p<0.001). Charlson co-morbidity index, the total score of co-morbid diseases, was 3.9±1.6 for the VAP(+) patients and 2.7±3.0 for the VAP(-) patients. There was also a statistically significant association between a high Charlson co-morbidity index and VAP (p<0.001). This indicates that underlying diseases and the presence of severe disease increase the risk of VAP. Prolongation of stay in intensive care patients and a history of recurrent hospitalization are reported to affect development of infection. 14,15 Meric et al. reported that hospitalization longer than seven days increases the risk of infection. 13 A case-control study by Agarwal reported a mean hospitalization time of 13 days for VAP(+) patients and 8 days for VAP(-) subjects. 14 In our study, prolonged stay in the ICU and a history of recurrent hospitalization increased the risk of VAP (p<0.001, OR=2.25). Patients monitored in the ICU receive antibiotics for postoperative surveillance, for prophylactic reasons based on infections, and for pre-emptive as well as therapeutic purposes. Off-label and inappropriately prolonged use of prophylactic antibiotics is not recommended since this will increase colonization by resistant pathogens and the risk of infection. 12 Some studies have shown that antibiotic use increases the risk of VAP, although other studies have reported conflicting results. 10,15 In our study, a previous history of antibiotics increased the risk of VAP 2-fold (p<0.001, OR=2.0). Recurrent intubations increase the risk of VAP by leading to the aspiration of nosocomial bacteria colonizing the oropharynx. 16
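The odds ratios quoted in this discussion (e.g. OR=2.25 for recurrent hospitalization, OR=2.0 for prior antibiotics) come from 2x2 exposure/outcome tables. A minimal sketch of the calculation with a Wald 95% confidence interval; the cell counts are hypothetical, since the paper reports only the derived ORs and p-values.

```python
# Odds ratio with Wald 95% CI from a 2x2 table (illustrative counts).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a=exposed VAP(+), b=exposed VAP(-), c=unexposed VAP(+), d=unexposed VAP(-)."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# Hypothetical counts chosen so the point estimate reproduces OR = 2.25:
print(odds_ratio_ci(90, 200, 124, 620))  # (90*620)/(200*124) = 2.25
```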
Therefore, instead of reintubating an extubated patient, noninvasive MV should be applied as far as possible. 17 Karthikeyan et al. reported that reintubation was an important risk factor for VAP. 7 In our study, reintubation increased the risk of VAP 9.36-fold (p<0.001, OR=9.36). Although enteral nutrition is recommended for intensive care patients, it has been reported as a risk factor for VAP in several studies. 18,19 This may be related to issues such as enteral nutrition technique, ineffective follow-up of gastric residual volume, frequent nasogastric procedures, an inappropriate patient head position during nutrition, and inadequate tracheal cuff pressure. In our study, enteral nutrition increased the risk of VAP 2.71-fold (p<0.001, OR=2.71). Considering VAP development in terms of primary and underlying diseases, we observed significantly more VAP development in patients with diseases such as COPD, DM and organ failure (p=0.007 for organ failure, p=0.003 for DM). Previous studies have reported that underlying diseases, and particularly COPD and ARDS, lead to gram-negative bacterial colonization, affect the mucociliary system, impair local and systemic defense mechanisms and affect the phagocytic functions of alveolar macrophages as well as neutrophils, thus leading to an increase in VAP development. 20 Some studies have reported a correlation between COPD and VAP, although other studies do not describe COPD as a risk factor. 14,18,21 In our study, COPD increased the risk of VAP 4.19-fold (p<0.001, OR=4.19). Organ failure may predispose to VAP in association with deterioration of the underlying condition and facilitation of bacterial translocation. 22 In addition to studies reporting no relation between organ failure and development of VAP, Agarwal et al. reported a relation between chronic kidney disease and VAP. 14,18 In our study, three conditions (heart failure, renal failure, and hepatic failure) were identified as risk factors for VAP, increasing VAP development 1.73-fold (p=0.007, OR=1.73). DM was also a risk factor for VAP and increased VAP development 1.86-fold (p=0.003, OR=1.86). Arozullah et al. 23 identified DM as a risk factor, but the Agarwal et al. study did not. 14 VAP has a direct effect on mortality among hospital-associated infections. Bacteremia (particularly Pseudomonas aeruginosa or Acinetobacter spp.), medical diseases, severity of primary disease, inadequate empirical treatment, prolonged hospitalization, and advanced age are reported to increase the mortality rate. 2,6,10,22 In their meta-analysis, Melsen et al. reported a level of mortality attributable to VAP in surgical patients of 69%, and a level of mortality attributable to VAP of 36% in patients with intermediate severity of illness scores. 24 In our study, 65.2% of patients with VAP died, and the level of mortality attributable to VAP was 23.6%.

CONCLUSION

VAPs are nosocomial infections that cause significant morbidity and mortality in ICUs and that prolong hospitalization. These infections are more common in patients with APACHE II score and Charlson co-morbidity index elevation, with extended hospitalization and MV use and with underlying predisposing diseases. Reintubation increases the risk of VAP 9.3-fold. Guidelines must be adopted in the prevention of these infections, and every country, hospital and ICU must adopt infection control procedures in the light of its own local problems. Training must be provided for ICU personnel on the subject of VAP-related risk factors.
Patients' MV requirements must be assessed daily. The probability of reintubation must be reduced to a minimum, and prolonged MV must be prevented. Patients at high risk of VAP must be diagnosed early, patient care procedures that reduce risk factors must be implemented, and preventable risk factors must be kept to a minimum.
{ "year": 2016, "sha1": "6dc1d12131ceade910a09cef7bda2abf12c33296", "oa_license": "CCBY", "oa_url": "https://europepmc.org/articles/pmc5017083?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "6dc1d12131ceade910a09cef7bda2abf12c33296", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
258168717
pes2o/s2orc
v3-fos-license
Probiotic Bacteria from Human Milk Can Alleviate Oral Bovine Casein Sensitization in Juvenile Wistar Rats

This study aims to determine whether probiotic bacteria from human milk can ameliorate oral cow's milk sensitization. The probiotic potential of the SL42 strain isolated from the milk of a healthy young mother was first determined. Rats were then randomly gavaged with cow's milk casein without an adjuvant or assigned to the control group. Each group was further subdivided into three groups, with each receiving only Limosilactobacillus reuteri DSM 17938, SL42, or a phosphate-buffered saline solution. Body weight, temperature, eosinophils, serum milk casein-specific IgE (CAS-IgE), histamine, and serum S100A8/A9 and inflammatory cytokine concentrations were measured. The animals were sacrificed after 59 days; histological sections were prepared, and the spleen and thymus weights, as well as the diversity of the gut microbiota, were measured. On days 1 and 59, SL42 attenuated systemic allergic responses to casein, reducing histamine levels (25.7%), CAS-specific IgE levels (53.6%), eosinophil numbers (17%), S100A8/A9 (18.7%), and cytokine concentrations (25.4–48.5%). Analyses of histological sections of the jejunum confirmed the protective effect of probiotic bacteria in the CAS-challenged groups. Lactic acid bacteria and Clostridia species were also increased in all probiotic-treated groups. These findings suggest that probiotics derived from human milk could be used to alleviate cow's milk casein allergy.

Introduction

Food allergies are becoming more common around the world, particularly in developed countries, and are no longer a rare occurrence in Africa, where food allergies account for 5 to nearly 50% of allergic reactions [1]. In Algeria, food allergies affect 8.5% of schoolchildren, according to Yakhlef and Souiki [2]. Cow's milk protein allergy (CMPA) is a type of food allergy that is most common in infants and children under the age of three. The most common symptoms are dermatitis, urticaria or oral allergy syndrome, and gastrointestinal (GI) disorders such as changes in stool frequency and consistency, mucous or blood spots in stools, infantile colic, nausea, vomiting, and gastroesophageal reflux [3]. CMPA affects approximately 8% of infants and young children. After egg, peanut, and fish allergies, it is the fourth-most common food allergy in babies [4]. A variety of factors may play a role in the complex development of food allergies. A disruption in the development of oral tolerance has been observed in infants with food allergies, characterized by defects in the induction of regulatory T cells and in the production of allergen-specific neutralizing IgA antibodies [4]. Furthermore, even though the specific properties of the allergens themselves contribute to the degree of allergenicity, defects in the epithelial barrier, both in the skin and in the intestine, as well as changes in the pH of the stomach, are thought to promote allergy. Furthermore, we are only now beginning to understand how the microbiome can help with allergy problems [5,6]. Breastfeeding is undeniably beneficial, according to scientific evidence. Breast milk is the only milk that is permanently adapted to the needs of a growing infant. One of the advantages of breastfeeding is that it helps to prevent allergies [7]. However, while a large number of studies support breastfeeding's role in lowering the risk of allergy, other studies examining the effect of prolonged breastfeeding do not.
Animal models and in vitro evidence indicate that the gut microbiome may protect against food allergy, and that probiotics may be a useful tool [8]. However, there is no consistent evidence identifying the specific bacterial species, dosage, and optimal duration for achieving the desired immunomodulation [9]. Early-life probiotic supplementation via breast milk may be a useful approach to the primary prevention of a variety of human allergic diseases. Breast milk from healthy women contains approximately 10³ to 10⁴ CFU/mL commensal microbes and serves as a source of probiotics for infants [10]. Human milk microbiota diversity is influenced by maternal and environmental factors such as breastfeeding practice, behavior, other milk components, genetics, geographical region, race, and population [11]. The evidence as to whether probiotics can induce tolerance in food allergy is currently not clear. To the best of our knowledge, published studies on the anti-food-allergy potential of probiotic bacteria have given variable results [12–14], and none used probiotic strains from human milk. Lactobacillus strains, including the L. casei strain Shirota, the L. plantarum strain L-137, and the L. acidophilus strain L-92, have been reported as probiotics that modify antigen-specific serum immunoglobulin (IgE) levels in animal models [12,15,16]. We hypothesized that the prevention of allergy using probiotics may be more effective at juvenile age and that bacterial strains from human milk would be the most appropriate probiotics for that purpose. Therefore, in the present study, we assess the preventive effects of dietary intervention with probiotics from human milk to clarify their tolerogenic effect in managing food allergy symptoms at juvenile age. Limosilactobacillus reuteri Protectis (DSM 17938) or the isolated strain SL42, whose genetic and probiotic properties were characterized, were given to juvenile female Wistar rats sensitized intragastrically (i.g.) with casein from cow's milk as an animal model.

Bacteria Used in This Study

Limosilactobacillus reuteri Protectis DSM 17938 (formerly known as Lactobacillus reuteri) was the reference strain from breast milk supplied by the PEDIACT laboratory BioGaia (Asnières-sur-Seine, France). SL42 is a strain isolated from the breast milk of a healthy young mother (Algeria). Each strain was grown in de Man, Rogosa, and Sharpe broth (Biomérieux, Craponne, France) supplemented with cysteine-HCl (MRS-cys) under anaerobic conditions at 37 °C for 24 h.

Using 16S rRNA Gene Sequencing to Identify the SL42 Isolate

Identification of SL42 was made primarily by partial sequencing of 16S rRNA genes. The extraction of bacterial genomic DNA was performed using the GF-1 Nucleic Acid Extraction Kit (Vivantis Technologies Sdn Bhd, Selangor DE, Malaysia) according to the manufacturer's instructions. The complete 16S rRNA gene region was amplified via the primers 1492R (5′-GGTTACCTTGTTACGACTT-3′) and 27F (5′-AGAGTTTGATCCTGGCTCAG-3′) (Vivantis Technologies Sdn Bhd, Selangor DE, Malaysia). The PCR products were verified using 1% agarose gel electrophoresis and subjected to sequencing (Seri Kembangan, Selangor, Malaysia; https://apicalscientific.com/, accessed on 22 December 2022). The identification of the isolate was carried out by comparison with reference sequences using the NCBI BLAST algorithm (http://www.ncbi.nlm.nih.gov/blast, accessed on 22 January 2023). The neighbor-joining method was used to construct a phylogenetic tree (MEGA 6.0 program).
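The identification workflow just described (16S rRNA sequencing followed by an NCBI BLAST comparison) can be scripted. A hedged sketch using Biopython follows; the FASTA file name is hypothetical, and a remote qblast call is only one way to reproduce the web-BLAST step the authors used.

```python
# Sketch: identify a 16S rRNA sequence by remote BLAST against NCBI nt.
# Assumes Biopython is installed; "sl42_16s.fasta" is a placeholder file name.
from Bio import SeqIO
from Bio.Blast import NCBIWWW, NCBIXML

record = SeqIO.read("sl42_16s.fasta", "fasta")
handle = NCBIWWW.qblast("blastn", "nt", str(record.seq))  # remote BLAST
result = NCBIXML.read(handle)

# Report percent identity of the top hits, mirroring the 98-99% similarity
# reported for SL42 against Lacticaseibacillus rhamnosus references.
for alignment in result.alignments[:5]:
    hsp = alignment.hsps[0]
    identity = 100.0 * hsp.identities / hsp.align_length
    print(f"{alignment.title[:60]}  {identity:.1f}% identity")
```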
Characterization of the Probiotic Potential

To attribute the qualification of "probiotic" to the SL42 strain, the following tests were performed for both bacterial strains used in the in vivo part of this study, with the SL42 isolate compared against the probiotic strain DSM 17938 taken as reference. Bacterial loads were adjusted to 1 to 5 × 10⁷ CFU/mL in all experiments. All microbiological components were purchased from Merck and chemicals from Sigma (France), unless otherwise specified.

pH and Bile Tolerance Assays

The method previously described by Ziar and Riazi [17] was slightly modified. Bacteria were cultivated (individually) in pH 2 MRS broths and incubated at 37 °C for 2 h. MRS-cys agar plates were used to determine viable counts every 30 min of exposure. Bacterial bile salt tolerance was determined via the viable count method as previously described [18]. Following incubation for 24 h at 37 °C, the culture was centrifuged at 5000× g at 40 °C for 10 min. Then, 0.02 mL of bacterial suspension was inoculated into freshly sterile phosphate buffer of pH 7.5 containing bile (0.3% v/v; Sigma-Aldrich, France). Following incubation at 37 °C for 24 h, viable counts were observed on MRS-cys agar plates.

Detection of Antimicrobial Activity

Both probiotic strains were tested for antimicrobial activity. Seven pathogenic indicator bacteria and one fungus, Candida albicans, were used. Probiotic bacteria were cultured in an MRS-cys medium for 24 h at 37 °C, then centrifuged at 8000× g for 15 min at 4 °C (Thermo Scientific, Waltham, MA, USA), and the cell-free supernatant was collected before the assay. The pH was adjusted to 6.5 with 6 mol/L NaOH, and the supernatant was then filtered through a membrane with a pore size of 0.22 µm. The modified Oxford cup double plate method was used to determine the antimicrobial activity [19]. An Oxford cup (5 mm) was placed on an agar surface, the pathogen indicator in exponential phase (100 µL, 1 × 10⁷ CFU/mL) was spread on the nutrient agar surface, and 200 µL of supernatant was added to the wells. Following incubation at 30 °C for 24 h, the diameters of the clear inhibitory zones were measured.

Hydrophobicity

Hydrophobicity of bacteria was determined using xylene extraction according to the method of Perez et al. [20], and the hydrophobicity percentage (H%) was calculated using Equation (1):

H% = ((A0 − A)/A0) × 100 (1)

where A0 and A are absorbance values measured before and after xylene extraction.

Hemolytic Activity

Bacterial cultures were examined on blood agar plates containing defibrinated sheep blood at a concentration of 5% (w/v) [21], incubated for 24 h at 48 °C. Hemolytic activity was verified by β-hemolysis (bright zones around colonies), α-hemolysis (green zones around colonies), and γ-hemolysis (no zone around colonies) reactions.

Cholesterol Uptake

The method of Ziar et al. [22] was followed. In brief, MRS-THIO medium containing 2% (w/v) sodium thioglycolate was supplemented with 85 µg/mL of water-soluble cholesterol, previously sterilized by filtration on a Millipore membrane (C1145, cholesterol-PEG 600; Sigma). Bile at a final concentration of 0.3% and the bacterial inoculum (1%, w/v) were then added. The medium was incubated at 37 °C for 24 h in anaerobic conditions (anaerobic jar with CO₂-generating system, Anaerocult, Merck Millipore, Fontenay-sous-Bois, France). To estimate the amount of assimilated cholesterol, the cultures were centrifuged (2000× g/10 min at 4 °C) after 24 h of incubation, and the pellets were washed with sterile distilled water and dried at 80 °C until the weight remained stable.
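Equation (1) and the tolerance assays reduce to simple arithmetic on absorbance readings and viable counts. A small sketch follows; the absorbance pair is hypothetical (chosen to give the 51% hydrophobicity reported for SL42), and the survival definition shown (ratio of log10 viable counts) is one common convention, since the paper does not spell out which form it used.

```python
import math

def hydrophobicity_percent(a0: float, a: float) -> float:
    """Equation (1): H% = ((A0 - A) / A0) * 100."""
    return (a0 - a) / a0 * 100.0

def survival_percent(cfu_final: float, cfu_initial: float) -> float:
    """Assumed convention: ratio of log10 viable counts (CFU/mL)."""
    return 100.0 * math.log10(cfu_final) / math.log10(cfu_initial)

print(hydrophobicity_percent(0.800, 0.392))  # 51.0 -> matches the reported 51%
print(survival_percent(2.0e6, 3.0e7))        # illustrative acid-tolerance value
```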
Bacteria were tested for their cholesterol uptake capacities, expressed as the specific ability to reduce the available cholesterol in the culture medium after 24 h of incubation, calculated according to the formula proposed by Pereira and Gibson [23].

Antibiotic Susceptibility

An antibiotic susceptibility test was performed with 11 different antibiotics, including those used as cell wall or protein synthesis inhibitors, and the broad-spectrum antibiotics known to be effective against Gram-positive and Gram-negative bacteria [24]: amoxicillin, penicillin, gentamicin, streptomycin, chloramphenicol, norfloxacin, ciprofloxacin, sulfonamide, clindamycin, novobiocin, and vancomycin. Fresh overnight cultures of probiotic bacteria were inoculated separately at 0.5 McFarland on Mueller-Hinton agar. Subsequently, antibiotic discs were added onto the inoculated Mueller-Hinton agar plates. The diameters of the inhibition zones around the antibiotic discs were read following a 24-48 h incubation period at 37 °C. The sensitivity categories were determined according to NCCLS standards [25].

Animal Housing

Female Wistar rats were shipped from the Pasteur Institute (Algiers, Algeria) at 6 weeks of age (80-100 g). Housing rooms were kept at constant temperature (24 ± 2 °C) with a 12:12 light:dark cycle. All procedures were approved by the Animal Ethics Committee of the University of Mostaganem. Parental rats were housed in pairs in polycarbonate cages with ad libitum access to distilled and sterilized water and were fed a diet without allergens (SARL Aliment souris et rat: La Ration, Bouzaréah, Algiers, Algeria) for three successive generations. The third filial generation was used in this experiment at juvenile age, and the rats were acclimatized under the same conditions (2 rats/cage) cited above for 15 days before the beginning of the casein challenge. Before (day −3) and during the casein challenge, each rat from the probiotic-treated groups received 10⁸ CFU of DSM 17938 or SL42 in 1 mL physiological water by gavage every other day. The casein challenge was started by giving i.g. 60 mg casein without adjuvant during the first 42 days, then combined with 20 mg gluten during the rest of the challenge period (days 43-57). After that, sensitization using cow's milk casein was interrupted for one day prior to sacrifice.

Assessment of Macroscopic Casein Allergy Symptoms

Macroscopic casein allergy symptoms were assessed by monitoring the rats every 30 min during the 3 h following cow's milk casein sensitization. Clinical signs of anaphylaxis were scored depending on the gravity of the developed symptom: 0 for no signs; 1 for scratching and rubbing of the nose and head; 2 for bags around the eyes and mouth; 3 for diarrhea; 4 for reduced activity with a satisfactory respiratory rate, cyanosis around the mouth and tail, and no activity; and 5 for death. Diarrhea score was classified using the Bristol scale: 1 for separate hard lumps, 2 for lumpy feces, 3 for sausage-shaped feces with cracks on the surface, 4 for smooth and soft form, 5 for soft blobs with clear-cut edges, 6 for mushy blobs with ragged edges, and 7 for entirely liquid feces.
Figure 1. Experimental design of the present study. Bovine casein (Protifar©) was taken as the allergen and was administered intragastrically to rats without the use of an adjuvant. A total of 48 female rats (3 weeks old) were randomly divided into 6 groups (n = 8 rats per group): control group receiving only PBS (C group), nonsensitized group treated with SL42 strain (SL42 group), nonsensitized group treated with DSM 17938 strain (DSM 17938 group), casein-sensitized group (CAS group), casein-sensitized group treated with SL42 (CAS + SL42 group), and casein-sensitized group treated with DSM 17938 strain (CAS + DSM 17938 group). In all probiotic-treated groups, SL42 or DSM 17938 was given every other day from day −3 to day 58.

The weight of the rats was recorded during the experiment to assess the effect of CAS-induced sensitization on body weight. Rectal temperature was measured after cow's milk casein sensitization and every 30 min thereafter (RET-2 probe, Kent Scientific, Torrington, CT, USA). The stress status of the rats was estimated by measuring serum uric acid concentration determined by fluorometry (MAK077-1KT, Sigma-Aldrich, Saint-Quentin-Fallavier, France).

Determination of Specific Casein IgE, Histamine, S100A8/A9, Inflammation-Associated Cytokines, and Eosinophil Number

The levels of S100A8/A9, TLR4, NF-κB, TNF-α, IL-6, and IL-1β in the blood were determined using ELISA assay kits according to the manufacturer's instructions (R & D Systems, Minneapolis, MN, USA). Serum was collected 30 min after casein administration and tested for histamine using an ELISA kit according to the manufacturer's instructions (Beckman Coulter, Brea, CA, USA). Serum levels of casein-specific IgE (CAS-IgE) were also assessed using an ELISA kit (BD Biosciences, San Jose, CA, USA). The number of eosinophils in the blood was determined using the hemogram technique (Mindray BC-2800 automatic blood count device, Bath, UK).

Cultivation of Bacteria from Feces

Fecal samples were collected before and after the challenge and plated on Hektoen/TSA for nonspecific bacteria and on MRS/MRS-cys media for lactic acid bacteria enumeration [26]. After 48-72 h of incubation, the bacterial loads were expressed as CFU/g of fecal material. Clostridia species were counted after initial enrichment followed by plating on TSA II (4 days) with 5% sheep blood (Fisher Scientific, CA) [27].
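Plate counts are converted to the reported CFU/g figures with the usual dilution arithmetic. A sketch follows; all inputs are hypothetical.

```python
# Sketch of the plate-count arithmetic behind "CFU/g of fecal material".
def cfu_per_gram(colonies: int, dilution_factor: float,
                 volume_plated_ml: float, sample_mass_g: float) -> float:
    """CFU/g = colonies * dilution factor / volume plated (mL) / sample mass (g)."""
    return colonies * dilution_factor / volume_plated_ml / sample_mass_g

# e.g. 58 colonies on the 10^-5 dilution plate, 0.1 mL plated, 1 g of feces:
print(f"{cfu_per_gram(58, 1e5, 0.1, 1.0):.1e} CFU/g")  # 5.8e+07 CFU/g
```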
Determination of Spleen/Body Weight Index and Thymus/Body Weight Index

The rats were weighed and sacrificed. The rats' spleens and thymuses were removed and weighed immediately. The spleen index (SI) and thymus index (TI) were calculated using Equation (2) [28]:

SI or TI (mg/g) = (weight of spleen or thymus)/body weight (2)

Histological Analysis

Intestinal tissues (jejunum) were removed on the day of dissection, fixed in 10% phosphate-buffered formalin, and embedded in paraffin. The jejunum sections were stained with hematoxylin and eosin (HE) for the observation of inflammatory infiltrates and eosinophils and the identification of goblet cells.

Bacterial Translocation Test

The spleen and the mesenteric lymph nodes (MLN) of rats were macerated and suspended in sterile physiological saline. Serial dilutions were plated and incubated at 37 °C on MacConkey agar (24 h), blood agar, and MRS-cys agar (48 h) (Merck, France).

Statistical Analysis

Data are presented as the mean ± standard error of the mean or standard deviation; statistical analyses were implemented in SPSS Statistics 26 (Chicago, IL, USA). Statistical significance was determined using Student's t-test, and one-way ANOVA was used for parametric tests. Differences at p < 0.05 were considered statistically significant.

Ethics Approval

The animal research presented in this manuscript was carried out ethically in accordance with the Helsinki Declaration and the ARRIVE guidelines for in vivo experiments. Furthermore, the Ethics Committee of the Faculty of Life Science and Nature affiliated with Abdelhamid Ibn Badis University of Mostaganem approved this research (Approval No. 2019-013).

SL42 Is a Lacticaseibacillus rhamnosus as Confirmed by 16S rRNA Analysis

After catalase (negative) and Gram (positive) tests, and morphological (white and smooth colonies with approximately 2 mm diameter were picked from MRS-cys agar) and biochemical analyses, SL42 was subcultured at 37 °C to obtain pure cultures for molecular identification. The 16S rRNA gene sequence of strain SL42 was sequenced and compared against known strains based on BLAST searches. SL42 was subsequently identified as Lacticaseibacillus rhamnosus, as confirmed by the results of a phylogenetic analysis (Figure 2). L. rhamnosus SL42 was deposited in NCBI GenBank under the accession number OQ300076, showing a sequence similarity of 98-99% when compared with the known Lacticaseibacillus rhamnosus species (Table S1).

L. rhamnosus SL42 Expresses a Satisfying Probiotic Potential

L. rhamnosus SL42 showed high acid tolerance and survivability at pH 2 (93%) after 2 h. L. rhamnosus SL42 was also tolerant to pancreatic and pepsin enzymes under simulated digestive conditions (data not shown). Moreover, approximately 90.5% of L. rhamnosus SL42 cells survived with 0.3% bile and assimilated 6.01 mg/g cholesterol. The isolated L. rhamnosus SL42 strain also showed a high hydrophobicity of 51% (Table 1).
Table 1. Probiotic characteristics * of SL42 strain reported in this study.

L. rhamnosus SL42 cells strongly inhibited E. coli and Pseudomonas aeruginosa, with the highest inhibitory zones being 18 and 17 mm, respectively. The inhibition of Candida albicans, Staphylococcus aureus, and Klebsiella pneumoniae was weaker, with inhibition zones between 11 and 15 mm (Table 2).

Macroscopic Symptoms Disappear after One Week of Casein Gavage

Diarrhea was only observed during the first week of CAS gavage. The diarrheic score was 7 and 4 on the Bristol scale for rats receiving exclusively CAS (50% of rats) or CAS plus probiotic bacteria (33% of rats), respectively. There were no differences in body weight and temperature between the experimental and the control groups (all p > 0.05) during the entire study (Figure S1; Table S2). Uric acid levels in rat sera were significantly (all p < 0.05) increased from the 1st day to the 58th day in all CAS-sensitized rats (49.8%: CAS + SL42; 51.9%: CAS + DSM 17938; 74.7%: CAS) and remained unchanged in the control group and in those receiving only the individual probiotic bacteria (Figure S2).

Calprotectin, Eosinophils, and Cytokines Associated with CAS-Induced Allergy Were Successfully Decreased in Plasma of Rats Gavaged with the SL42 Strain

Significantly (p < 0.05) higher levels of CAS-specific IgE and histamine were detected in the sera of all rats treated with CAS. Sensitization with casein triggered the production of specific IgE with an average of 34.25 ± 1.25 IU/L in CAS-treated rats (Figure 3a). CAS-probiotic-treated rats exhibited an almost 50% reduction (p < 0.05), with values of 15.89 ± 0.89 IU/L on average in the SL42-treated group and 17.98 ± 0.53 IU/L on average in the DSM 17938-treated group. Similar trends were obtained for histamine levels, with 25.6 ± 1.6 nmol/L (p < 0.05) in the CAS-treated group compared with 19 ± 1 and 20.5 ± 0.8 nmol/L for the SL42- and DSM 17938-treated groups, respectively (Figure 3b). The control groups and those receiving only the individual probiotic bacteria produced neither CAS-specific IgE nor histamine. The plasma levels of S100A8/A9 were increased in the CAS group compared with the control group and the probiotic-treated groups. The level of S100A8/A9 was statistically different on days 1 and 59 (all p < 0.05), as seen in Figure 4a.
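The percentage reductions quoted in the abstract follow directly from these group means. A small sketch that reproduces two of them:

```python
# Percent reduction relative to the casein-only (CAS) group means given above.
def percent_reduction(cas_mean: float, treated_mean: float) -> float:
    return 100.0 * (cas_mean - treated_mean) / cas_mean

print(round(percent_reduction(34.25, 15.89), 1))  # 53.6 -> CAS-IgE drop (53.6%)
print(round(percent_reduction(25.6, 19.0), 1))    # 25.8 -> histamine drop (~25.7%)
```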
SL42 decreased calprotectin (S100A8/A9) by 18.7% in CAS + SL42-treated rats. The levels of TLR4 were higher in the CAS group than in the control group or the probiotic-treated groups, and significant differences were found on days 1 and 59 (all p < 0.05), as seen in Figure 4b. SL42 decreased TLR4 by 25.45% in CAS + SL42-treated rats, and the inflammation-associated cytokines measured were reduced by 14.31% to 48.58% in CAS + SL42-treated rats. Moreover, the number of eosinophils was increased, with an average of 108.5 ± 10.66/mm³ in CAS-sensitized rats without probiotic treatment (Figure 5). Administration of the SL42 strain decreased the number to 90 ± 1.41/mm³ (−16.67%), whereas treatment with the probiotic strain DSM 17938 decreased eosinophil numbers to 93 ± 7.03/mm³ (−13.88%). Groups receiving only the individual probiotic bacteria exhibited ~50% lower values, ranging from 51.25 to 52/mm³ (Figure 5).

Figure 5. Eosinophil numbers in blood as determined using the hemogram technique (n = 8 rats/group). Wistar rats were sensitized intragastrically by administration of casein without adjuvant (before 1st day, after 58th day). Data are presented as the mean ± SD. Statistical analysis was conducted using one-way ANOVA with Tukey's multiple comparisons test. * p < 0.05. Control group receiving only PBS (control group); nonsensitized group treated with SL42 strain (SL42 group); nonsensitized group treated with DSM 17938 strain (DSM 17938 group); casein-sensitized group (casein group); casein-sensitized group treated with SL42 (casein + SL42 group); and casein-sensitized group treated with DSM 17938 strain (casein + DSM 17938 group).

Probiotic Administration Modifies LAB and Clostridia Populations in Rats

Figure 6 shows the populations of lactic acid bacteria (LAB) and nonspecific bacteria obtained by plating techniques. Before the CAS challenge (data not shown), the bacterial profile was comparable (p > 0.05) between nonsensitized and CAS-sensitized rats. Administration of probiotic bacteria markedly increased the density of LAB (Figure 6a) and Clostridia species, but not of nonspecific bacteria (Figure 6b). After the 58th day of the CAS challenge, the rats exhibited a diminished density of fecal LAB and an increased density of nonspecific bacteria (Figure 6).

Figure 6. Bacteria numbers as determined by plating technique (n = 8 rats/group). Lactic acid bacteria (a), Clostridia species (b). Wistar rats were sensitized intragastrically by administration of casein without adjuvant. The rats were fed every other day with L. rhamnosus SL42 or L. reuteri DSM 17938 at 1 × 10⁸ CFU/mL (feces were collected on the 59th day). The dose volume was 1 mL. The control group was fed with sterilized PBS solution. The data (MRS, MRS-cys, TSA II, Hektoen) are presented as the mean ± SD. Statistical analysis was conducted using one-way ANOVA with Tukey's multiple comparisons test. * p < 0.05. Group abbreviations are as in Figure 5.

Figure 7 shows the effects of the two probiotic strains SL42 and DSM 17938 on the thymus and spleen indices of the rats. Compared with the control group, the thymus and spleen indices were not found to be significantly different in the SL42 and DSM 17938 probiotic-treated groups (all p > 0.05).

Figure 7. Effects of probiotic bacteria on the thymus and spleen indices of rats. Wistar rats were sensitized intragastrically by administration of casein without adjuvant. The rats were fed every other day with L. rhamnosus SL42 or L. reuteri DSM 17938 at 1 × 10⁸ CFU/mL. The dose volume was 1 mL. The control group was fed with sterilized PBS solution. Thymus and spleen samples from each group were collected on the 59th day (day of sacrifice). The thymus and spleen indices were measured as the ratio of the thymus or spleen weight to rat body weight. Values are means ± SD (n = 8 rats/group). No significant differences were observed at p > 0.05. Group abbreviations are as in Figure 5.
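Equation (2) above defines the spleen and thymus indices as organ weight per gram of body weight. A minimal sketch; the weights below are hypothetical, not measured values from the study.

```python
# Equation (2): SI or TI (mg/g) = organ weight (mg) / body weight (g).
def organ_index(organ_weight_mg: float, body_weight_g: float) -> float:
    return organ_weight_mg / body_weight_g

print(organ_index(450.0, 180.0))  # spleen index, mg/g (illustrative)
print(organ_index(320.0, 180.0))  # thymus index, mg/g (illustrative)
```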
Inflammation of Jejunal Tissue and Eosinophil Infiltration Were Significantly Reduced by Probiotic Treatment

Hematoxylin and eosin staining showed that the jejunal mucosa was inflamed in the CAS sensitization group (Figure 8). The histological inflammation score in that group was 2 (mild to moderate), and eosinophil infiltration was also significantly increased (p < 0.05) compared with the control (Figure 8). In contrast, the intestinal inflammation scores and degrees of eosinophil infiltration were significantly reduced by probiotic treatment in comparison with the CAS sensitization group (casein) (Figure 8). Moreover, villus length in the casein group was significantly reduced (p < 0.05), although the probiotic treatment groups showed almost normal features similar to those in the controls (Figure 8).

Figure 8 (caption). The rats were fed every other day (day −3 to day 58) with L. rhamnosus SL42 or L. reuteri DSM 17938 at 1 × 10⁸ CFU/mL. The dose volume was 1 mL. The black and red arrows indicate eosinophil infiltration and goblet cells, respectively. Group abbreviations are as in Figure 5.

Probiotic Bacteria SL42 and DSM 17938 Prevent Bacterial Translocation to Mesenteric Lymph Nodes in Wistar Rats Sensitized with Casein

The mesenteric lymph nodes were sterile in the control and probiotic groups, while casein sensitization caused bacterial translocation (p < 0.05). Probiotic gavage completely eliminated bacterial translocation to the MLN in the rat groups subjected to the casein challenge (Table 4).

Discussion

The most common food allergy in children is cow's milk allergy. There is currently no effective treatment available to prevent or cure food allergies [4]. However, according to numerous studies, breastfeeding protects the infant from developing allergic diseases, helps young infants' immune systems mature, and protects them from infections [4,29].
There are many immunological components in human milk, such as probiotic bacteria, nondigestible oligosaccharides, secretory IgA, mucins, cytokines, long-chain PUFA, and hormones. Probiotic bacteria, in particular, may support immunocompetence, which is required for an adequate capacity to induce oral tolerance, either directly or indirectly through stimulation of beneficial intestinal microbiota [30,31]. The benefits of breastfeeding to newborn health are meaningful, and the microbiome in milk may play a crucial role. In this context, we aimed to investigate the beneficial effects of probiotic bacteria found in human milk that may be associated with improved infant health and could be incorporated into human milk formula. The goal of this study was to compare the effects of supplementation with probiotic strains from human milk on the outcome of the allergic response in rats during oral sensitization with bovine casein. First, the strain SL42, isolated from the breast milk of a young and healthy mother, was assessed for probiotic properties and compared with the probiotic strain Limosilactobacillus reuteri DSM 17938. The 16S rRNA analysis showed that the isolate belongs to the Lacticaseibacillus rhamnosus species. Kang et al. [32] also obtained two Lacticaseibacillus rhamnosus strains from the breast milk of healthy Chinese women, and this species is known to be one of the most prevalent in human milk. Streptococcaceae, Pseudomonadaceae, Staphylococcaceae, Lactobacillaceae, and Oxalobacteraceae are the common bacterial families [33]. In our study, the isolated strain Lacticaseibacillus rhamnosus SL42 performed better than DSM 17938, including by having better tolerance to acidity and bile and greater antimicrobial ability. For antibiotic susceptibility, both assayed strains showed similar trends, especially for vancomycin resistance, as was previously observed for Lacticaseibacillus rhamnosus and Limosilactobacillus reuteri species in several studies [34,35]. In general, our isolated SL42 strain passed all the tests required to be considered a safe, well-tolerated, and efficacious probiotic-like strain that is able to contribute to beneficial effects on gut health. It is known that Lactobacilli are an important part of the normal human microbial flora that commonly colonize the mouth, the gastrointestinal tract, and the female genitourinary tract [36]. The scientific community agrees on the importance of strain specificity in the action of probiotic microorganisms on the health of their hosts. According to Xavier-Santos et al. [37], in addition to daily doses, researchers must consider the multiple action mechanisms that are unique to each species/strain. After being identified, the strains SL42 and DSM 17938 were each included individually in our in vivo casein-induced allergy study. L. reuteri DSM 17938 was chosen because it is already delivered as a drug to children for alleviating gastrointestinal symptoms. Numerous clinical studies have suggested that L. reuteri may be beneficial in modulating gut microbiota, thereby eliminating infections such as enteric colitis, antibiotic-associated diarrhea, Helicobacter pylori infection, irritable bowel syndrome, inflammatory bowel disease, and chronic constipation. L. reuteri reduces the duration of acute infectious diarrhea in both children and adults and relieves abdominal pain in patients with colitis or inflammatory bowel disease [10,36].
To define a proper food allergy model, we proposed an oral sensitization model without an adjuvant that mimicked what happens in humans by using female Wistar rats of juvenile age. We believe that our model is appropriate because the administration of an adjuvant may influence the IgE response or cause a false-positive IgE response with a nonallergenic food [37]. Lacticaseibacillus rhamnosus SL42 or L. reuteri DSM 17938 was given to Wistar rats at 3 weeks of age, and the rats were challenged orally with casein. In this second part, macroscopic symptoms after casein gavage, calprotectin, eosinophils, and cytokine-associated CAS-induced allergy, fecal bacteria enumeration, changes in spleen and thymus weights, jejunal tissue and eosinophil infiltration, and bacterial translocation to mesenteric lymph nodes were all examined. During the sensitization period, all rats appeared healthy, with similar weights across all the groups. Furthermore, no severe symptoms, such as death, were observed. One rat in the group sensitized only with casein had a score of 1, six had a score of 2, and one had a score of 3. There were no abnormalities in either the control or the probiotic-treated groups. This could indicate that the model proposed herein is mildly allergic and that, despite causing only mild intestinal inflammation, it may have a long-term impact on rat growth and development. In general, both the isolated SL42 strain and L. reuteri DSM 17938 acted similarly in vivo. Stanojevic et al. [29] revealed that early postnatal treatment with Lactobacillus rhamnosus LB64 appears to be effective in attenuating TNBS autoimmune encephalomyelitis. Similarly, early colonization with L. rhamnosus GG increased the richness and diversity of the colonic microbiota and promoted epithelial cell proliferation, differentiation, and mucosal IgA production in adults [38]. Torii et al. [12] found that L. acidophilus L-92 administration inhibited total IgE and OVA-specific IgE production in both in vivo and in vitro studies. Based on their findings, the authors hypothesize that LAB suppresses IgE production via a mechanism other than a shift to Th1-dominant immunity. In our study, oral administration of probiotic strains from human milk in CAS-sensitized rats could reduce symptom scores, CAS-specific IgE, calprotectin, allergen-specific cytokines, and histamine release levels. Although no specific mechanism could be determined based on these data, our principal aim was to assess the direct role of these strains, and our results seem to be in line with the literature confirming the possibility of alleviating allergy markers through probiotic administration. Neau et al. [39] described the protective effect of the Lactobacillus salivarius LA307 strain on sensitization, with a decrease in allergen-specific IgE and allergy. In addition to those findings, Esber et al. [40] demonstrated in mice that giving Lactobacillus rhamnosus LA305, L. salivarius LA307, or Bifidobacterium longum subsp. infantis LA308 for 3 weeks after sensitization and challenge altered the composition of the gut microbiota. Cytokine production was significantly reduced by all probiotic strains. According to the authors, the three probiotic strains tested alter immune responses by inducing tolerogenic and anti-inflammatory responses. S100A8/A9 (calprotectin) is claimed to be a sensitive biomarker for inflammatory diseases such as rheumatoid arthritis, psoriasis, and vasculitis [41]. According to Zhu et al.
[42], calprotectin, along with other inflammatory factors, may promote the inflammation seen in mild food allergies. S100A8/A9 is involved in innate immune responses in Baker's asthma pathogenesis and is regulated by TLR4 polymorphisms [43]. We also found that CAS sensitization alone changed the composition of the gut microbiota in comparison with the controls, in terms of the relative abundance of LAB, nonspecific bacteria, and Clostridia species. Our probiotic bacteria intervention was able to restore beneficial microflora in all probiotic-treated rats. Similarly, Tulyeu et al. [31] recently reported that allergen immunization in a food allergy model induced profound changes in the composition of the gut microbiome. The impact on gut microbiota is a proof of concept in this study, even though rat microbiota cannot be compared to human gut microbiota. However, we believe that several changes in the microbiota caused by the SL42 strain may contribute to or enhance its protective effect. Many anti-inflammatory properties have been reported for L. reuteri DSM 17938 in the literature. It generates reuterin, a powerful antimicrobial compound capable of inhibiting the growth of Gram-positive and Gram-negative bacteria, fungi, and protozoa [36]. Furthermore, L. reuteri forms a probiotic-rich biofilm, inhibits the production of proinflammatory cytokines, and prevents intestinal overgrowth by other commensals, thereby maintaining a balanced gut environment [36]. The "leaky gut syndrome" and bacterial translocation are considered by some authors to be triggering factors for the onset of the disease, as they promote chronic systemic inflammation. The most reported health benefits were from oral probiotic administration and fecal microbial transplantation [44]. Therapies that focus on modulating the gut microbiota are a good option for pediatrics, especially because infants have developing microbial communities that are associated with immune system maturation [37]. Interestingly, the two tested probiotic strains were able to abolish bacterial translocation in our allergy model, suggesting that the beneficial effects may be due to gut barrier reinforcement.

Conclusions

Human milk is an excellent source of LAB strains, which are commonly used as probiotics. Lacticaseibacillus rhamnosus strain SL42 was isolated from the breast milk of healthy Algerian women. Its probiotic potential was assessed in vitro using L. reuteri Protectis DSM 17938 as the reference strain. In summary, our findings show that supplementing juvenile rats with L. rhamnosus SL42 induces tolerogenic responses and serves several purposes, from lowering the levels of casein-associated allergy parameters to improving macroscopic symptoms and suppressing bacterial translocation to MLN. Its effects were similar to those of the probiotic strain L. reuteri DSM 17938. This research identified a potential probiotic candidate for use in the food and pharmaceutical industries. Clinical studies will be required to confirm these experimental findings.
2023-04-16T15:09:31.259Z
2023-04-01T00:00:00.000
{ "year": 2023, "sha1": "ac59b04b24f60573056ec201c4442b27ef94e51e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-2607/11/4/1030/pdf?version=1681473730", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7b4a7c7f74a54b104a26d56e86dd21d8dfa19c10", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
220964893
pes2o/s2orc
v3-fos-license
Association between HIV stigma and antiretroviral therapy adherence among adults living with HIV: baseline findings from the HPTN 071 (PopART) trial in Zambia and South Africa

Abstract

Objectives: Adherence to antiretroviral therapy (ART) leads to viral suppression for people living with HIV (PLHIV) and is critical for both individual health and reducing onward HIV transmission. HIV stigma is a risk factor that can undermine adherence. We explored the association between HIV stigma and self-reported ART adherence among PLHIV in 21 communities in the HPTN 071 (PopART) trial in Zambia and the Western Cape of South Africa.

Methods: We conducted a cross-sectional analysis of baseline data collected between 2013 and 2015, before the roll-out of trial interventions. Questionnaires were administered, and consenting participants provided a blood sample for HIV testing. Poor adherence was defined as self-report of not currently taking ART, missing pills over the previous 7 days or stopping treatment in the previous 12 months. Stigma was categorised into three domains: community, health setting and internalised stigma. Multivariable logistic regression was used for analysis.

Results: Among 2020 PLHIV self-reporting ever taking ART, 1888 (93%) were included in multivariable analysis. Poor ART adherence was reported by 15.8% (n = 320) of participants; 25.7% (n = 519) reported experiencing community stigma, 21.5% (n = 434) internalised stigma, and 5.7% (n = 152) health setting stigma. PLHIV who self-reported previous experiences of community and internalised stigma more commonly reported poor ART adherence than those who did not (aOR 1.63, 95% CI 1.21-2.19, P = 0.001 and aOR 1.31, 95% CI 0.96-1.79, P = 0.09).

Conclusions: HIV stigma was associated with poor ART adherence. Roll-out of universal treatment will see an increasingly high proportion of PLHIV initiated on ART. Addressing HIV stigma could make an important contribution to supporting lifelong ART adherence.

Introduction

For people living with HIV (PLHIV), adherence to antiretroviral therapy (ART) is crucial for viral suppression [1-3] and for reducing HIV-related morbidity and mortality [4], onward transmission [5-7] and drug resistance [8]. The UNAIDS 90-90-90 targets captured the importance of achieving high levels of HIV testing and ART coverage, with the 'third 90' target being that by 2020, 90% of those on ART would be virally suppressed [9]. In 2016, an estimated 89% of PLHIV in Zambia who reported current ART use [10] and 85% of those registered in HIV care and taking ART in South Africa [11] were virally suppressed. Understanding the factors that influence adherence to ART is crucial if high levels of viral suppression are to be sustained and increased.

HIV stigma can undermine ART adherence [12-17] and is a frequently reported barrier to adherence in sub-Saharan Africa [13]. HIV stigma is common in both Zambia and South Africa, with over 35% of PLHIV reporting some type of stigma [18]. Whilst ART adherence is consistently found to be worse among individuals experiencing stigma than among those who do not [19-25], a 2013 review concluded that all but one study was at risk of bias, and most had not used validated exposure or outcome measures [19]. Currently, data come mostly from facility-based or purposively sampled populations, and there is heterogeneity in the measurement of both ART adherence and HIV stigma.
We analysed baseline data from the HPTN 071 (PopART) trial [26,27] to explore the association between HIV stigma and ART adherence for adults with HIV in a random population sample from 21 urban and peri-urban communities in Zambia and the Western Cape of South Africa. Data were collected between 2013 and 2015, after more than 10 years of scale-up of HIV treatment services and ART in both countries. We explored these associations among individuals who started ART prior to the implementation of the PopART universal test and treat (UTT) interventions.

Methods

HPTN 071 (PopART) was a cluster-randomised trial conducted in Zambia and South Africa to assess the impact of a combination of HIV prevention interventions, including household-based HIV testing and an offer of universal ART initiation regardless of CD4 count or clinical stage for those testing HIV-positive, on HIV infection rates. Twenty-one urban communities were purposively selected for inclusion in the trial if they had a health facility offering HIV and TB services, high HIV prevalence and a population of >20 000. In each country, study communities were matched in triplets based on HIV prevalence and geographic proximity and then randomised to one of three trial arms [26,27]. Between November 2013 and March 2015, approximately 2000 individuals were enrolled in each study community as a 'population cohort' to assess the effect of trial interventions on primary and secondary outcomes. From a simple random sample of households, household members were enumerated and one adult (18-44 years) per household was randomly selected for inclusion in the cohort. Selected adults were asked for consent to enrol in the study and participate in a baseline survey and three follow-up surveys. For those giving consent, a venous blood sample was taken and analysed in-country using a single fourth-generation serologic assay. A second fourth-generation assay was used to confirm HIV-positive results, and any discrepancies were tested with additional assays to confirm HIV status. The baseline survey was conducted using face-to-face interviewer-administered questionnaires, with data collected on electronic devices. Participants were asked about their HIV status and, if they were happy to do so, to share the results of their last HIV test. All participants were offered an on-the-spot rapid HIV test.

Our analysis was restricted to individuals who self-reported living with HIV, with confirmation from the laboratory HIV testing. Among this group, individuals were included if they reported ever starting ART before 1 January 2014. We excluded participants if they had no information on the year of starting ART or reported starting ART for the prevention of mother-to-child transmission of HIV (PMTCT) but were no longer taking it, as this may have been due to earlier initiation guidelines and not reflect non-adherence. We excluded respondents if they had incomplete outcome data or missing data on all stigma questions. We created a primary outcome variable from three survey questions on ART adherence. We defined poor adherence as 'respondents self-reporting that they had ever started ART but were not currently taking ART, or currently taking ART but had either stopped in the past 12 months, or missed pills in the past seven days'.
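As a concrete illustration of this composite outcome, the sketch below derives the binary poor-adherence indicator from the three self-report questions. It is a minimal example, not the trial's analysis code, and the column names are hypothetical.

```python
import pandas as pd

# Hypothetical survey responses (1 = yes, 0 = no); column names are assumptions.
df = pd.DataFrame({
    "ever_art":    [1, 1, 1, 1],
    "current_art": [1, 1, 0, 1],
    "stopped_12m": [0, 1, 0, 0],
    "missed_7d":   [0, 0, 0, 1],
})

# Poor adherence: ever started ART but (a) not currently taking it, or
# (b) stopped in the past 12 months, or (c) missed pills in the past 7 days.
df["poor_adherence"] = (
    (df["ever_art"] == 1)
    & ((df["current_art"] == 0) | (df["stopped_12m"] == 1) | (df["missed_7d"] == 1))
).astype(int)

print(df["poor_adherence"].tolist())  # [0, 1, 1, 1]
```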
To explore whether our findings were sensitive to our primary definition of adherence, we looked at a secondary outcome, restricting our definition to those reporting they were currently taking ART but had missed taking pills in the previous seven days. Both outcome variables were binary.

We used 11 survey questions on HIV stigma to generate composite 'yes/no' binary variables for experienced community stigma, experienced health setting stigma and current internalised stigma. Composite variables were only generated for participants responding to all stigma questions contributing to that variable. Responses on internalised stigma were given on a 4-point Likert scale (0 = strongly disagree, 1 = disagree, 2 = agree and 3 = strongly agree) and later aggregated for each question (0/1 = disagree; 2/3 = agree). Questions on community and health setting stigma used pre-coded response categories capturing the frequency of experiences during the last year (0 = never, 1 = once, 2 = a few times, 3 = often and 4 = not applicable because no one knows my status ('never disclosed')). Those responding 'never' or 'never disclosed' were categorised as never experiencing either community or health setting stigma. To create the three variables, respondents who disagreed or never experienced stigma on all the questions related to that variable were grouped as 'never experiencing' that type of stigma. Those agreeing or experiencing stigma on ≥1 question were categorised as 'ever experiencing' that type of stigma [18]. Our stigma measures were aligned with standardised measures approved by the UNAIDS monitoring and evaluation reference group (MERG) in 2014 [18,28,29].

A priori knowledge of risk factors for ART adherence informed decisions on the other explanatory variables to explore for inclusion in the analysis. We considered demographic variables (country, community/study triplet, gender, age and marital status), socio-economic factors (education, wealth, employment status and food security), mobility factors (nights spent away from home), behavioural factors (alcohol and drug use) and HIV-specific factors (year of HIV diagnosis, time on ART, hiding pills (responding to the question 'Have you ever hidden your ART pills so that others couldn't see them'), HIV status disclosure and reason for starting ART). For alcohol use, we categorised respondents using scores from the WHO Alcohol Use Disorders Identification Test (AUDIT) [30], and for wealth, we used quintiles derived using principal component analysis. The group identified as at lowest risk of the outcome was used as the reference category. Where this was unclear, we used the group with the largest numbers.

We developed a conceptual framework (Figure 1) to structure our analysis using a hierarchical approach [31] based on previous work conceptualising HIV stigma [32] and associations between stigma and ART adherence [19]. We conducted analyses for the study population and then separately for each country. We first described our study participants. Second, we described the distribution of ART adherence, HIV stigma and other explanatory variables. Third, we used logistic regression to estimate unadjusted associations between HIV stigma and ART adherence. We also estimated unadjusted associations between the other covariates and ART adherence and did the same for HIV stigma to understand potential confounding factors and identify variables to consider further in multivariable models.
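The composite stigma variables can be illustrated in the same spirit. The sketch below is a simplified reading of the coding rules described above; the item names, the number of items per domain and the data layout (a DataFrame `df` holding the raw responses) are all hypothetical.

```python
import pandas as pd

community_items = ["comm_q1", "comm_q2", "comm_q3"]        # 0-4 frequency scale
internal_items = ["int_q1", "int_q2", "int_q3", "int_q4"]  # 0-3 Likert scale

def ever_experienced(row, items):
    # Composite only defined when every contributing question was answered.
    if row[items].isna().any():
        return pd.NA
    # Frequency codes 1-3 = experienced; 0 ('never') and 4 ('never
    # disclosed') both count as never experiencing stigma.
    return int(any(v in (1, 2, 3) for v in row[items]))

def internalised(row):
    if row[internal_items].isna().any():
        return pd.NA
    # Likert responses aggregated per question: 0/1 = disagree, 2/3 = agree;
    # agreeing on >= 1 question counts as current internalised stigma.
    return int(any(v >= 2 for v in row[internal_items]))

df["community_stigma"] = df.apply(ever_experienced, axis=1, items=community_items)
df["internalised_stigma"] = df.apply(internalised, axis=1)
```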
We conducted an analysis of the association between HIV stigma and ART adherence, stratified by the other explanatory variables that were considered a priori confounders and also by those showing evidence of associations (P < 0.05) with adherence in our earlier unadjusted analysis. Last, we conducted an adjusted analysis using multivariable logistic regression. We included groups of variables in our models in the stages identified in our conceptual framework, in order of their proximity to the outcome. Variables were included if they were considered potential confounders, either a priori and/or showing an unadjusted association (P < 0.05) with the outcome. We excluded variables from our model if they were perceived to be on the causal pathway between stigma and ART adherence. To control for confounding by community-level factors, we adjusted for study community (in Zambia) and study triplet (in South Africa) in all multivariable analyses. Study triplet was used instead of community in South Africa due to small numbers in the study population for several communities. The same series of models was built for each of the three stigma variables. We considered internalised stigma proximal to ART adherence, and community and health setting stigma distal, adjusting a final set of models for each of the experienced stigmas (health setting and community) to account for this. We ran our models again with our restricted outcome definition (only those reporting they were currently taking ART but had missed taking pills in the previous seven days).

Written informed consent was obtained from all respondents enrolled in the population cohort. Ethics approval was obtained for the HPTN 071 (PopART) trial from the University of Zambia, Stellenbosch University and the London School of Hygiene and Tropical Medicine.

Results

Our analysis initially included 2020 PLHIV (Zambia n = 1099; South Africa n = 921), after the exclusion of 12 respondents with no data on any of the adherence variables (Figure 2). The number of individuals per community ranged from three to 250, with a higher proportion of women (88.6%) than men (11.4%). 76.6% of the study population were over the age of 30, and 6.3% were aged 18-24 years. Approximately half the population (49%) were married or living as married, but with a higher proportion in Zambia (62.3%) than in South Africa (33.1%). Upper secondary school or university education was reached by 45.5% of respondents, although this proportion was notably higher in South Africa (70.1%) than Zambia (24.8%). Similar proportions of the study population were diagnosed with HIV each year, from before 2007 up until 2012. Only 6.4% of respondents were initiated on ART prior to 2005, with >60% starting ART after 2010 in both countries. Disclosure of HIV status (to friends, a religious leader, a health worker, family or a partner) was common (Table 1).

Poor adherence to ART was reported by 320 (15.8%) respondents, with similar country-specific findings (Zambia n = 186, 16.9%; South Africa n = 134, 14.5%). Most of those categorised as poor adherers reported missing pills in the past seven days (n = 244). Thirty-two respondents reported that they were not currently taking ART, and 80 respondents reported stopping in the previous 12 months. Poor adherence was slightly higher for men (18.7%) than women (15.5%), with similar distributions in each country (Table 2).
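The staged adjustment strategy described in the Methods maps directly onto nested logistic regression models. The sketch below, using statsmodels, is an illustration under assumed variable names rather than the trial's actual code; adjusted odds ratios come from exponentiating the fitted coefficients.

```python
import numpy as np
import statsmodels.formula.api as smf

# Hypothetical variable names; `df` holds the analysis dataset. Groups of
# confounders are added in stages, ordered by proximity to the outcome.
base = ("poor_adherence ~ community_stigma + C(community) + C(gender)"
        " + C(age_group)")
full = base + " + C(education) + C(alcohol_audit) + C(mobility)"

m1 = smf.logit(base, data=df).fit(disp=0)   # stage 1: demographics
m2 = smf.logit(full, data=df).fit(disp=0)   # final stage: all confounders

# Adjusted odds ratio and 95% CI for the stigma term in the final model.
beta = m2.params["community_stigma"]
lo, hi = m2.conf_int().loc["community_stigma"]
print(f"aOR = {np.exp(beta):.2f} (95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")
```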
In the total study population, poor ART adherence was associated with explanatory variables including community/triplet (P < 0.001), higher alcohol consumption (P < 0.001), lower educational attainment (P = 0.04), increased mobility (P < 0.001) and hiding pills (P = 0.03). Of these, community/triplet showed strong evidence of an association with all three stigma variables (all P < 0.001). Higher alcohol consumption was associated with internalised stigma (P < 0.001), and hiding pills was associated with both internalised and health setting stigma (P < 0.001 and P = 0.02, respectively), but there was no evidence of an association with experienced community stigma (P = 0.73). These associations differed slightly in each country; for example, there was evidence that education was associated with poor adherence in South Africa but not Zambia, and mobility in Zambia but not South Africa (Table 3). Stigma experienced in the community was more likely to be reported by those who had disclosed their HIV status to their family (OR 1.42, 95% CI 1.08-1.87, P = 0.01) or friends (OR 1.38, 95% CI 1.05-1.81, P = 0.02). There was little evidence that food security was associated with ART adherence (OR 1.03, 95% CI 0.75-1.42, P = 0.83), but strong evidence that those experiencing HIV stigma were more likely to be food insecure than those who did not (community, OR 1.88, 95% CI 1.53-2.32, P < 0.001; internalised, OR 1.72, 95% CI 1.38-2.14, P < 0.001; and health setting, P = 0.02).

Multivariable analysis was restricted to individuals with complete data on all variables (total n = 1888; Zambia n = 1034, South Africa n = 854). After adjusting for the potential confounding effects of demographic, socio-economic, mobility and behavioural factors and for the other domains of stigma in line with our conceptual framework, there remained strong evidence of an association between experienced community stigma and ART adherence (aOR 1.63, 95% CI 1.21-2.19, P = 0.001) (Table 4). In Zambia, there was strong evidence of an association between stigma experienced in the community and poor adherence (aOR 2.03, 95% CI 1.40-2.94, P < 0.001), weak evidence of an association between internalised stigma and poor adherence (aOR 1.44, 95% CI 0.97-2.14, P = 0.09) and no evidence of an association between health setting stigma and poor adherence (aOR 0.80, 95% CI 0.39-1.65, P = 0.54) (Table 4). In South Africa, there was a stronger association between health setting stigma and ART adherence than in Zambia, although the evidence for this association was weak (aOR 1.66, 95% CI 0.79-3.47, P = 0.18). For community and internalised stigma, odds ratios were close to 1, and there was no evidence of associations with either (Table 4). Although the odds of poor adherence for those reporting stigma experienced in the community were different in each country (aOR 2.03 in Zambia vs aOR 1.01 in South Africa), there was only weak evidence that these associations were different (P = 0.08). There was no evidence that the associations for health setting stigma and ART adherence (P = 0.38) and internalised stigma and ART adherence (P = 0.57) differed between Zambia and South Africa.

We conducted further analysis, restricting our outcome to individuals reporting they were currently on ART (n = 1861) and defining non-adherence as missing pills in the previous 7 days.
Findings from our adjusted models for the whole study population were similar to those for our primary definition of ART adherence (community stigma aOR 1.60, 95% CI 1.15-2.22, P = 0.005; internalised stigma aOR 1.28, 95% CI 0.90-1.81, P = 0.17; health setting stigma aOR 0.86, 95% CI 0.48-1.53, P = 0.60) (Table S1).

Discussion

Among a large population sample of PLHIV reporting ever taking ART in the 21 communities included in the HPTN 071 (PopART) study in Zambia and South Africa, 16% reported one or more of missing pills in the previous seven days (12%), currently taking ART but having stopped during the previous 12 months (4%), or no longer taking ART (2%). Approximately 25% reported ever experiencing community stigma, 20% internalised stigma and 8% health setting stigma. PLHIV reporting stigma experienced in the community were more than 1.5 times more likely to report poor ART adherence than those who did not. In Zambia, participants reporting experiences of community stigma were twice as likely to report poor adherence as those who did not, but we saw no such association in South Africa. Although there was only weak evidence that these associations were different in each country, it is also possible that they reflect the different contexts. HIV stigma and poor adherence were both more common in Zambian than South African study communities. In South Africa, a strong history of community-led HIV treatment advocacy and awareness could have mitigated HIV stigma and its effect on ART adherence. Health setting stigma was less frequently reported and may play a less important role in adherence because people generally take their pills away from a health facility. In both countries, the association between internalised stigma and ART adherence was partly explained after adjustments were made for experienced stigma in community or health settings. We hypothesised that stigma experienced in the community may itself cause internalised stigma.

Our findings are similar to previous cross-sectional studies looking at stigma and ART adherence [19-25], yet direct comparisons are challenging due to variation in the specific measures used to look at these concepts. Variation also exists in the statistical adjustments made when investigating these associations. We made our own theoretical assumptions on factors to include in our multivariable models. Alcohol was considered a potential confounder, as it has been in other studies exploring these associations [19,22,33]. Some studies have, however, identified alcohol as a means of coping with HIV status [19], compromising the ability to adhere to treatment. Similarly, wealth was treated as a confounding factor in our analysis, but the relationship between economic security and HIV-related stigma is likely to be more complicated and potentially 'mutually reinforcing' [19]. We did not treat hiding pills and HIV status disclosure as confounders in our multivariable models, as we suggest these variables lie on the causal pathway between experience of stigma and ART adherence. Including either of these variables in our models made little difference to the associations we saw between stigma and ART adherence. Hiding pills has been frequently reported in Zambia and South Africa [34] and, with strong unadjusted associations seen in this study, would be useful to explore in further work on stigma related to HIV treatment.
Ours was a large study, and we used validated measures of HIV stigma [29] and measured a large number of characteristics, providing the opportunity for a thorough assessment of potential confounding. We looked at the associations of three stigma 'domains' with adherence to ART, giving an opportunity to identify the specific areas of stigma that had the strongest associations with ART adherence. We interpreted our findings based on a conceptual framework that considered some of the latest thinking on HIV stigma, enabling wider comparison and contributing to existing work in this field. A composite measure of ART adherence was used to ensure inclusion of poor adherence over a year, in line with our stigma measures. In a systematic review of self-report measures, seven-day recall was most commonly used and considered effective due to the inclusion of a shorter time period whilst covering a weekend (where adherence is often lower), but longer recall was also considered important for allowing greater variability in adherence [35]. We acknowledge that our composite adherence outcome could measure slightly different concepts, but we tested this using a restricted outcome in our analysis and found similar results. There were relatively few missing data.

There were also limitations. Our study communities were purposively sampled, and although we consider our findings generalisable to socio-economically disadvantaged, peri-urban communities with high HIV prevalence in Zambia and the Western Cape of South Africa [27,36], the generalisability of our findings to other sub-Saharan African settings may be limited. The greater proportion of women in our study population was reflective of the overall population cohort and the higher HIV prevalence among women (26%) than men (12%) [27], rather than a selection bias among individuals who had ever taken ART. Yet, this disparity limits the generalisability of our findings to men, who in previous research have shown worse ART adherence than women [15,37]. Our analysis excluded individuals who were not aware of or not willing to report their HIV status and those who reported no date for starting ART. Experiences of stigma may have been different among those not willing to disclose their HIV status to our research team, which may have led to an underestimation of HIV stigma and of its association with ART adherence. Underreporting of poor ART adherence was possible because it is contrary to clinical guidance. However, the extent of underreporting to our research team was unlikely to differ according to an individual's experience of stigma, and so it is unlikely to have introduced bias to our findings. Our finding of approximately 84% adherence is compatible with viral suppression data on a random subsample of individuals who were HIV-positive at the time of the baseline survey; these data indicated that approximately 90% of HIV-positive individuals who were taking ART were virally suppressed [27]. Other factors also relied on self-report and were potentially prone to either under- or over-reporting (e.g. alcohol consumption and wealth). Stigma questions specifically relating to HIV treatment [38] may have given a more specific indication of mechanisms for non-adherence and would be useful for consideration in future work.
Conclusions

Our analysis has provided additional evidence that HIV-related stigma is associated with poor ART adherence and has identified the relative importance of the different types and components of stigma among a large sample of PLHIV across 21 communities in Zambia and South Africa. If we are to reach viral suppression among 90% of people on ART by 2020 and 95% by 2030, it will be important to learn whether interventions that reduce HIV stigma could also improve lifelong adherence to ART.
2020-08-05T13:06:25.955Z
2020-08-03T00:00:00.000
{ "year": 2020, "sha1": "7148eb685915fdea4d571106aa806bbd527dba57", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/tmi.13473", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "2777638912aba9a9cea26064b672f7b416423bff", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
168635378
pes2o/s2orc
v3-fos-license
Study on Behavior Value Analysis and Decision Methodology of Grid Corporations in China

Based on the business environment and value characteristics of grid corporations in China, this article analyzes their behavior value factors and then divides their behavior into three categories: behavior affecting only the current Economic Value Added (EVA), behavior affecting both the current and future EVA, and behavior affecting only the future EVA. Finally, the article studies such corporations' behavior value decision making based upon the analyses and classifications above.

Behavior value decision method of grid corporations

Grid corporations should establish a behavior control system targeting sustainable EVA growth. Such a system can clarify the respective value contributions of the management and the staff to the enterprise, check whether decision making deviates from the VBM target, and guide the direction of corporate behavior. Corresponding to the behavior value factors, the behavior value decision method of grid corporations consists of three parts: the decision method for behavior affecting only the current EVA, for behavior affecting both the current and future EVA, and for behavior affecting only the future EVA.

The behavior value decision method affecting the current EVA only

Specifically, this is made up of the decision methods for income management, cost management and current assets management.

The behavior value decision method of income management. The electricity sales business (electricity sales income) reflects the company's operating scale and industry status, determined by electricity sales volume, electricity sales structure and the average unit selling price. After decomposing the revenue drivers, we can find the associated indicators supporting them, and the management can analyze existing revenue through these indicators under VBM.

Electricity purchasing cost. The business processes of purchasing and selling electricity jointly determine the gross profit level. In analyzing electricity purchasing cost, we focus on the quantity and unit price of electricity purchased from different power suppliers; the purpose of such analysis is to clarify the impact of the purchasing structure and unit prices on the total purchasing cost. After decomposing the cost drivers, we can find the associated indicators supporting them, and the management can analyze the existing cost through these indicators under VBM [1].

Electricity transmission and distribution cost. Transmission and distribution cost is the cost incurred by grid enterprises in the transmission and distribution sectors to deliver electrical energy. It includes the cost of fixed assets and grid operating costs. Other costs of grid corporations are costs not directly related to the volume of business, including office expenses, travel, utilities and vehicle usage fees. These costs result in an outflow of economic benefits and directly reduce the EVA index. However, they are necessary to maintain the regular operation of the company and thus support the current inflow of economic benefits. Therefore, we cannot conclude that they are of no value; the company should instead control such costs carefully through quota management.
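To make the driver decomposition concrete, the following sketch computes sales revenue, purchasing cost and gross profit from per-segment volumes and unit prices. All figures and names are hypothetical, and the decomposition is a simplified reading of the indicator analysis described above.

```python
# Hypothetical drivers: (volume, average unit price), consistent units assumed.
sales = {
    "large_industrial": (1200.0, 0.60),
    "commercial":       (800.0, 0.75),
    "residential":      (500.0, 0.50),
}
purchases = {
    "supplier_A": (1500.0, 0.35),
    "supplier_B": (1000.0, 0.40),
}

# Revenue and purchasing cost decompose into volume x unit price per segment,
# so each driver's contribution to gross profit is explicit.
revenue = sum(q * p for q, p in sales.values())
cost = sum(q * p for q, p in purchases.values())
print(f"revenue={revenue:.1f}, cost={cost:.1f}, gross profit={revenue - cost:.1f}")
```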
The behavior value decision method of current assets management

Current assets include cash and cash equivalents, inventory and accounts receivable, belonging to the scope of the overall enterprise value stream. The quota management system is a good option for current assets when analyzing specific behavior under VBM: it can determine the optimal amount of current assets to hold, strengthen turnover and improve utilization efficiency [2].

Cash and cash equivalents management. Companies hold cash and cash equivalents for three needs: transaction demand, precautionary demand and speculative demand. Under the premise of guaranteeing regular operation, stronger cash management can make full use of surplus cash to capture potential investment opportunities. The question, then, is how to determine the optimal cash holding. In theory, there are three models: the cost analysis model, the inventory model and the random model.

1) The cost analysis model holds that the cost of holding cash includes opportunity cost, management cost and shortage cost. The opportunity cost is directly proportional to the amount of cash held. There is no significant relationship between the management cost and cash holdings (over a certain range of holdings it is relatively fixed). The shortage cost is inversely proportional to the amount of cash held. Therefore, there is an optimal cash holding that minimizes the sum of the three kinds of cost.

2) The inventory model holds that holding cash incurs opportunity cost (increasing with cash held) and transaction cost (decreasing with cash held). The holding that minimizes the sum of these two is the optimal amount of cash for the company.

3) The random model holds that cash demand is difficult to predict, so the management determines a lower and an upper limit of cash holding based on experience. When the cash demand exceeds the upper limit, the management converts securities into cash to make up the insufficiency; when the cash demand falls below the lower limit, the management invests the surplus in securities for a better return.

Inventory management. Inventory is a necessary part of maintaining the regular operation of grid corporations, generally including spare parts for repair, low-value consumables and so on. Under VBM, inventory management can be analyzed in three aspects: the claim management system for existing inventories, the utilization efficiency of the inventories needed, and the reserve quota.

1) The claim management system should be used for existing inventories. The management ought to claim inventories in detail, distinguish the inventories really needed from the others, and deal with unnecessary ones in time.

2) For the utilization efficiency of the inventories needed, the management targets the turnover ratio, checking whether the liquidity and the funds occupied are sensible. Based on actual usage, the quantities of inventory to purchase can be set at reasonable levels.

3) Given the maintenance of normal operation, the reserve quota is the economic order quantity. As the order quantity changes, ordering cost and storage cost shift in opposite directions; the economic order quantity is the quantity that minimizes the sum of these two kinds of cost over a certain period [3], as the formula expresses:

Q* = √(2DK / C),

where D is the demand over the period, K is the ordering cost per order and C is the storage cost per unit per period.
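A worked instance of the economic order quantity is sketched below; the figures are hypothetical. The inventory (Baumol) model for cash mentioned above has the same square-root form, with cash demand in place of inventory demand.

```python
from math import sqrt

def eoq(demand, order_cost, holding_cost):
    """Economic order quantity Q* = sqrt(2*D*K / C): minimizes the sum of
    ordering cost (D/Q * K) and storage cost (Q/2 * C) over the period."""
    return sqrt(2 * demand * order_cost / holding_cost)

# Hypothetical figures: 10,000 spare parts per year, 200 per order placed,
# 4 per unit held per year.
q = eoq(demand=10_000, order_cost=200, holding_cost=4)
print(round(q))  # -> 1000: order 1000 units at a time, i.e. 10 orders a year
```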
Accounts receivable management. Under differentiated pricing by consumer category, the power supply can be divided into large industrial electricity, general industrial electricity, commercial electricity, agricultural electricity and residential electricity. Among them, residential electricity users pay in advance, so no accounts receivable are incurred. For the other categories of users, sales on credit result in accounts receivable. This weakens current assets turnover and thus reduces EVA. Based on accounts aging, the management should implement a responsibility system for the departments in charge of accounts receivable recovery, and evaluate such departments on quota control and recovery speed. Quota control of accounts receivable can accelerate capital returns and reduce current assets occupancy, while enhancing the recovery rate can improve accounts receivable turnover; both contribute directly to EVA growth. According to the control of quota and turnover, the accounts receivable budget can be reasonably made for the next period.

Analysis and decision methods

For the analysis and decision of behaviors affecting both the current and future EVA, there are basically two methods: the quantitative evaluation method (formula method) for quantifiable projects, and the expert evaluation method for unquantifiable projects.

1) The quantitative evaluation method (formula method). The quantitative evaluation method calculates the economic benefits generated and the costs and expenses incurred by a specific enterprise behavior in each accounting period, thus estimating its impact on EVA in different periods and discounting it at a reasonable rate. If the present value of EVA is greater than the cost, the spending is valuable; otherwise, it damages enterprise value. This method is suitable for the management of existing fixed assets and their overhaul, which is quantifiable. The formula for calculating the present value of EVA is:

PV(EVA) = Σ_{t=1}^{n} EVA_t / (1 + r)^t,

where EVA_t is the EVA generated in period t, n is the number of periods and r is the discount rate.

2) The expert evaluation method. The expert evaluation method is suitable for unquantifiable behavior, such as R&D expense. Based on qualitative analysis, it uses scoring and other mathematical-statistical techniques to make a comprehensive evaluation of the results. The method relies on the experts' experience and opinions to determine the weight of each index, and obtains satisfactory results through repeated feedback and modification. The specific steps are as follows [4]:

The first step is choosing 9-15 experts with both rich practical experience and a solid theoretical foundation. The second step is to determine the n indicators and then ask the selected experts to give independent weight values for each indicator under certain rules. The third step is to collect the results, calculate the mean and standard deviation of each index weight, and check whether the weights need to be re-determined. The fourth step is to repeat the third step until the difference between each index weight and its average does not exceed the pre-specified criterion, which means the experts' opinions are basically converging. The repeatedly revised average index weights are the result the management wants.
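The convergence loop of the expert evaluation method can be sketched as follows. This is an illustrative reading of the four steps above, not a standard implementation; the scores, tolerance and array layout are hypothetical.

```python
import numpy as np

def aggregate_round(weights, tolerance=0.05):
    """One round of the expert evaluation method: average the experts'
    weights per indicator and flag indicators whose individual weights
    still deviate from the mean beyond the pre-specified criterion.
    `weights` is an (experts x indicators) array."""
    w = np.asarray(weights, dtype=float)
    mean = w.mean(axis=0)
    needs_rescoring = (np.abs(w - mean) > tolerance).any(axis=0)
    return mean, needs_rescoring

# Hypothetical weights from 3 experts over 4 indicators (each row sums to 1).
scores = [[0.40, 0.30, 0.20, 0.10],
          [0.35, 0.30, 0.25, 0.10],
          [0.45, 0.25, 0.20, 0.10]]
mean, redo = aggregate_round(scores)
print(mean, redo)  # re-score (and repeat) only the flagged indicators
```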
Existing fixed assets control method - claim system

Making up to 85% of total assets, fixed assets play a significant role in the operation of grid corporations. For existing fixed assets, the claim system is a sensible choice: it is a reliable way to identify their real value and clarify the responsibility of the corresponding departments. Specifically, each department claims the fixed assets it needs, classified into production equipment within the power production management system, production equipment outside the power production management system, and non-production equipment. Comparing the difference between existing assets and claimed assets, each department can determine its capital occupancy and utilization efficiency; the unclaimed capital, with a proper depreciation rate applied, measures the impact on EVA.

The behavior value decision method of capital structure

Capital structure determines the capital cost, directly impacting EVA. The optimal design of the capital structure should balance development and efficiency, comprehensively considering VBM factors. By examining the debt structure, liquidity, capital cost and financial leverage, the existing problems of grid corporations' capital structure can be identified [5].

The behavior value decision method affecting the future EVA only

For issues affecting the future growth value, grid corporations target fixed asset investment (including the management of projects under construction) across four stages: feasibility study, construction, operation and economic benefit evaluation.

Behavior value analysis of feasibility study

Studying the investment behavior of grid corporations, we notice that the value consideration at project establishment is not comprehensive enough and does not show the value contribution of a given project. In the existing feasibility study reports, the management simply focuses on the process inputs of construction, while ignoring the feasibility of the standards set and the analysis of the value contribution.

1) Projects increasing the economic return. For projects with better economic return, the expected increase in electricity sales and the corresponding revenue are quantifiable. Based upon the EVA formula, the expected EVA can be calculated from the capital occupied by the fixed assets. Over the expected effective years, the sum of the discounted EVA of each period is the quantifiable value of the fixed asset invested, as the formula below expresses:

U(X) = Σ_{t=1}^{n} EVA_t / (1 + r)^t,

where U(X) is the expected contribution value of project X, EVA_t is the expected EVA in period t, n is the number of expected effective years and r is the discount rate.

3) Projects improving the management level. It is difficult to measure the value of a project improving the management level mathematically. Firstly, such projects cannot directly contribute to the growth in economic benefits: in the actual production process, the improvement in management level from such a project can hardly be quantified in economic benefits, and companies have little desire to collect and process the relevant information.
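The discounting in the U(X) formula above is the same computation as the present value of EVA in the quantitative evaluation method; a minimal sketch with hypothetical figures:

```python
def discounted_eva(evas, r):
    """Sum of per-period EVA discounted at rate r: PV(EVA), or U(X) when the
    stream is a project's expected EVA over its effective years."""
    return sum(eva / (1 + r) ** t for t, eva in enumerate(evas, start=1))

# Hypothetical project X: five effective years of expected EVA, 8% discount rate.
u_x = discounted_eva([120.0, 150.0, 150.0, 140.0, 130.0], r=0.08)
print(round(u_x, 1))  # compare against the project's investment cost
```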
1) Duration management. Extension of time inevitably leads to additional costs, and assets under construction do not produce value; delay postpones the output of future EVA and is therefore negatively related to EVA. It is necessary to set a clear duration requirement during the feasibility study and to follow the schedule strictly during construction.

2) Cost control. Cost and expense are incremental items of the capital invested, negatively related to EVA. These aspects should be considered: pre-project cost calculation, monthly cost carry-over and project settlement. As the main control factor, cost management plays a significant role throughout the power grid project.

3) Quality control. There is a close interdependence between construction cost and quality level. The quality cost consists of control cost and failure cost. Control cost belongs to the quality assurance fee and is proportional to the quality level, while failure cost is simply loss, negatively related to the quality level. The quality goal should be reasonably determined according to the standards of the project contracts.

When a project goes into operation, it creates value, and its value-creating ability reflects the scale and structure of the capital invested and its sustainable development. Based on the classification of investment projects and the uncontrollable macro factors, the management should target the enhancement of profitability, risk control ability and asset management capability through streamlining, life-cycle management and intensification.

Behavior value analysis of evaluation

Although a project evaluation system already exists, it is not comprehensive enough, because the feasibility study has not been closely tied to the value target indicators. In the evaluation, together with the stages analyzed above, the management should pay more attention to the project's impact on the future growth value of the grid corporation and to the difference between the expected contribution value and the actual one.

Conclusion

Enterprise value creation results from enterprise behavior in the context of specific markets and resources. Generally, enterprise behavior consists of production, sales, cost management, asset management, capital cost management and so on. In terms of different value drivers, enterprise behavior, or business activities, can be reclassified. This paper considers that, owing to their different impacts on the current and future EVA, enterprise behaviors can be divided into three categories: behaviors affecting only the current EVA, behaviors affecting both the current and future EVA, and behaviors affecting only the future EVA. The enterprise should establish a behavior control system targeting sustainable EVA growth. Such a system can clarify the value contributions of the management and the staff to the enterprise, check whether decision making deviates from the VBM target, and guide the direction of corporate behavior. On this basis, this paper has built a behavior value analysis and decision methodology for grid corporations, covering behaviors affecting the current EVA, the future EVA and both.
2019-05-30T13:17:37.478Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "adc9f9c0ff72fb7fc16bb328253ff9efab85c1b0", "oa_license": "CCBY", "oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2017/14/matecconf_gcmm2017_05044.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f896c6d050d029236efef93c9f91a6890f86036b", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
23525031
pes2o/s2orc
v3-fos-license
Orai1 Mediates the Interaction between STIM1 and hTRPC1 and Regulates the Mode of Activation of hTRPC1-forming Ca2+ Channels*

Orai1 and hTRPC1 have been presented as essential components of store-operated channels mediating highly Ca2+ selective ICRAC and relatively Ca2+ selective ISOC, respectively. STIM1 has been proposed to communicate the Ca2+ content of the intracellular Ca2+ stores to the plasma membrane store-operated Ca2+ channels. Here we present evidence for the dynamic interaction between endogenously expressed Orai1 and both STIM1 and hTRPC1, regulated by depletion of the intracellular Ca2+ stores, using the pharmacological tools thapsigargin plus ionomycin, or by the physiological agonist thrombin, independently of extracellular Ca2+. In addition we report that Orai1 mediates the communication between STIM1 and hTRPC1, which is essential for the mode of activation of hTRPC1-forming Ca2+-permeable channels. Electrotransjection of cells with anti-Orai1 antibody, directed toward the C-terminal region that mediates the interaction with STIM1, and stabilization of an actin cortical barrier with jasplakinolide prevented the interaction between STIM1 and hTRPC1. Under these conditions hTRPC1 was no longer involved in store-operated calcium entry but in diacylglycerol-activated non-capacitative Ca2+ entry. These findings support the functional role of the STIM1-Orai1-hTRPC1 complex in the activation of store-operated Ca2+ entry.

Store-operated calcium entry (SOCE), a process controlled by the filling state of the intracellular Ca2+ stores (1), is a major mechanism for Ca2+ influx in non-excitable cells. Since SOCE was first proposed two decades ago, many studies have been devoted to the identification of the mechanisms that communicate the Ca2+ stores with the plasma membrane (PM) channels, as well as the nature of store-operated Ca2+ (SOC) channels. The first identified and best-characterized store-operated current is ICRAC, but a number of other SOC currents activated by Ca2+ store depletion have also been described (2).
The discovery of mammalian homologues of the Drosophila transient receptor potential (TRP) channel proteins has focused attention on TRP channels, especially the canonical TRP (TRPC) channels, as candidates for the conduction of SOCE (3-5), and a functional coupling between several TRPCs and IP3 receptor isoforms (IP3Rs) has been demonstrated in transfected cells and cells naturally expressing TRPC proteins (4,6,7). The recent identification of the proteins STIM1 and Orai1 has shed new light on the nature and regulation of SOC channels. Orai1 (also named CRACM1, for Ca2+ release-activated current (CRAC) modulator) has been proposed to form the pore of the channel mediating ICRAC (8). The involvement of Orai1 in ICRAC was identified by gene mapping in patients with hereditary severe combined immune deficiency syndrome attributed to loss of ICRAC (9,10). Orai1 has been demonstrated to form multimeric ion-channel complexes in the PM (11). The channel formed by Orai1 has been reported to be regulated by Ca2+ store depletion with the participation of the intraluminal Ca2+ sensor, stromal interaction molecule 1 (STIM1), a protein that has recently been presented as a messenger linking the endoplasmic reticulum (ER) to PM Ca2+ channels. STIM1 is a Ca2+-binding protein located mainly in the ER membrane, with a single transmembrane region and an EF-hand domain in the NH2 terminus located in the lumen of the ER (12), and might, therefore, function as a Ca2+ sensor in the ER (13,14). Knockdown of STIM1 by RNA interference or functional knockdown of STIM1 by electrotransjection of neutralizing antibodies reduces SOCE in HEK293, HeLa, and Jurkat T cells and platelets (12,13,15) and ICRAC in Jurkat T cells (13). In support of the role of STIM1 in SOCE, mutation of the Ca2+-binding EF-hand domain of STIM1 resulted in constitutive SOC channel activation without any detectable change in the content of the Ca2+ stores (16). The cytoplasmic COOH-terminal domain of STIM1 has been suggested to interact with the NH2 terminus of Orai1, facilitating the Orai1-STIM1 interactions required for the activation of ICRAC (17).

In addition, hTRPC1 has been presented as an essential component of the SOC channels. Heteromeric interactions of TRPC1 with other TRPCs have been reported to lead to the generation of SOC channels with different biophysical properties (18). In human platelets, hTRPC1 forms a complex with hTRPC6, the type II IP3 receptor and SERCA3, activated by depletion of the intracellular Ca2+ stores (19). In addition, a recent study has reported that TRPC1 associates with STIM1 and Orai1 in cultured cells to form a ternary complex that is important for the formation of the SOC channel (20). Orai1 has been shown to confer STIM1-mediated store-operated sensitivity to TRPCs (21); however, it remains unclear whether hTRPC1 interacts directly with STIM1 or through Orai1, and whether the heteromeric Orai1-TRPC1 interaction forms a channel sensitive to store depletion independently of STIM1. In the present study we have investigated the interaction of endogenously expressed Orai1 with STIM1 and hTRPC1 at resting conditions and upon store depletion, either by pharmacological tools or with the physiological agonist thrombin. In addition, we have investigated the role of the STIM1-Orai1 interaction in SOCE and the mode of activation of hTRPC1-forming channels. Our results indicate that Ca2+ store depletion stimulates rapid and transient interaction between Orai1 and both STIM1 and hTRPC1.
Electrotransjection with anti-Orai1 COOH terminus antibody or treatment with jasplakinolide (JP) prevented the interaction of STIM1 with Orai1 and hTRPC1, reduced SOCE, and changed the mode of activation of hTRPC1-forming channels.

Platelet Preparation - Platelet suspensions were prepared as previously described (22), as approved by local ethical committees and in accordance with the Declaration of Helsinki. Briefly, blood was obtained from healthy drug-free volunteers and mixed with one-sixth volume of acid/citrate dextrose anticoagulant containing (in mM): 85 sodium citrate, 78 citric acid, and 111 D-glucose. Platelet-rich plasma was then prepared by centrifugation for 5 min at 700 × g, and aspirin (100 μM) and apyrase (40 μg/ml) were added. Platelets were then collected by centrifugation at 350 × g for 20 min and resuspended in HEPES-buffered saline (HBS), pH 7.45, containing (in mM): 145 NaCl, 10 HEPES, 10 D-glucose, 5 KCl, 1 MgSO4, and supplemented with 0.1% bovine serum albumin and 40 μg/ml apyrase. Cell viability was assessed using calcein and trypan blue. For calcein loading, platelets were incubated for 30 min with 5 μM calcein-AM at 37°C, centrifuged, and the pellet was resuspended in fresh HBS. Fluorescence was recorded from 2-ml aliquots using a Cary Eclipse spectrophotometer (Varian Ltd., Madrid, Spain). Samples were excited at 494 nm and the resulting fluorescence was measured at 535 nm. The results obtained with calcein were confirmed using the trypan blue exclusion technique. 95% of platelets were viable in our preparations.

Measurement of Intracellular Free Calcium Concentration ([Ca2+]i) - Human platelets were loaded with fura-2 by incubation with 2 μM fura-2/AM for 45 min at 37°C. Fluorescence was recorded from 2-ml aliquots of magnetically stirred cellular suspension (2 × 10^8 platelets/ml) at 37°C using a Cary Eclipse spectrophotometer (Varian Ltd.) with excitation wavelengths of 340 and 380 nm and emission at 505 nm. Changes in [Ca2+]i were monitored using the fura-2 340/380 fluorescence ratio and calibrated according to an established method (23). Ca2+ entry was estimated using the integral of the rise in [Ca2+]i for 2.5 min after addition of CaCl2 (22). OAG-induced Ca2+ entry was estimated using the integral of the rise in [Ca2+]i for 2.5 min after addition of OAG in a medium containing 1 mM Ca2+. Ca2+ entry was corrected by subtraction of the [Ca2+]i elevation due to leakage of the indicator or leak Ca2+ entry after the addition of DMSO (the vehicle of TG and OAG). Ca2+ release by TG was estimated using the integral of the rise in [Ca2+]i for 3 min after the addition of the agent (22). Ca2+ entry and release are expressed as nM·s, as previously described (24).

Immunoprecipitation and Western Blotting - The immunoprecipitation and Western blotting were performed as described previously (15). Briefly, 500-μl aliquots of platelet suspension (2 × 10^9 cells/ml) were lysed with an equal volume of RIPA buffer, pH 7.2, containing 316 mM NaCl, 20 mM Tris, 2 mM EGTA, 0.2% SDS, 2% sodium deoxycholate, 2% Triton X-100, 2 mM Na3VO4, 2 mM phenylmethylsulfonyl fluoride, 100 μg/ml leupeptin, and 10 mM benzamidine. Aliquots of platelet lysates (1 ml) were immunoprecipitated by incubation with 2 μg of anti-Orai1 antibody and 25 μl of protein A-agarose overnight at 4°C on a rocking platform. The immunoprecipitates were resolved by 10% SDS-PAGE, and separated proteins were electrophoretically transferred onto nitrocellulose membranes for subsequent probing.
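As an illustration of the ratio calibration and integral estimates in the [Ca2+]i measurements above, here is a minimal sketch. The Grynkiewicz ratio equation and a trapezoidal integral are standard, but the dissociation constant, calibration parameters and function names are assumptions, and the paper's cited calibration method (ref. 23) may differ in detail.

```python
import numpy as np

KD_FURA2 = 224.0  # nM; a commonly assumed fura-2 Kd, not taken from the paper

def ca_from_ratio(R, Rmin, Rmax, sf2_over_sb2):
    """Grynkiewicz ratio equation: [Ca2+] = Kd * (R - Rmin)/(Rmax - R) * (Sf2/Sb2)."""
    return KD_FURA2 * (R - Rmin) / (Rmax - R) * sf2_over_sb2

def entry_integral(t_s, ca_nM, t_add, window_s=150.0):
    """Ca2+ entry (nM*s): trapezoidal integral of the rise in [Ca2+]i above
    the level just before the addition at t_add, over window_s seconds."""
    t = np.asarray(t_s, dtype=float)
    ca = np.asarray(ca_nM, dtype=float)
    baseline = ca[t <= t_add][-1]              # last sample before the addition
    m = (t >= t_add) & (t <= t_add + window_s)
    rise = np.clip(ca[m] - baseline, 0.0, None)
    return float(np.sum((rise[1:] + rise[:-1]) / 2.0 * np.diff(t[m])))
```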
Blots were incubated overnight with 10% (w/v) bovine serum albumin in Tris-buffered saline with 0.1% Tween 20 (TBST) to block residual protein binding sites. Immunodetection of STIM1, hTRPC1, and Orai1 was achieved using the anti-STIM1 antibody diluted 1:250 in TBST for 2 h, the anti-hTRPC1 antibody diluted 1:200 in TBST for 1 h, and the anti-Orai1 antibody diluted 1:1000 in TBST for 1.5 h, respectively. The primary antibody was removed and blots were washed six times for 5 min each with TBST. To detect the primary antibody, blots were incubated for 45 min with horseradish peroxidase-conjugated ovine anti-mouse IgG antibody or horseradish peroxidase-conjugated donkey anti-rabbit IgG antibody diluted 1:10,000 in TBST and then exposed to enhanced chemiluminescence reagents for 4 min. Blots were then exposed to photographic films. The density of bands on the film was measured using scanning densitometry.

Reversible Electroporation Procedure - The platelet suspension was transferred to an electroporation chamber containing antibodies at a final concentration of 2 μg/ml, and the antibodies were transjected according to published methods (15). Reversible electropermeabilization was performed at 4 kV/cm at a setting of 25-microfarad capacitance and was achieved by 7 pulses using a Bio-Rad Gene Pulser Xcell Electroporation System (Bio-Rad). Following electroporation, platelets were incubated with antibodies for an additional 60 min at 37°C, centrifuged at 350 × g for 20 min, and resuspended in HBS prior to the experiments.

Statistical Analysis - Analysis of statistical significance was performed using Student's t test. p < 0.05 was considered to be significant for a difference.

Orai1 Co-immunoprecipitates with hTRPC1 and STIM1 in Human Platelets - Platelets have been shown to endogenously express the hTRPC1 channel in the PM (5), and a functional interaction between STIM1 in the Ca2+ stores and hTRPC1 has been reported to account for the activation of SOCE in these cells (15). We have now investigated the association between hTRPC1 and Orai1 by looking for co-immunoprecipitation from platelet lysates. Immunoprecipitation and subsequent SDS-PAGE and Western blotting were conducted using control platelets and platelets treated in the absence of extracellular Ca2+ (100 μM EGTA was added to the medium) for different periods of time (from 10 to 60 s) with the inhibitor of the sarcoendoplasmic reticulum Ca2+-ATPase (SERCA) TG (1 μM) plus a low concentration of ionomycin (50 nM), to induce extensive depletion of the intracellular stores in platelets (25). After immunoprecipitation with anti-hTRPC1 or anti-Orai1 antibodies, Western blotting revealed the presence of Orai1 in samples from resting platelets. The specificity of the hTRPC1 antibody was tested with the anti-TRPC1 antibody T1E3, which has been shown to be a specific tool in the investigation of mammalian TRPC1 proteins (5,26). We found that treatment with TG + ionomycin increased the association between Orai1 and hTRPC1 in a time-dependent manner, reaching a maximal effect after 30 s of platelet stimulation (Fig. 1A, upper panel; n = 6). Similar results were observed when cells were stimulated with the physiological agonist thrombin (1 unit/ml) in a Ca2+-free medium (100 μM EGTA was added at the time of experiment). Thrombin increased co-immunoprecipitation between Orai1 and hTRPC1 in a time-dependent manner, reaching a maximum after 10 s of stimulation with the agonist (Fig. 1B, upper panel; n = 6).
Western blotting of the same membranes with the antibody used for immunoprecipitation confirmed similar protein content in all lanes (Fig. 1, lower panels). Furthermore, we have explored the association between STIM1 and Orai1 by looking for co-immunoprecipitation from platelet lysates. Immunoprecipitation and subsequent SDS-PAGE and Western blotting were conducted using control platelets and platelets treated in a Ca2+-free medium (100 µM EGTA added) for different periods of time (from 10 to 60 s) with TG (1 µM) and ionomycin (50 nM). Our results indicate that treatment with TG + ionomycin increased the association between Orai1 and STIM1 in a time-dependent manner, reaching a maximal effect after 10 s of platelet stimulation (Fig. 2A, upper panel; n = 6). Similar results were observed when cells were stimulated with thrombin, which increased co-immunoprecipitation between Orai1 and STIM1 in a time-dependent manner, reaching a maximum after 30 s of stimulation with the agonist (Fig. 2B, upper panel; n = 6). Western blotting of the same membranes with the antibody used for immunoprecipitation confirmed similar protein content in all lanes (Fig. 2, lower panels). Our observations, showing an enhanced association of Orai1 with hTRPC1 and STIM1 in response to depletion of the intracellular Ca2+ stores or the physiological agonist thrombin, suggest that the STIM1-Orai1-hTRPC1 ternary complex might be important for the mediation of SOCE in these cells.

Inhibition of Store Depletion-evoked Interaction between STIM1 and hTRPC1 by Electrotransjection with Anti-Orai1 C-terminal Antibody-The amino acid sequence 288-301 of human Orai1 recognized by the anti-Orai1 antibody used is located in the cytosolic COOH-terminal region of Orai1, which has been shown to be essential for the interaction of Orai1 with STIM1 (27).

FIGURE 3. Inhibition of store depletion-evoked interaction between STIM1 and hTRPC1 by electrotransjection with anti-Orai1 C-terminal antibody. A, human platelets were electropermeabilized in a Gene Pulser as described under "Experimental Procedures" and then incubated in the presence of 1 µg/ml anti-Orai1 antibody (α-Orai1) or 1 µg/ml rabbit IgG (r-IgG) for 60 min as indicated. Cells were then stimulated with 1 µM TG + 50 nM ionomycin for 30 s and lysed. Whole cell lysates were immunoprecipitated (IP) in the absence of antibodies but adding protein A-agarose, and immunoprecipitated proteins were analyzed by Western blotting (WB) using anti-Orai1 antibody. These results are representative of four independent experiments. B, platelets (10^9 platelets/ml) were electropermeabilized or left untreated, as indicated, incubated for 60 min at 37°C in the absence of antibodies, and lysed. Whole cell lysates were subjected to Western blotting using anti-actin antibody as described under "Experimental Procedures." Positions of molecular mass markers are shown on the right. Histograms represent the quantification of actin in electropermeabilized and non-electropermeabilized cells. Results are presented as arbitrary optical density units and expressed as mean ± S.E. of six independent experiments. C, human platelets (10^9 platelets/ml) were electropermeabilized and incubated with 1 µg/ml rabbit IgG (r-IgG) or 1 µg/ml anti-Orai1 antibody (α-Orai1) for an additional 60 min at 37°C, as indicated. Cells were then incubated for 30 s in the absence or presence of 1 µM TG + 50 nM ionomycin in a Ca2+-free medium (100 µM EGTA was added) and lysed. Whole cell lysates were immunoprecipitated with anti-STIM1 antibody. Immunoprecipitates were analyzed by Western blotting using anti-hTRPC1 antibody (upper panel) and reprobed with anti-STIM1 antibody (lower panel) as described under "Experimental Procedures." Positions of molecular mass markers are shown on the right. These results are representative of six independent experiments.
Since Orai1 has been proposed to mediate the communication between STIM1 and hTRPC1 (21), we have investigated whether the anti-Orai1 antibody, which is directed to the COOH-terminal region, could block the interaction between STIM1 and hTRPC1. To assess this possibility the anti-Orai1 antibody was introduced into platelets using an electropermeabilization technique. Electroporation can be used successfully for transferring antibodies into cells while maintaining the physiological integrity of the cells (15,28,29). Human platelets were reversibly electroporated as described under "Experimental Procedures." The presence of this antibody inside platelets was confirmed in samples from platelets electropermeabilized and incubated with 1 µg/ml of either anti-Orai1 antibody or rabbit IgG, of the same nature as the anti-Orai1 antibody used, by immunoprecipitation without adding any additional antibody and subsequent Western blotting with the anti-Orai1 antibody. As shown in Fig. 3A, Orai1 was clearly detected in cells that had been previously electropermeabilized and incubated with anti-Orai1 antibody, and not in cells incubated with rabbit IgG. Electropermeabilization allowed the anti-Orai1 antibody to enter the cells and immunoprecipitate native Orai1, which was then detected by Western blotting. To further investigate whether reversible electroporation might induce loss of proteins of the size of Orai1 (~45 kDa) (30), we investigated the presence of actin (42 kDa) in electroporated and non-electroporated platelets. As shown in Fig. 3B, the amount of actin detected by Western blotting in electroporated platelets was not significantly smaller than that detected in non-electroporated platelets. Altogether, these findings confirm the efficacy of the electrotransjection and that the amount of Orai1 detected was not modified by treatment with TG + ionomycin for 30 s (Fig. 3A, top panel). As shown in Fig. 3C, interaction between STIM1 and hTRPC1 was abolished in platelets electrotransjected with 1 µg/ml anti-Orai1 antibody (upper panel, third and fourth lanes; p < 0.001; n = 6) compared with platelets electrotransjected with 1 µg/ml rabbit IgG, as detected by immunoprecipitation of cell lysates with the anti-STIM1 antibody followed by Western blotting with anti-hTRPC1 antibody. Reprobing of the same membranes with anti-STIM1 antibody confirmed similar protein loading in all lanes (Fig. 3C, lower panel). We found that electrotransjection of the anti-Orai1 antibody inhibits TG + ionomycin-induced Orai1-STIM1 co-immunoprecipitation by performing immunoprecipitation with the transjected anti-Orai1 antibody (no additional antibodies were added for immunoprecipitation after transjection of anti-Orai1 antibody into cells) followed by Western blotting with the anti-STIM1 antibody (data not shown). These findings were not observed when platelets were electrotransjected with rabbit IgG (data not shown).
These findings suggest that the amino acid sequence recognized by the anti-Orai1 antibody is essential for the interaction of STIM1 and hTRPC1, and blockade of this interaction might impair the function of the STIM1-Orai1-hTRPC1 ternary complex.

Impairment of the Interaction between STIM1 and hTRPC1 Changes the Behavior of hTRPC1-forming Channels from Capacitative to Non-capacitative Channel-We have further investigated whether the anti-Orai1 antibody could affect SOCE in these cells. To assess this issue, the anti-Orai1 antibody was electrotransjected into platelets, followed by depletion of the intracellular Ca2+ stores using TG (200 nM) to activate SOCE. Before the measurement of [Ca2+]i platelets were maintained in a medium containing 200 µM CaCl2, to avoid premature depletion of the stores. At the time of the experiment 250 µM EGTA was added to perform the studies in a Ca2+-free medium. In platelets electrotransjected with rabbit IgG (Fig. 4A), TG evoked a prolonged elevation of [Ca2+]i, due to leakage of Ca2+ from intracellular stores (the integral for 3 min of the rise in [Ca2+]i after the addition of TG was 238 ± 73 nM·s; Fig. 4A, rabbit IgG: Control). Subsequent addition of Ca2+ (1 mM) to the external medium induced a sustained increase in [Ca2+]i, indicative of SOCE. To assess the involvement of hTRPC1 in TG-evoked SOCE we incubated cells for 30 min with 15 µM anti-hTRPC1 antibody, directed toward the sequence 557-571 of human hTRPC1, which is located in the pore-forming region between the fifth transmembrane domain and region VII of hTRPC1 (31). We have previously used this procedure to successfully block hTRPC1 channel function (5,32). Incubation with the anti-hTRPC1 antibody significantly reduced TG-evoked SOCE without having any effect on TG-induced Ca2+ release (TG-induced Ca2+ release and entry, estimated as the integrals of the rise in [Ca2+]i after the addition of TG or CaCl2, were 252 ± 51 and 441 ± 72 nM·s, respectively; Fig. 4A, rabbit IgG: α-hTRPC1; p < 0.05). These findings confirm the role of hTRPC1 in SOCE. In platelets electrotransjected with anti-Orai1 antibody and not incubated with anti-hTRPC1 antibody, TG-induced Ca2+ entry was significantly reduced compared with cells electrotransjected with rabbit IgG (the integrals of the rise in [Ca2+]i after the addition of TG or CaCl2 were 212 ± 27 and 297 ± 34 nM·s, respectively; Fig. 4B, α-Orai1: Control; p < 0.05). Interestingly, incubation with anti-hTRPC1 antibody did not significantly modify either Ca2+ release or entry induced by TG in platelets electrotransjected with anti-Orai1 antibody (the integrals of the rise in [Ca2+]i after the addition of TG or CaCl2 were 204 ± 23 and 343 ± 58 nM·s, respectively; Fig. 4B, α-Orai1: α-hTRPC1). As mentioned under "Experimental Procedures," the integrals were corrected by subtraction of the elevation in [Ca2+]i observed after the addition of 1 mM Ca2+ in cells treated with vehicle (DMSO) instead of TG (Fig. 4; leak Ca2+ entry). These findings indicate that hTRPC1 is not involved in SOCE in platelets electroporated with anti-Orai1 antibody. Because some Ca2+ entry was still detectable in platelets electrotransjected with anti-Orai1 antibody, our results indicate that a TRPC1-Orai1-independent pathway is involved in the remaining Ca2+ entry in these cells.
To investigate whether hTRPC1 is involved in non-capacitative Ca2+ entry in platelets electrotransjected with anti-Orai1 antibody we used OAG, a diacylglycerol (DAG) analogue that induces non-capacitative Ca2+ entry in human platelets (32,33). In the absence of extracellular Ca2+, OAG was unable to induce an elevation in [Ca2+]i (data not shown). In platelets electrotransjected with rabbit IgG, OAG (100 µM)-induced Ca2+ entry was not significantly modified by incubation for 30 min with 15 µM anti-hTRPC1 antibody (the integral of the rise in [Ca2+]i for 2.5 min after the addition of OAG was 1509 ± 194 and 1479 ± 193 nM·s in platelets incubated in the absence or presence of anti-hTRPC1 antibody, respectively; Fig. 5A; n = 6). In platelets electrotransjected with anti-Orai1 antibody, treatment with OAG enhanced non-capacitative Ca2+ entry to 137% of control (rabbit IgG: Control versus α-Orai1: Control). Incubation for 30 min with 15 µM anti-hTRPC1 antibody reduced OAG-mediated non-capacitative Ca2+ entry to 89% of control (rabbit IgG: Control versus α-Orai1: α-hTRPC1; the integral of the rise in [Ca2+]i for 2.5 min after the addition of OAG was 2073 ± 252 and 1348 ± 236 nM·s in platelets incubated in the absence and presence of anti-hTRPC1 antibody, respectively; Fig. 5, B and C; n = 6). These findings suggest that impairment of the interaction between hTRPC1 and STIM1 results in a change in behavior of hTRPC1, or heteromeric channels including hTRPC1, from capacitative to non-capacitative Ca2+ entry. Finally, we have investigated the functional relevance of the interaction between STIM1 and hTRPC1 by testing the effect of JP, a cell-permeant peptide isolated from Jaspis johnstoni, which induces polymerization and stabilization of actin filaments (34). JP has been shown to elongate and organize actin filaments exclusively at the cell periphery, and we have previously used it to stabilize the membrane actin cytoskeleton in platelets and prevent the interaction between the ER and PM (35). Treatment of human platelets with 10 µM JP for 30 min at 37°C resulted in a significant inhibition of the interaction between hTRPC1 and STIM1 as detected by co-immunoprecipitation (Fig. 6A). In addition, in the presence of JP, non-capacitative Ca2+ entry stimulated by 100 µM OAG was enhanced, an effect that was reduced by incubation with anti-hTRPC1 antibody to 64% (Fig. 6, B versus C; p < 0.05; n = 6), reaching a value that was comparable with that induced by OAG in the absence of JP (Fig. 6C, JP: α-hTRPC1 versus Fig. 6B, Control).

DISCUSSION

Canonical TRP proteins have been shown to form both store-operated (capacitative) and non-capacitative Ca2+-permeable channels. The latter have been reported to be gated downstream of phospholipase C (PLC) activation by the second messenger DAG, or its analog OAG (36-38), or by other lipid messengers, such as lysophosphatidic acid (39) and sphingosine 1-phosphate (40). hTRPC1 has long been proposed as a candidate to mediate SOCE by a dynamic interplay with the ER Ca2+ sensor STIM1 and the PM channel Orai1 (20). STIM1 has been reported to act as a regulator of different store-operated Ca2+ currents mediated, among others, by Orai- and TRPC1-forming channels (43). Although Orai1 has been presented as the channel mediating the Ca2+-selective I_CRAC, whereas TRPC1 has been shown to participate in the conduction of other relatively Ca2+-selective I_SOC (2), a functional requirement for Orai1 in the generation of TRPC1-SOC channels has recently been proposed (44).
Here we show that impairment of the interaction between STIM1 and Orai1 results in inhibition of the interaction between STIM1 and hTRPC1, which indicates that Orai1 mediates the communication between STIM1 and hTRPC1. In addition, we have observed that impairment of the interaction between STIM1 and hTRPC1 results in loss of store-operated (capacitative) behavior of hTRPC1-forming channels and appearance of DAG-regulated non-capacitative behavior. To our knowledge, this is the first description of this change in behavior of hTRPC1, because hTRPC1 has never been shown to respond to OAG. Other TRPC channel proteins have been reported to function in two distinct ways depending on the expression level or even the mode of expression. In the B lymphocyte cell line DT40, an increase in the level of expression of TRPC3 results in the disappearance of store-operated behavior and the appearance of a receptor-activated non-capacitative behavior (41). In addition, TRPC7 has been shown to be activated by both receptor- and store-operated modes in HEK-293 cells depending on the mode of expression. When stably expressed, TRPC7 can be activated by either Ca2+ store depletion or PLC activation; however, when transiently expressed, TRPC7 forms channels activated downstream of PLC, but not by Ca2+ store depletion (42). Human platelets naturally express different hTRPC proteins, including hTRPC1, hTRPC3, hTRPC4, hTRPC5, and hTRPC6 (45,46). Thus, the enhanced OAG-mediated Ca2+ entry observed when the interaction between STIM1 and Orai1-hTRPC1 is impaired might not be due to homo-hTRPC1 tetramers, but to a heteromeric channel with hTRPC1 as one of the subunits. We believe that the change in behavior of the Ca2+-permeable channel involving hTRPC1 in human platelets is unlikely to be mediated by an increase in the level of expression or by a different mode of expression, because we are investigating endogenously expressed hTRPC1 and presumably the duration of the experiments (1 h preincubation with anti-Orai1 antibody) is not long enough to induce any change in protein expression in anucleated platelets. We have tested the functional relevance of Orai1, mediating the interaction between STIM1 and hTRPC1, on the mode of activation of the hTRPC1-forming channel by electrotransjection of cells with an anti-Orai1 antibody, directed toward the COOH-terminal region, which is involved in the interaction with STIM1 (27). We have further tested the role of STIM1 in the mode of activation of hTRPC1-forming channels by treatment with JP. JP induces the formation of a cortical actin barrier at the PM, so excluding cytoplasmic organelles from this region and thus preventing close association between the PM and internal organelles (47). We have found that JP causes elongation and reorganization of actin filaments exclusively near the PM and reduces SOCE in platelets and other cells (35,48,49). A model based on the interaction between STIM1 in the ER and the Orai1-hTRPC1 complex in the PM might be expected to be affected by stabilization of the cortical cytoskeleton barrier by JP before store depletion. Both electrotransjection with the anti-Orai1 antibody and treatment with JP prevented the interaction between STIM1 and hTRPC1, the former suggesting that this interaction is mediated by Orai1. In addition, both experimental maneuvers reduced SOCE (see Ref. 35 for JP) and enhanced OAG-mediated non-capacitative Ca2+ entry.
In platelets with impaired interaction between STIM1 and Orai1-hTRPC1, incubation with the anti-hTRPC1 antibody had no effect on the remaining SOCE but reduced OAG-evoked non-capacitative Ca2+ entry to a level that was found to be similar to OAG-mediated non-capacitative Ca2+ entry in cells where the STIM1-Orai1-hTRPC1 interaction was allowed, thus suggesting that under these conditions channels involving hTRPC1 support non-capacitative Ca2+ entry. Our findings are in agreement with previous studies by Birnbaumer's group (21,50) reporting that SOCE/I_CRAC channels are composed of heteromeric complexes including TRPCs and Orai proteins, with Orai conferring STIM1-mediated store depletion sensitivity to these channels. Furthermore, we present for the first time evidence for the interaction between endogenously expressed Orai1 and hTRPC1 and between Orai1 and STIM1 in platelets. The interaction between these proteins was detected in resting platelets and enhanced by store depletion or by the physiological agonist thrombin, reaching a maximum after 10 or 30 s of stimulation and then decreasing. The interaction between Orai1 and hTRPC1 or STIM1 was found to be independent of extracellular Ca2+. We have previously demonstrated that STIM1 co-immunoprecipitates with hTRPC1, which is likely involved in the communication of the Ca2+ content of the ER to hTRPC1-forming channels to mediate SOCE (15). These findings further support the involvement of a dynamic STIM1-Orai1-hTRPC1 ternary complex in the activation of SOCE, as previously reported in cultured cells (20). Our findings suggest that Orai1 mediates the communication between STIM1 and hTRPC1, which is essential for the mode of activation of channels including hTRPC1 as a subunit. We propose that under normal conditions STIM1 in the ER interacts with the Orai1-hTRPC1 complex in the PM and induces the activation of the hTRPC1-forming channel by store depletion. In contrast, when the communication between STIM1 and the Orai1-hTRPC1 complex is prevented, hTRPC1-forming channels support non-capacitative Ca2+ entry, perhaps by forming heteromeric channels with other hTRPC subunits activated by OAG (a schematic diagram of the proposed model is depicted in Fig. 7). These data support the conclusion that STIM1 regulates the hTRPC1 activation mode.

FIGURE 7. Speculative model for the regulation of hTRPC1 channel behavior by STIM1 in platelets. Top, in cells with a functional interaction between STIM1, Orai1, and hTRPC1, occupation of membrane receptors by an agonist results in the activation of PLC through a G-protein, leading to the synthesis of IP3 and DAG. The latter induces non-capacitative Ca2+ entry (NCCE), and IP3 activates IP3 receptors (IP3R) in the ER, induces Ca2+ release, and activates store-operated (capacitative) Ca2+ entry (CCE) through hTRPC1 and other plasma membrane channels. Bottom, impairment of the interaction between STIM1 and Orai1 by electrotransjection of an anti-Orai1 (C-terminal) antibody inhibits the interaction between STIM1 and hTRPC1, leading to a change in behavior of hTRPC1-forming channels from a capacitative (store-operated) Ca2+ channel to a DAG-activated non-capacitative Ca2+ channel.
2018-04-03T00:50:17.610Z
2008-09-12T00:00:00.000
{ "year": 2008, "sha1": "1c875043548e67f105a1bb563c4eeff77d27d5e6", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/283/37/25296.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "b9717c1f0e975ff2d2cd67cab182853293b58cf9", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
256370485
pes2o/s2orc
v3-fos-license
Vitamin D activates FBP1 to block the Warburg effect and modulate blast metabolism in acute myeloid leukemia

Acute myeloid leukemia (AML) has the lowest survival rate among the leukemias. Targeting intracellular metabolism and energy production in leukemic cells can be a promising therapeutic strategy for AML. Recently, we presented the successful use of vitamin D (1,25VD3) gene therapy to treat AML mouse models in vivo. In this study, recognizing the importance of 1,25VD3 as one of only 2 molecules (along with glucose) photosynthesized for energy during the beginning stage of life on this planet, we explored the functional role of 1,25VD3 in AML metabolism. A transcriptome database (RNA-seq) of four different AML cell lines revealed 17,757 genes responding to 1,25VD3 treatment. Moreover, we discovered that fructose-bisphosphatase 1 (FBP1) stands out as the only gene (out of 17,757 genes) with a 250-fold increase in gene expression; it is known to encode the key rate-limiting gluconeogenic enzyme fructose-1,6-bisphosphatase. The significantly increased expression of the FBP1 gene and protein induced by 1,25VD3 was confirmed by qPCR, western blot, flow cytometry, immunocytochemistry, and a functional lactate assay. Additionally, 1,25VD3 was found to regulate different AML metabolic processes, including gluconeogenesis, glycolysis, the TCA cycle, and de novo nucleotide synthesis. In summary, we provided the first evidence that 1,25VD3-induced FBP1 overexpression might be a novel therapeutic target to block the "Warburg Effect" and reduce energy production in AML blasts.

To the Editor,

Acute myeloid leukemia (AML) is the most common type of leukemia in adults [1]. Despite improvements in our understanding of AML and the development of different therapeutic approaches, approximately 50% of patients will relapse following induction chemotherapy, resulting in a dismal 5-year overall survival rate of 29% [1,2]. As such, there is an unmet need to understand the fundamental mechanisms of relapsed/refractory AML and develop effective therapies to improve the prognosis of AML. The remodeling of cellular metabolism is an essential process to meet the higher demands of energy in cancers [3]. Enhanced glycolysis, known as the "Warburg Effect," has been confirmed in leukemic blasts and is correlated with a worse prognosis for AML [4]. Also, increased production of lactate has been attributed to chemoresistance in AML patients who have up-regulated lactate dehydrogenase [5]. Therefore, identifying potential druggable targets in a complex network of metabolic processes and developing relevant treatment approaches to inhibit blast metabolism/energy production could be one promising therapeutic strategy for AML/its relapse [6]. Fructose-1,6-bisphosphatase (FBP1) is an essential enzyme for gluconeogenesis, the pathway that runs opposite to glycolysis by transforming substrates into glucose, and based on prior studies of different solid tumors, FBP1 can also function as a tumor suppressor by inhibiting glycolysis and cancer cell growth [7]. Vitamin D is known to be the oldest hormone on earth [8]. Some of the earliest life forms, such as phytoplankton, took advantage of sunlight to photosynthesize 2 metabolites for energy and survival: glucose and vitamin D [8]. Our recent study demonstrated that the combination of 1,25VD3 and 5-Azacytidine (an FDA-approved hypomethylating agent) enhanced cytotoxicity/differentiation and inhibited proliferation of AML blasts in vivo [9].
Up to 35% of AML patients have mutations in the FMS-like receptor tyrosine kinase 3 (FLT3) gene and defective protein products (AML-FLT3) that are associated with poorer survival through an increased risk of relapse [10]. Tyrosine kinase inhibitors (TKI) are a new type of targeted therapy in clinical trials for the treatment of AML-FLT3 patients [11]. Our preliminary in vitro studies revealed that the supplementation of 1,25VD3 to Midostaurin (MIDO), a 1st-generation TKI, could effectively suppress the proliferation of MV4-11 cells (Supple. Fig. 1A). Our qPCR data also confirmed that the combination of 1,25VD3 and Gilteritinib (GILT), a 2nd-generation TKI, could significantly reduce CYCLIN D1 expression (encoded by the CCND1 gene; 93% downregulation versus the untreated control, superior to either single agent; Supple. Fig. 1B). These data are consistent with previous findings showing that 1,25VD3 controls the G1-S phase cell-cycle machinery in human breast cancer cells by repressing the CCND1 gene [12]. A recent study suggests that numerous metabolic pathways except for gluconeogenesis can be therapeutically exploited to overcome TKI resistance [13], and inhibition of glutaminolysis can achieve a promising treatment effect on AML-FLT3 blasts [14]. Vitamin D supplementation has also been found to correct the metabolic disturbance caused by a fructose-rich diet [15]. In this study, to find new therapeutic targets and develop potential 1,25VD3-based treatments for AML, we explored the comprehensive details of how 1,25VD3 acts on the metabolism of FLT3-mutated blasts. First, we performed transcriptome analyses (RNA-seq) of 4 different AML-FLT3 cell lines, including MV4-11, MOLM-14, MV4-11-midostaurin-resistant cells, and MOLM-14-midostaurin-resistant cells, which were previously reported [9]. Among the many differential expression methods developed for RNA-seq data analyses, the FPKM number was found to be one of the best approaches in precision and accuracy for reporting RNA-seq results [16]. Our RNA-seq database revealed that there were 17,757 genes with FPKM numbers after 1,25VD3 treatment (distribution pie, Fig. 1A). FBP1 was found to be the only gene with a ~254-fold increase in gene expression: it ranked 8413th (4.37 FPKM) in the untreated group and rose to 94th (1110.13 FPKM) after 80 nM 1,25VD3 treatment among the 17,757 genes analyzed (Fig. 1B). Similar changes in FPKM and ranks could be observed in all 4 cell lines (Fig. 1B). The significant increases in FBP1 transcript and protein were confirmed by immunocytochemistry, qPCR, and western blot (Fig. 1C-F). Furthermore, the functional lactate assay showed a significant reduction of the lactate concentration in MV4-11 cells after 1,25VD3 treatment (Fig. 1G). The flow cytometry (FC) data showed that 95.9% of the FBP1+ cells expressed vitamin D receptor (VDR, Fig. 1D; isotype control in Supple. Fig. 2). In addition to MV4-11/MOLM-14, significant elevation of FBP1 and induction of blast differentiation could be observed upon 1,25VD3 treatment of HL60, a human acute promyelocytic leukemia (APL) cell line (Supple. Fig. 3). The detailed description of materials and methods is available in the supplementary documents. In addition to the central pathway of metabolizing glucose to pyruvate via glycolysis, AML metabolism involves diverse processes of nucleotides, amino acids, lipids, and their end metabolites to perform signaling functions and produce energy to support tumorigenesis [17].
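The FPKM fold-change and rank bookkeeping described above is straightforward to reproduce; a minimal sketch, in which only the FBP1 FPKM values (4.37 and 1110.13) come from the text — the other rows are placeholder genes, so with only three genes the ranks will obviously not be 8413 and 94:

```python
import pandas as pd

# Hypothetical FPKM table indexed by gene symbol, one column per condition;
# FBP1 values are from the text, the other rows are illustrative only.
fpkm = pd.DataFrame(
    {"untreated": [4.37, 250.0, 80.0], "vd3_80nM": [1110.13, 260.0, 75.0]},
    index=["FBP1", "GAPDH", "ACTB"],
)

fold_change = fpkm["vd3_80nM"] / fpkm["untreated"]      # ~254x for FBP1
rank_before = fpkm["untreated"].rank(ascending=False)   # rank among all genes
rank_after = fpkm["vd3_80nM"].rank(ascending=False)

print(f"FBP1 fold change: {fold_change['FBP1']:.0f}x; "
      f"rank {int(rank_before['FBP1'])} -> {int(rank_after['FBP1'])}")
```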
Here, we provided a table of RNA-seq data showing how 1,25VD3 modulated the genes essential for different metabolite processes in both MV4-11 and MV4-11-MIDO-R cells (Fig. 2A). Notably, 1,25VD3 was found to increase the expression of genes encoding certain enzymes related to gluconeogenesis, the TCA cycle, oxidative phosphorylation, and glycogenesis, and to reduce the expression of genes encoding certain enzymes related to glycolysis, glycogenolysis, and nucleotide synthesis (Fig. 2A). In summary, our report is the first to identify the pathway by which vitamin D modulates AML metabolism by activating FBP1 to block the "Warburg Effect", which might enhance its anti-leukemic effect in addition to the induction of differentiation and inhibition of cell cycle progression (Fig. 2B). However, prior clinical trials of vitamin D treatments for AML have yielded mixed results: this is probably due to the varying baseline VDR expression of leukemic blasts and loss of function in mutated VDR [18]. The significant 1,25VD3-induced up-regulation of FBP1 to suppress glycolysis, and its co-expression with VDR, provides an important clinical implication that FBP1 could be a novel therapeutic target for the treatment of AML/its relapse by bypassing the impaired or low baseline VDR expression.

[Fig. 1 legend fragment, displaced in extraction: ... (see Supple. Figure 2 for the FC plot of the FITC-isotype control); E, MV4-11 cells were treated with 80 nM 1,25VD3 for 48 h, then harvested and analyzed by RT-qPCR for expression of human FBP1 (fold change); F, treated MV4-11 cells were analyzed by WB for protein expression of human FBP1; G, treated MV4-11 cells were analyzed by lactate assay; cumulative data of the concentration of intracellular lactate. Where applicable, data are means ± SEM and were analyzed by Student's "t" test. *p < 0.05, ***p < 0.005, n = 5.]

Additional file 1.

Fig. 2 1,25 vitamin D activates FBP1 to modulate AML metabolism and block the "Warburg Effect" to enhance its anti-leukemic effect. A, table of RNA-seq results revealing that 1,25VD3 (80 nM) modulated different metabolic pathways in MV4-11 and MV4-11-MIDO-R cells by increasing the expression of genes encoding certain enzymes related to gluconeogenesis, the TCA cycle, oxidative phosphorylation, and glycogenesis, and reducing the expression of genes encoding certain enzymes related to glycolysis, glycogenolysis, and nucleotide synthesis. B, a summarized diagram. In addition to 1,25VD3's known roles in inducing differentiation and inhibiting proliferation, we proposed a new functional role of vitamin D in the treatment of AML blasts. 1,25VD3 induces an ~5000-fold increase of FBP1 (qPCR data) in AML blasts, which encodes large amounts of fructose-1,6-bisphosphatase (extremely large bands in WB) to disrupt the progression of glycolysis and reduce lactate production (Warburg Effect), a main energy resource for AML metabolism.
2023-01-30T15:21:44.102Z
2022-04-02T00:00:00.000
{ "year": 2022, "sha1": "64c12355a6b8cb3d6af8504ae605a9839b9cf4e0", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s40364-022-00367-3", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "64c12355a6b8cb3d6af8504ae605a9839b9cf4e0", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
19979776
pes2o/s2orc
v3-fos-license
Exchange biasing of single-domain Ni nanoparticles spontaneously grown in an antiferromagnetic MnO matrix

Exchange biased composites of ferromagnetic single-domain Ni nanoparticles embedded within large grains of MnO have been prepared by reduction of Ni$_x$Mn$_{3-x}$O$_4$ phases in flowing hydrogen. The Ni precipitates are 15-30 nm in extent, and the majority are completely encased within the MnO matrix. The manner in which the Ni nanoparticles are spontaneously formed imparts a high ferromagnetic-antiferromagnetic interface/volume ratio, which results in substantial exchange bias effects. Exchange bias fields of up to 100 Oe are observed in cases where the starting Ni content $x$ in the precursor Ni$_x$Mn$_{3-x}$O$_4$ phase is small. For particles of approximately the same size, the exchange bias leads to significant hardening of the magnetization, with the coercive field scaling nearly linearly with the exchange bias field.

I. INTRODUCTION

Exchange anisotropy, or exchange bias, is an interfacial phenomenon between ferromagnetic and antiferromagnetic domains which results in the shifting and broadening of magnetic hysteresis loops. Exchange bias is believed to result from the interaction of ferromagnetic (FM) spins with uncompensated antiferromagnetic (AFM) spins at the FM/AFM interface [1-3]. Since its discovery in partially oxidized Co/CoO nanoparticles by Meiklejohn and Bean [4], exchange bias has been observed and engineered in core-shell nanoparticles [5], thin films [6], and granular composites [7]. These architectures are utilized because a high proportion of FM spins must be interfacial in order for the AFM switching behavior to appreciably affect the FM coercivity. While they achieve a high interface/volume ratio, core-shell nanoparticles and thin film architectures do not result in large quantities of exchange-biased material. As an alternative, novel methods of processing exchange biased systems have been explored, including coevaporation [8], mechanical milling [9], and spontaneous phase separation [10]. Initial reports from Sort et al. [11] have demonstrated hydrogen reduction of Fe0.2Cr1.8O3 to produce metal/oxide composites. Different transition metals reduce sequentially, resulting in nanosized Fe particles within micron-sized Cr2O3 grains. Interaction between the ~10 nm Fe precipitates and the bulk Cr2O3 provides exchange bias shifts of 10 Oe. Reduction kinetics of the system CoCr2O4-Co3O4 have been reported by Bracconi and Dufour [12], and Kumar and Mandal [13] have produced Co/Cr2O3 composites directly from nitrate precursors. Recently, Toberer et al. [14] have demonstrated that remarkable microstructures with aligned porosity can be observed when the reduction product shares a common oxygen sublattice with the precursor. Here we report on hydrogen reduction of the system Ni_xMn_{3-x}O_4 to form Ni/MnO composites with striking microstructures associated with substantial exchange biasing. The Ni particles exhibit bulk saturation magnetization values, and exchange bias is observed below the Néel temperature of MnO at T_N = 119 K. Surface and interior particle size analysis reveals that this system produces Ni nanoparticles on the order of 15 nm to 30 nm. Size-dependent exchange bias phenomena are manifested in trends between the Ni content of the precursor spinel and the exchange and coercive fields of the reduced composite.

II. EXPERIMENTAL

Single-phase ceramic monoliths were prepared by solid-state reactions of oxalates, similar to that of Wickham [15].
Oxalates are versatile precursors for mixed metal oxides, and have found extensive use in recent years to produce substituted binary oxides [16]. Reductions were performed in alumina boats in a tube furnace under 5% H2/N2 with a flow rate of approximately 30 sccm. Once the gas mixture had equilibrated, the specimens, as pellets, were heated at 2 °C/min to 650 °C, 700 °C, or 725 °C, held for 2 h, then cooled at 10 °C/min to room temperature. Reduced samples were verified to be Ni/MnO by x-ray diffraction (XRD, Philips X'Pert with CuKα radiation) and Rietveld refinement using the xnd code [20]. Composites were characterized by thermogravimetric analysis (TGA, Cahn TG-2141), scanning electron microscopy (SEM, FEI Sirion XL40), focused ion beam milling and microscopy (FIB, FEI DB235), and SQUID magnetometry (Quantum Design MPMS 5XL).

III. RESULTS AND DISCUSSION

The calcining of the single-phase Ni/Mn oxalates, according to the phase diagram presented by Wickham [15], results in single-phase spinel-related compounds that are not all cubic. Wickham [15] has reported that in their high-temperature state, bulk samples of Ni_xMn_{3-x}O_4 with x between 0.15 and 1.00 are cubic spinels before decomposing into NiMnO3 and α-Mn2O3 in the temperature range of 705 ° to 1000 °C. Upon water quenching, samples prepared with x < 1 and fired at ≥ 1000 °C are observed to distort from the high-temperature cubic spinel reported by Wickham into single-phase hausmannite-type tetragonal spinels in space group I4_1/amd. Slow-cooling, air-quenching, or quenching in flowing nitrogen are insufficient to prevent decomposition of the solid solution. Rietveld refinement of the room-temperature XRD pattern for the water-quenched compound Ni0.30Mn2.70O4 is shown in Fig. 1(a). Only peaks for the hausmannite-type solid solution are evident; this is a requirement for the final reduced composite to be homogeneous in terms of the distribution of Ni precipitates. The refinement assumes a "normal" spinel, where Ni2+ and Mn2+ occupy the 4b tetrahedral sites. Mn3+ in the 8c octahedral sites causes a cooperative Jahn-Teller distortion which leads to a loss of cubic symmetry [21].

[Fig. 2 caption: TGA of the Ni0.3Mn2.7O4 sample shows that reduction proceeds by an initial reaction to a rocksalt Ni0.1Mn0.9O solid solution, followed by a reduction of Ni2+ into metallic Ni.]

An accurate determination of the cation distribution may be obtained by neutron diffraction and has been investigated by Larson et al. [22]. When sintered at 1325 °C, samples with x near 1 partially decompose into mixtures of NiO and Ni_{1-δ}Mn_{2+δ}O_4 as described by Wickham [15], but subsequent annealing at 800 °C for 72 h ensures the formation of a single-phase tetragonal spinel. Dense pellets and micron-sized powder are both suitable precursors for hydrogen reduction because the dimensions of the precipitates and pores are orders of magnitude smaller than the grain size in either case. Adequately high oxygen mobility at the reduction temperature allows the reaction to permeate the sample regardless of any lack of preexisting porosity. In all cases, TGA analysis confirms the total amount of nickel precipitated (and thus the stoichiometry of the precursor spinel) during hydrogen reduction. A TGA weight loss curve for Ni0.3Mn2.7O4 is shown in Fig. 2. The weight loss curve reveals that the single-phase spinel first reduces to a rocksalt (Ni0.1,Mn0.9)O solid solution, followed by precipitation of metallic Ni.
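The two-step mass loss can be checked with back-of-the-envelope stoichiometry: Ni_xMn_{3-x}O_4 → 3(Ni,Mn)O loses one oxygen per formula unit, and the subsequent reduction of Ni2+ to metal loses a further x. The sketch below is our own arithmetic under that ideal-stoichiometry assumption, with standard atomic masses; it is not a calculation from the paper.

```python
# Rough standard atomic masses (g/mol).
M = {"Ni": 58.693, "Mn": 54.938, "O": 15.999}

def tga_steps(x):
    """Expected fractional weight losses for Ni_xMn_{3-x}O4 reduction:
    step 1: spinel -> rocksalt (Ni,Mn)O, losing 1 O per formula unit;
    step 2: Ni2+ -> Ni metal, losing a further x O per formula unit."""
    m0 = x * M["Ni"] + (3 - x) * M["Mn"] + 4 * M["O"]  # spinel formula mass
    step1 = M["O"] / m0
    step2 = x * M["O"] / m0
    return step1, step2, step1 + step2

s1, s2, total = tga_steps(0.3)
print(f"x = 0.3: step 1 {s1:.1%}, step 2 {s2:.1%}, total {total:.1%}")
# -> roughly 7% then a further ~2%, ~9% overall: two plateaus in the TGA curve.
```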
This progression is verified by the fact that incompletely reduced samples display an MnO lattice parameter that is smaller than the theoretical value, due to Ni substitution. X-ray diffraction Rietveld refinements of the final composite product obtained after reduction in 5% H2/N2 indicate only rocksalt MnO and face-centered cubic Ni [Fig. 1(b)]. High-spin Mn2+ in octahedral coordination has an ionic radius of 0.83 Å, in contrast to octahedral Ni2+ which has a radius of only 0.69 Å [23]. Consequently, when Ni2+ enters the product MnO lattice, there is significant shrinkage of the cell parameter, which can be used to estimate the degree of conversion of the starting phases into pure Ni/MnO. The MnO lattice parameter obtained from Rietveld refinement is plotted in Fig. 3(a) as a function of the Ni content x of the precursor. The cell parameter of pure MnO, 4.444 Å, is also indicated as a horizontal dashed line. It is seen that for small substitution of Ni (x in the starting phases) the reduction temperature must be increased from 650 °C to 725 °C to ensure complete reduction and avoid the rocksalt (Ni,Mn)O solid solution. Depression of the required reduction temperature of Ni_xMn_{3-x}O_4 as x deviates from Mn3O4 is a consequence of the higher ionization energy of Ni2+. In other words, more energy is released by reduction of Ni2+ ions than of Mn2+, so the reduction to metal occurs more readily when x is larger. The greater ease of reduction of Ni over Mn is suggested by the appropriate Ellingham diagram [24]. The saturation magnetization M_S of the magnetic Ni nanoparticle precipitates can be used in tandem with the values of a_MnO obtained from Rietveld refinement to determine the completeness of Ni reduction. This is shown in Fig. 3(b), where agreement is seen between the convergence of a_MnO and M_S to their respective theoretical values of 4.444 Å and 0.6 µB/Ni for a completely reduced xNi/MnO composite, regardless of x. Hydrogen reduction of single-phase oxide monoliths can lead to striking hierarchically porous microstructures, which have been characterized by Toberer et al. [14,19,25]. At first glance, low-magnification SEM micrographs of Ni_xMn_{3-x}O_4 precursor spinels and Ni/MnO reduced samples [Fig. 4(a) and Fig. 4(b), respectively] appear nearly identical. However, higher magnification [Fig. 4(c) and (d)] reveals that reduced composites contain aligned pores in rocksalt MnO covered with Ni metal nanoprecipitates. It has been previously suggested [14,25] that the shared oxide sublattice of spinel and rocksalt allows the transformation from one to the other to take place without reconstruction. Porosity is introduced during the spinel to rocksalt transformation while leaving the oxygen framework largely intact. The associated volume loss gives rise to a pore structure that can be regarded as negative crystals - voids in crystals that possess the same facets as the crystals themselves do. Although the pores are as small as 20 nm, the pore and surface edges are aligned at right angles over the entire breadth of the 20 µm grains. This long-range alignment implies that the MnO grains are in fact single crystals with the same orientation and extent as the pre-reduction spinel grains [14,19]. Increasing the reduction temperature should lead to densification and closing of the pores in the MnO monolith.
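The use of a_MnO as a conversion gauge can be made quantitative with a simple Vegard-law interpolation between the rocksalt end members. A sketch, assuming ideal linear Vegard behavior; a(MnO) = 4.444 Å is from the text, while a(NiO) ≈ 4.18 Å is an assumed literature value, not taken from this paper:

```python
A_MNO = 4.444   # Angstrom, pure MnO (from the text)
A_NIO = 4.18    # Angstrom, pure NiO -- assumed literature value

def residual_ni_fraction(a_obs, a_mno=A_MNO, a_nio=A_NIO):
    """Estimate the fraction y of Ni remaining in a (Ni_y,Mn_{1-y})O rocksalt
    solid solution from its refined lattice parameter, assuming Vegard's law:
    a(y) = (1 - y) * a_mno + y * a_nio."""
    return (a_mno - a_obs) / (a_mno - a_nio)

# A refined a_MnO of, say, 4.43 Angstrom would imply ~5% of the rocksalt
# cation sites still hold Ni, i.e. reduction is incomplete at that temperature.
print(f"y ~ {residual_ni_fraction(4.43):.2f}")
```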
However, in the interest of maintaining small Ni nanoparticles (and thus a high interface/volume ratio), and because the majority of nanoparticles are completely encased in MnO even in porous samples, reduction was performed at the lowest temperature that allowed complete Ni precipitation. If we assume that for the different values of x the number of nuclei is the same, and that increasing x only affects the growth (i.e. the diameter) of the particles, then we would expect only a weak dependence (changing as x^(1/3)) of the average particle diameter on x. If we assume that increasing x also increases the number of Ni nuclei upon reduction, then the average particle diameter would show an even weaker dependence on x.

[Fig. 5 caption: Error bars indicate one standard deviation in the particle diameter. Typically at least 30 distinct particles were counted in preparing the distributions. It is seen that in most samples the sizes are somewhat independent of x and are clustered around 30 nm.]

We have analyzed the Ni particles in the SEM images of the surfaces of the monoliths by using the program ImageJ [26] to prepare histograms of particle size distributions. These are plotted in Fig. 5 for the different monoliths. It is seen that mean particle diameters range from ~15 nm to 35 nm, but there is no clear trend in size, at least until a nickel content of x = 0.60 is reached. Indeed, in the different monoliths, a clearer correlation is found for Ni particle size with the specific crystallographic face of MnO upon which it grows, rather than the starting x value. It is evident in Fig. 6(a) that for an x = 0.45 specimen, regions can be found which exhibit a wide variety of surface particle sizes and spacings depending on the nucleation environment. The coherent pore structure introduced by reduction produces square or triangular facets, seen in Fig. 6(a), which correspond to exposed {100} or {111} faces. Cross-sections of reduced grains produced by FIB milling, shown in Fig. 6, reveal that the bulk MnO contains Ni nanoparticles of similar dimensions as those on the surface. Porosity is still prevalent in the bulk of the monoliths, as it is in the images of the monolith surface. This is necessary to accommodate the volume loss of the structure while retaining the size and alignment of the MnO grains. By a comparison of lattice parameters, and assuming no sintering during reduction, the fraction of intragranular porosity produced by the conversion of Ni_xMn_{3-x}O_4 to xNi/MnO increases linearly from 16% when x = 0 to 39% when x = 0.6, which is in rough agreement with the porosity evident in the micrographs.

[Fig. 7 caption: After field cooling at H = 50 kOe, the coercive field is broadened and shifted by H_E = 100 Oe in opposition to the cooling field direction.]

The exchange behavior can be influenced by many factors, including Ni particle size, the amount and orientation of the FM-AFM interface, temperature, and the cooling field [3]. We anticipate that in the size regime studied here (near 20 nm) the Ni nanoparticles are single-domain magnets and that the coercivity below the blocking temperature should not show a strong size dependence [27]. Fig. 8(a) shows that as the nickel content x increases, H_C decreases for samples reduced at either 700 °C or 725 °C. At both reduction temperatures, the highest H_C is found for the smallest x, and the smallest H_C is found for the largest x. Additionally, the decrease in H_C for x = 0.3 samples reduced at 725 °C as opposed to 700 °C implies increased coalescence of Ni particles as the temperature increases.
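To put a number on how weak the fixed-nucleation-density expectation is: if a fixed number of nuclei share the precipitated Ni volume, each particle's volume scales as x and its diameter as x^(1/3). The anchor values in the sketch below (30 nm at x = 0.15) are illustrative, not fitted:

```python
def expected_diameter(x, d_ref=30.0, x_ref=0.15):
    """Diameter expected if a fixed number of Ni nuclei share the total
    precipitated Ni volume (volume ~ x, so d ~ x**(1/3)).
    d_ref at x_ref is an illustrative anchor, not a fitted value."""
    return d_ref * (x / x_ref) ** (1.0 / 3.0)

for x in (0.15, 0.30, 0.45, 0.60):
    print(f"x = {x:.2f}: d ~ {expected_diameter(x):.0f} nm")
# x = 0.60 gives ~48 nm, only ~1.6x the x = 0.15 value -- a weak dependence,
# consistent with the nearly x-independent histograms of Fig. 5.
```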
We therefore anticipate that the increased coercivity as size is decreased arises from the same interfacial coupling that results in the increased exchange bias. In exchange biased nanostructures of spherical FM particles in an AFM matrix, the strength of the exchange field H_E has been suggested to vary as

H_E ∝ E_A / (M_S d_FM),

where E_A is the interfacial coupling energy per unit area, M_S is the saturation magnetization of the FM, and d_FM is the diameter of the FM particle [7]. Assuming this model to be correct, we anticipate that the exchange field H_E should decrease with increasing ferromagnetic particle size. If, with increasing x in our systems, Ni particle size indeed increases, then our results are broadly consistent with this model. In Fig. 9 we plot the 5 K coercivity as a function of the exchange field for the different systems measured, data for which are displayed in Fig. 8. We see that the coercivity varies nearly linearly with the exchange field, with the exception of one outlier. Gökemeijer et al. [28] have recently measured biasing of ferromagnets on different CoO surfaces and have concluded that on the uncompensated CoO surfaces exchange biasing, and the associated shift of hysteresis, is found, but on compensated CoO surfaces the effect of the interface is simply to increase coercivity. The magnetic structure of MnO is not simple [29], and the architectures described here, of nearly spherical ferromagnetic particles embedded in an antiferromagnetic host, cannot be described in terms of simple interfaces. Given this, we suggest that perhaps both effects, of the uncompensated as well as the compensated surfaces, are playing a role, and the linear relation between coercivity and exchange is simply an indication of increasing interfacial area between the two magnetic components.

IV. CONCLUSIONS

We have demonstrated that hydrogen reduction of Ni_xMn_{3-x}O_4 spinels produces Ni/MnO composites with significant interfacial area between antiferromagnetic MnO and ferromagnetic Ni, and associated exchange bias. With increasing nickel content x, these effects decrease, presumably because of a decrease in the relative proportion of interfacial spins in the ferromagnet. Exchange bias effects at the FM-AFM interface lead to an increase in H_C with decreasing Ni content, along with a 1/x dependence of H_E. A nearly linear relationship is found between H_C and H_E in these systems.

V. ACKNOWLEDGMENTS

This work was supported by the donors of the American Chemical Society Petroleum Research Fund, and the National Science Foundation through a Career Award (DMR 0449354) to RS, and for the use of MRSEC facilities (DMR 0520415). MG was supported by a RISE undergraduate fellowship.
2007-10-15T23:59:12.000Z
2007-10-15T00:00:00.000
{ "year": 2007, "sha1": "4beed571465e356529498f46d6e321cdf3c12de1", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0710.2931", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "4beed571465e356529498f46d6e321cdf3c12de1", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
260355618
pes2o/s2orc
v3-fos-license
Neuronal NPR-15 modulates molecular and behavioral immune responses via the amphid sensory neuron-intestinal axis in C. elegans

The survival of hosts during infections relies on their ability to mount effective molecular and behavioral immune responses. Despite extensive research on these defense strategies in various species, including the model organism Caenorhabditis elegans, the neural mechanisms underlying their interaction remain poorly understood. Previous studies have highlighted the role of neural G-protein-coupled receptors (GPCRs) in regulating both immunity and pathogen avoidance, which is particularly dependent on aerotaxis. To address this knowledge gap, we conducted a screen of mutants in neuropeptide receptor family genes. We found that loss-of-function mutations in npr-15 activated immunity while suppressing pathogen avoidance behavior. Through further analysis, NPR-15 was found to regulate immunity by modulating the activity of key transcription factors, namely GATA/ELT-2 and TFEB/HLH-30. Surprisingly, the lack of pathogen avoidance of npr-15 mutant animals was not influenced by oxygen levels. Moreover, our studies revealed that the amphid sensory neuron ASJ is involved in mediating the immune and behavioral responses orchestrated by NPR-15. Additionally, NPR-15 was found to regulate avoidance behavior via the TRPM (transient receptor potential melastatin) gene, GON-2, which may sense the intestinal distension caused by bacterial colonization to elicit pathogen avoidance. Our study contributes to a broader understanding of host defense strategies and the mechanisms underlying the interaction between molecular and behavioral immune responses.

Introduction

Hosts employ multiple defense mechanisms to combat infections, including molecular immune defenses (Netea et al., 2019; Gourbal et al., 2018; Blander and Sander, 2012) and behavioral defense responses to invading pathogens (Meisel and Kim, 2014; Sarabian et al., 2018; Hart and Hart, 2018). Overall, these strategies are conserved across species (Sarabian et al., 2018; Hart and Hart, 2018; Kimbrell and Beutler, 2001; Flajnik and Du Pasquier, 2004), but their relationship and mechanistic interplay are not yet fully elucidated. While the immunological defense response is effective, it is metabolically costly and may lead to inflammatory damage (Levine et al., 2011; Netea et al., 2020; Xiao, 2017; Geremia et al., 2014). On the other hand, the avoidance behavioral response serves as a crucial first line of defense, enabling hosts to prevent or minimize contact with pathogens (Behringer et al., 2006; Curtis, 2014; Meisel and Kim, 2014). Although both immune and behavioral responses to pathogen infection are well documented in Caenorhabditis elegans (Styer et al., 2008; Chang et al., 2011; Reddy et al., 2011; Singh and Aballay, 2019a; Sun et al., 2011), the relationship between these survival strategies remains poorly understood.

C. elegans is a valuable model organism for studying the genetic mechanisms that control host immune and behavioral responses to pathogens (Balla and Troemel, 2013; Schulenburg and Félix, 2017). Although C. elegans lacks adaptive immunity, part of its innate immune response comprises evolutionarily conserved pathways and immune effectors (Schulenburg and Félix, 2017).
C. elegans has been widely used in research studies to investigate these pathways due to its well-defined nervous system and genetic tractability, making it an ideal model organism to explore pathways that are critical to immunity and avoidance behavior (Sym et al., 2000; Nagiel et al., 2008; Powell, 2008). Notably, intestinal changes triggered by bacterial pathogen colonization activate the DAF-7/TGF-β pathway and the G-protein-coupled receptor (GPCR) NPR-1 pathway, which also regulates aerotaxis behavior (Meisel and Kim, 2014; Styer et al., 2008; Singh and Aballay, 2019a; Singh and Aballay, 2019b).

To uncover mechanisms that control immune and behavioral responses to invading pathogens independently of aerotaxis, we focused on studying mutants in npr genes that have not been previously linked to either host strategy against pathogen infection. Our investigation revealed that loss-of-function mutations in the gene encoding NPR-15 enhanced pathogen resistance when infected by Gram-negative and Gram-positive bacterial pathogens. Intriguingly, npr-15 mutants exhibited a lack of pathogen avoidance behavior that was found to be independent of oxygen sensation. These findings point toward the involvement of a novel mechanism in the regulation of immune response and avoidance behavior.

Further analysis unveiled that the resistance to pathogen infection in npr-15 mutants is mediated by the transcription factors GATA/ELT-2 and TFEB/HLH-30. These evolutionarily conserved transcription factors play vital roles in regulating immunity in C. elegans (Head et al., 2017; Kerry et al., 2006; Olaitan and Aballay, 2018; Shapira et al., 2006; Visvikis et al., 2014). Additionally, we discovered that NPR-15 controls avoidance behavior through the intestinally expressed transient receptor potential melastatin (TRPM) ion channel, GON-2, which has recently been demonstrated to modulate avoidance behavior in Gram-positive bacteria (Filipowicz et al., 2021). Moreover, our results indicate that the amphid sensory neuron, ASJ, plays a crucial role in the interplay between immune response and avoidance behavior. These findings provide insights into the neural mechanisms that control immunity against bacterial infections and pathogen avoidance behavior.

NPR-15 loss-of-function enhances pathogen resistance and inhibits avoidance behavior independently of aerotaxis

Out of 34 mutants in npr genes that were not previously linked to the control of immunity (Supplementary file 1A), only animals lacking NPR-15 (npr-15(tm12539) and npr-15(ok1626) null animals) exhibited enhanced survival against Pseudomonas aeruginosa-mediated killing compared to wild-type (WT) animals (Figure 1A, Figure 1-figure supplement 1A, and Supplementary file 1B). Furthermore, we found that npr-15(tm12539) animals exhibited less visible bacterial colonization and significantly reduced colony-forming units compared to WT animals (Figure 1B and C). The enhanced resistance to pathogens of npr-15(ok1626) animals appears to be universal, as the mutants were also found to be resistant to additional human pathogens, including Gram-negative Salmonella enterica strain 1344 and Gram-positive Enterococcus faecalis strain OG1RF and Staphylococcus aureus strain NCTCB325 (Figure 1D-F), suggesting that NPR-15 suppresses defense against bacterial pathogens in general. When exposed to live Escherichia coli, the primary food source of C. elegans in the laboratory, npr-15(tm12539) animals exhibited increased lifespan compared to WT animals (Figure 1G).
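Killing assays like these are conventionally summarized with Kaplan-Meier survival curves and a log-rank comparison. Below is a minimal sketch using the Python lifelines package; the package choice and the simulated death times are our own placeholders, not the authors' analysis pipeline.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Hypothetical times-to-death (hours on P. aeruginosa) for two genotypes;
# event_observed = 1 means the worm died (0 would mean censored).
wt = rng.normal(60, 10, 60).clip(1)
mutant = rng.normal(80, 12, 60).clip(1)   # npr-15-like, longer survival

kmf = KaplanMeierFitter()
kmf.fit(wt, label="WT")                    # kmf.survival_function_ holds the curve

res = logrank_test(wt, mutant,
                   event_observed_A=np.ones_like(wt),
                   event_observed_B=np.ones_like(mutant))
print(f"log-rank p = {res.p_value:.2e}")
```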
However, there were no significant differences in longevity between npr-15(tm12539) and WT animals when they were exposed to lawns of E. coli that were rendered non-proliferating by ultraviolet light (UV) treatment (Garigan et al., 2002; Figure 1H). Expression of npr-15 under the control of its own promoter rescued the enhanced survival of npr-15(tm12539) animals exposed to different bacterial pathogens (Figure 1A-G), indicating that the functional loss of NPR-15 enhanced the animals' survival against live bacteria.

Considering that C. elegans exhibits avoidance behavior when encountering certain pathogenic bacteria (Meisel and Kim, 2014; Chang et al., 2011; Reddy et al., 2011; Singh and Aballay, 2019a), we examined the lawn occupancy of npr-15(tm12539) and WT animals on a partial lawn of S. aureus cultured in the center of an agar plate (Figure 1I). Unexpectedly, we found that npr-15(tm12539) animals exhibited significantly reduced pathogen avoidance when exposed to S. aureus (Figure 1J). We also compared the re-occupancy of the lawn exhibited by WT and npr-15(tm12539) animals and found no differences in their re-occupancy (Figure 1-figure supplement 1B). Interestingly, we noticed that the variation in lawn occupancy is greater in WT than in npr-15(tm12539) animals across experiments (Supplementary file 2), which suggests that the strong lack of avoidance of npr-15(tm12539) somehow counteracts the experimental variation. We also found that npr-15(tm12539) animals exhibited reduced learned avoidance compared to WT animals (Figure 1-figure supplement 1C). To investigate whether aerotaxis played a role in the lack of avoidance of S. aureus exhibited by npr-15(tm12539) animals, we studied lawn occupancy in the presence of 8% oxygen. As shown in Figure 1-figure supplement 1D, exposure of npr-15(tm12539) animals to 8% oxygen did not rescue the lack of avoidance of S. aureus, although it did rescue the lack of avoidance of npr-1 mutants. Moreover, the survival of npr-15(tm12539) animals in full-lawn assays, where agar plates were completely covered by pathogenic bacteria to eliminate the possibility of pathogen avoidance, was significantly higher than that of WT animals (Figure 1-figure supplement 1E and F). These findings suggest that NPR-15 suppresses pathogen resistance and enhances avoidance behavior in response to pathogen infection, independently of oxygen concentrations.
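The partial-lawn readout reduces to simple counting; a sketch of the bookkeeping, where the scoring counts are invented placeholders rather than the study's data:

```python
def lawn_occupancy(on_lawn, total):
    """Fraction of animals on the pathogen lawn; avoidance is its complement."""
    return on_lawn / total

# Hypothetical counts at one scoring time on a partial S. aureus lawn.
wt = lawn_occupancy(on_lawn=8, total=30)       # strong avoidance -> low occupancy
npr15 = lawn_occupancy(on_lawn=27, total=30)   # avoidance-defective -> high occupancy
print(f"WT occupancy {wt:.0%}, npr-15 occupancy {npr15:.0%}")
```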
elegans pharyngeal pumping directly affects bacterial intake (Styer et al., 2008; Singh and Aballay, 2019a; Cao et al., 2017; Sellegounder et al., 2019), we asked whether the resistance to infection and reduced pathogen avoidance behavior of npr-15(tm12539) animals could be attributed to a decrease in pathogen intake. We found that npr-15(tm12539) animals exhibited pumping rates comparable to those of WT animals (Figure 1-figure supplement 1G), indicating that the dose of pathogens is similar in both cases. Moreover, it has recently been demonstrated that bacterial accumulation in animals defective in the defecation motor program causes intestinal distension that elicits a robust immune response (Singh and Aballay, 2019a) and modulates pathogen avoidance (Singh and Aballay, 2019a; Singh and Aballay, 2019b; Filipowicz et al., 2021; Kumar et al., 2019; Hong et al., 2021). However, we found that the defecation cycle of npr-15(tm12539) animals is indistinguishable from that of WT animals (Figure 1-figure supplement 1H). Taken together, our results show that NPR-15 loss-of-function enhances pathogen resistance and inhibits avoidance behavior, suggesting that NPR-15 suppresses molecular immunity while activating behavioral immunity.

The loss of NPR-15 leads to upregulation of immune and neuropeptide genes

To understand the immune mechanisms controlled by NPR-15 in defense against pathogen exposure, we conducted transcriptomic analyses to identify dysregulated genes in npr-15(tm12539) compared to WT animals (Figure 2A and Supplementary file 3). To identify gene groups controlled by NPR-15, we performed an unbiased gene enrichment analysis using a WormBase enrichment analysis tool (https://wormbase.org/tools/enrichment/tea/tea.cgi) that is specific for C. elegans gene data analyses (Angeles-Albores et al., 2016). The study revealed 10 ontology clusters with high enrichment scores for vital biological functions among the upregulated and downregulated genes in npr-15(tm12539) animals (Figure 2B, Figure 2-figure supplement 1). Overall, the gene expression data showed significant upregulation of immune/defense response and neuropeptide signaling pathway genes (Figure 2B). Genes associated with synaptic signaling, ligand-gated channel activity, lipid metabolism, and response to biotic stimuli were also upregulated in npr-15(tm12539) animals.

NPR-15 controls ELT-2- and HLH-30-dependent genes via the sensory neuron ASJ

To investigate whether the enhanced resistance to pathogen infection of npr-15(tm12539) animals is due to the upregulation of immune genes, we examined the effect of RNA interference (RNAi)-mediated suppression of the immune pathways shown in Figure 2C. We inactivated elt-2, pmk-1, hlh-30, and daf-16 in WT and npr-15(tm12539) animals and exposed them to S. aureus. We found that elt-2 RNAi completely suppressed the enhanced resistance to S. aureus infection of npr-15(tm12539) animals (Figure 3A). Partial suppression of pathogen resistance in npr-15(tm12539) animals was observed with hlh-30 RNAi (Figure 3B). Furthermore, when both hlh-30 and elt-2 were inactivated in npr-15(tm12539) animals that were then exposed to S. aureus, their susceptibility to infection was comparable to that of elt-2 RNAi animals (Figure 3C). However, pmk-1 and daf-16 RNAi failed to suppress the pathogen resistance of npr-15(tm12539) animals (Figure 3D and E). To confirm these findings, we crossed mutants in the aforementioned immune regulators to npr-15(tm12539) animals and exposed them to S.
aureus (Figure 3-figure supplement 1A-C). We also studied the aforementioned immune pathways in the response of npr-15(tm12539) animals to P. aeruginosa (Figure 3-figure supplement 1D-F). We found that hlh-30 or pmk-1 mutation partly suppressed the resistance to P. aeruginosa infection of npr-15(tm12539) animals (Figure 3-figure supplement 1D and E). These results indicate that NPR-15 suppresses ELT-2- and HLH-30-dependent molecular immunity.

Because NPR-15 is expressed in six sensory neuronal cells (ASG, ASI, ASJ, ASE, AFD, and AWC) as well as an interneuron, AVK (Figure 4A; Harris et al., 2020), we studied whether the NPR-15-expressing cells could control the defense response against pathogen infection. First, we examined the neuronal connectome and communication between the NPR-15-expressing cells and found well-established synaptic connections between all the sensory neurons (Figure 4A). Next, we asked if NPR-15 could be acting in a neuron-intrinsic manner to control the immune response. To address this question, we specifically inactivated npr-15 in neurons and the intestine using tissue-specific RNAi strains and assessed the survival of the animals after S. aureus infection. Unlike intestine-specific RNAi (strain MGH171) (Figure 4-figure supplement 1A), neuron-specific RNAi (strain TU3401) of npr-15 resulted in a pathogen resistance phenotype similar to that of npr-15(tm12539) animals (Figure 4-figure supplement 1B). This finding was further supported by rescuing NPR-15 under the control of a pan-neuronal promoter and exposing the animals to S. aureus. The results demonstrated that pan-neuronal promoter-driven expression of NPR-15 rescued the enhanced survival phenotype of npr-15(tm12539) animals (Figure 4B). These results suggest that NPR-15 suppresses immunity through the nervous system. We next studied which of the specific NPR-15-expressing neuronal cells (ASG, ASI, ASJ, ASE, AFD, AWC) could control the defense response against pathogen infection. To identify the neuronal cells responsible for NPR-15-mediated immune control, we crossed strains lacking these neurons with npr-15(tm12539) animals and studied the effect on defense against pathogen infection (Figure 4C-E and Figure 4-figure supplement 1C-E). We found that ASJ(-) animals exhibited resistance to pathogen-mediated killing similar to that of npr-15(tm12539) animals (Figure 4C). Although the ASG(-) and ASE(-) neuron-ablated strains demonstrated a pathogen resistance phenotype, it differed significantly from that of npr-15(tm12539) animals (Figure 4D-E). Hence, we further confirmed the role of NPR-15/ASJ neurons in suppressing immunity by rescuing NPR-15 under the control of an ASJ-specific promoter and performing survival experiments with the rescued strain. The results showed that the ASJ-specific rescue of NPR-15 successfully blocked the enhanced survival of npr-15(tm12539) animals (Figure 4F). We also quantified the expression of immune genes and found that they were upregulated in ASJ(-) animals (Figure 4G). Collectively, these findings suggest that NPR-15 controls immunity via ASJ neurons.
The lack of avoidance behavior caused by NPR-15 loss-of-function is independent of immunity and neuropeptide genes

Having established the upregulation of immune genes in npr-15(tm12539) animals compared to WT animals (Figure 2B-F and Supplementary file 4), we determined whether the reduced pathogen avoidance of npr-15(tm12539) animals could be attributed to the upregulation of immune pathways. To investigate this, we employed RNAi to suppress immune transcription factors/regulators and evaluated their impact on pathogen avoidance behavior in both WT and npr-15(tm12539) animals. Our results indicate that none of the tested immune regulators (elt-2, pmk-1, daf-16, and hlh-30) were able to suppress the lack of pathogen avoidance behavior observed in response to S. aureus (Figure 5-figure supplement 1A-D). Furthermore, we inactivated immune genes that are not controlled by the immune regulators tested above, but none of them suppressed the lack of avoidance behavior of npr-15(tm12539) animals (Supplementary file 5). Given the possibility of functional redundancy among these genes, we cannot rule out the possibility that different combinations may play a role in controlling avoidance behavior. These findings indicate that the avoidance defect of npr-15(tm12539) animals is independent of the individual immune genes upregulated in these animals.

Additionally, previous studies have shown that neuropeptides expressed in the intestine can modulate avoidance behavior (Lee and Mylonakis, 2017), and we found that neuropeptides are among the most highly upregulated genes in npr-15(tm12539) animals (Figure 2B and Supplementary file 6). To study whether an intestinal signal may act through NPR-15 to regulate avoidance, we inactivated the upregulated intestinal neuropeptide genes in npr-15(tm12539) animals. Our experiments revealed that none of the inactivated intestinal-expressed neuropeptides suppressed the lack of avoidance behavior of npr-15(tm12539) animals in response to S. aureus (Figure 5-figure supplement 1E and Supplementary file 5). Therefore, it can be concluded that the absence of avoidance behavior upon loss of NPR-15 function is independent of both immune and intestinal neuropeptide signaling pathways.

NPR-15 controls pathogen avoidance via the sensory neuron ASJ and the intestinal TRPM channel GON-2

Having demonstrated that NPR-15 controls the immune response via the sensory neuron ASJ (Figure 4C and F-G), we sought to identify the neurons involved in the lack of avoidance behavior toward S. aureus observed in npr-15(tm12539) animals. First, we investigated the avoidance behavior of a pan-neuronal NPR-15 rescue strain, in which the suppressed pathogen avoidance of npr-15(tm12539) animals was successfully rescued (Figure 5A). Next, we asked which of the NPR-15-expressing neuronal cells could control pathogen avoidance. To answer this question, we evaluated the pathogen avoidance of the different neuron-ablated strains that were crossed with npr-15(tm12539) animals. Consistent with our previous findings, we found that only ASJ(-) animals exhibited reduced pathogen avoidance similar to that of npr-15(tm12539) animals (Figure 5B). This observation was further confirmed by rescuing NPR-15 under the control of an ASJ-specific promoter (Figure 5C). The pathogen avoidance of the other neuron-ablated strains was comparable to that of WT animals (Figure 5D-G). These results suggest that the loss of NPR-15 function suppresses behavioral immunity via sensory neurons, specifically ASJ.
Because the TRPM ion channels GON-2 and GTL-2 are required for pathogen avoidance (Filipowicz et al., 2021), we studied whether they may be part of the NPR-15 pathway that controls pathogen avoidance. We inactivated gon-2 and gtl-2 in npr-15(tm12539) and WT animals. Our findings showed that only gon-2 null animals, but not gtl-2 null animals, exhibited pathogen avoidance behavior similar to that of npr-15(tm12539) animals (Figure 5H-I). This suggests that NPR-15 and GON-2 may function in a shared pathway to regulate pathogen avoidance behavior. As it has previously been demonstrated that GON-2 modulates avoidance behavior through the intestine (Filipowicz et al., 2021), we confirmed the role of intestinal-expressed GON-2 in pathogen avoidance by inactivating gon-2 in an RNAi intestine-specific strain (MGH171), as well as in an RNAi neuron-specific strain (TU3401) that was used as a control. These animals were exposed to S. aureus to study their lawn occupancy. Our results showed that inactivation of gon-2 in MGH171 and MGH171;npr-15(tm12539) animals (Figure 5J), but not in TU3401 animals (Figure 5-figure supplement 2), resulted in avoidance behavior comparable to that of npr-15(tm12539) animals. These results suggest that NPR-15 acts through the intestinal TRPM channel GON-2 to control pathogen avoidance behavior. Since we had previously shown that only ASJ(-) animals among the neuron-ablated strains exhibit avoidance behavior similar to that of npr-15(tm12539) animals (Figure 5B and D-G), we investigated whether the NPR-15 control of avoidance behavior toward S. aureus in a GON-2-dependent manner involves ASJ. We used RNAi to inactivate gon-2 in ASJ(-), ASJ(-);npr-15(tm12539), npr-15(tm12539), and WT animals before exposing them to S. aureus to assay their lawn occupancy. Our results showed that the avoidance behavior of ASJ(-);gon-2 animals is comparable to that of ASJ(-), ASJ(-);npr-15(tm12539), and gon-2 null animals when exposed to S. aureus (Figure 5K). These results suggest that NPR-15 controls S. aureus avoidance in a GON-2-dependent manner via the ASJ neuron. In summary, the neuronal GPCR NPR-15 plays a dual role: suppressing the immune response and enhancing avoidance behavior toward S. aureus through sensory neurons, specifically ASJ. The control of immunity against the pathogen S. aureus is dependent on the ELT-2 and HLH-30 transcription factors, while the control of avoidance behavior is GON-2-dependent and mediated through the intestine (Figure 6).

Discussion

GPCRs play a crucial role in neuronal and non-neuronal tissues in shaping both immune and avoidance behavioral responses toward invading pathogens (Furness and Sexton, 2017). In this study, we investigated the dual functionality of GPCR/NPR-15 in regulating molecular innate immunity and pathogen avoidance behavior independently of aerotaxis. Additionally, we aimed to understand the mechanism by which the nervous system controls the interplay between these crucial survival strategies in response to pathogenic threats. Our findings indicate that NPR-15 suppresses immune responses against different pathogen infections by inhibiting the activity of the GATA/ELT-2 and HLH-30 immune transcription regulators. Moreover, we found that the control of pathogen avoidance behavior by NPR-15 is independent of aerotaxis and dependent on intestinal GON-2. Furthermore, we demonstrated that NPR-15 controls the immune-behavioral response via an amphid sensory neuron, ASJ, in response to S. aureus infection.
ASJ neurons are known to regulate different biological functions such as lifespan (Alcedo and Kenyon, 2004), dauer activities (Bargmann and Horvitz, 1991; Chung et al., 2013), head-directed light avoidance (Ward et al., 2008), and food search (Macosko et al., 2009). We show here that ASJ neurons play a role in the control of immune and behavioral responses against pathogen infection through GPCR/NPR-15. Our research additionally indicates that the regulation of NPR-15-mediated avoidance is not influenced by intestinal immune and neuropeptide genes. Given the potential for functional redundancy and our focus on genes upregulated in the absence of NPR-15, we cannot entirely rule out the possibility that unexamined immune effectors or neuropeptides, not transcriptionally controlled by NPR-15, might be involved. Different intestinal signals may also participate in the NPR-15 pathway that controls pathogen avoidance.

Our findings shed light on the role of NPR-15 in the control of the immune response. NPR-15 seems to suppress specific immune genes while activating pathogen avoidance behavior to minimize potential tissue damage and the metabolic energy cost associated with activating the molecular immune response against pathogen infections. Overall, the control of immune activation is essential for maintaining homeostasis and preventing excessive tissue damage caused by an overly aggressive and energy-costly response against pathogens (Martin et al., 2017; Otarigho and Aballay, 2021; Ganeshan and Chawla, 2014; Ganeshan et al., 2019).

Conclusion

Our research uncovers the dual regulatory role of NPR-15 in both immunity and avoidance behavior, independent of aerotaxis and mediated by amphid sensory neurons. The host relies on behavioral responses to minimize or completely avoid pathogen exposure, effectively preventing the activation of immune pathways and the production of immune effector molecules, which can be metabolically costly. Moreover, sustained and prolonged activation of the molecular immune system can have detrimental effects on the host. Understanding the organismal control of molecular and behavioral immune responses to pathogens can provide valuable insights into universal mechanisms used across species to maintain homeostasis during infections.

RNA interference

Knockdown of targeted genes was achieved by RNAi, feeding the animals E. coli strain HT115(DE3) expressing double-stranded RNA homologous to a target gene (Fraser et al., 2000; Timmons and Fire, 1998). RNAi was carried out as described previously (Sun et al., 2011). Briefly, E. coli with the appropriate vectors were grown in LB broth containing ampicillin (100 μg/ml) and tetracycline (12.5 μg/ml) at 37°C overnight and plated onto NGM plates containing 100 μg/ml ampicillin and 3 mM isopropyl β-D-thiogalactoside (RNAi plates). RNAi-expressing bacteria were grown at 37°C for 12-14 hr. Gravid adults were transferred to RNAi-expressing bacterial lawns and allowed to lay eggs for 2-3 hr. The gravid adults were removed, and the eggs were allowed to develop at 20°C to young adults. This was repeated for another generation (except for elt-2 RNAi) before the animals were used in the experiments. The RNAi clones were from the Ahringer RNAi library.

C. elegans survival assay on bacterial pathogens

P. aeruginosa and S. enterica were incubated in LB medium. S. aureus was incubated in TSA medium with nalidixic acid (20 μg/ml). The incubations were done at 37°C with gentle shaking for 12 hr. P. aeruginosa and S.
enterica were grown on modified NGM agar medium (0.35% peptone) and TSA, respectively. For partial-lawn assays, 20 μl of the overnight bacterial cultures were seeded at the center of the relevant agar plates without spreading. For full-lawn experiments, 20 μl of the bacterial culture was seeded and spread over the entire surface of the agar plate. No antibiotic was used for P. aeruginosa and S. enterica, while nalidixic acid (20 μg/ml) was used in the TSA plates for S. aureus. The seeded plates were allowed to grow for 12 hr at 37°C. The plates were left at room temperature for at least 1 hr before the infection experiments. Twenty synchronized young adult animals were transferred to the plates for infection, three technical replicate plates were set up for each condition (n=60 animals), and the experiments were performed in triplicate. The plates were then incubated at 25°C. Scoring was performed every 12 hr for P. aeruginosa and S. aureus, and every 24 hr for S. enterica. Animals were scored as dead if they did not respond to touch by a worm pick or lacked pharyngeal pumping. Live animals were transferred to fresh pathogen lawns each day. All C. elegans killing assays were performed three times independently.

Bacterial lawn avoidance assay

Bacterial lawn avoidance assays were performed by seeding 20 μl of P. aeruginosa PA14 or S. aureus NCTCB325 culture on 3.5 cm modified NGM agar plates (0.35% peptone) or TSA plates (0.35% TSA), respectively, which were cultured at 37°C overnight to obtain a partial lawn. The plates were left to cool to room temperature for about 1 hr, and 20 young adult animals grown on E. coli OP50 were then transferred to the center of each bacterial lawn. The number of animals on the bacterial lawns was counted at 12 and 24 hr after exposure.

Aversive training

Training plates of 3.5 cm diameter containing either E. coli OP50 on SK agar or S. aureus on TSA agar were prepared as described previously. Young gravid adult hermaphroditic animals that were grown on E. coli OP50 were washed with M9 and transferred to the training plates. They were allowed to roam for 4 hr at 25°C. After this, the animals were rewashed and transferred to a lawn-occupancy TSA plate seeded with S. aureus, as described above. The number of animals on the bacterial lawns was counted.

Avoidance assays at 8% oxygen

Avoidance assays as described above were carried out in a hypoxia chamber. Briefly, after young gravid adult hermaphroditic animals were transferred to the avoidance plates, the plates were placed in the hypoxia chamber, and the lids of the plates were removed. The chamber was purged with 8% oxygen (balanced with nitrogen) for 5 min at a flow rate of 25 l/min. The chamber was then sealed, and the assays were carried out. Control plates were incubated at ambient oxygen.

Pharyngeal pumping rate assay

WT and npr-15(tm12539) animals were synchronized by placing 20 gravid adult worms on NGM plates seeded with E. coli OP50 and allowing them to lay eggs for 60 min at 20°C. The gravid adult worms were then removed, and the eggs were allowed to hatch and grow at 20°C until they reached the young adult stage. The synchronized worms were transferred to NGM plates fully seeded with P. aeruginosa for 24 hr at 25°C. Worms were observed under the microscope with a focus on the pharynx. The number of contractions of the pharyngeal bulb was counted over 60 s. Counting was conducted in triplicate and averaged to obtain pumping rates.
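The lawn occupancy readout above is a simple fraction of animals on the lawn at each timepoint; a minimal sketch of the calculation, using hypothetical counts rather than data from this study:

```python
# Sketch of the lawn-occupancy calculation described above; the counts
# below are hypothetical placeholders, not data from this study.

def lawn_occupancy(on_lawn: int, total: int) -> float:
    """Fraction of animals on the bacterial lawn at one timepoint."""
    return on_lawn / total

# 20 animals per plate, scored at 12 and 24 hr after exposure.
counts = {"WT": {12: 4, 24: 2}, "npr-15(tm12539)": {12: 17, 24: 16}}
for strain, by_time in counts.items():
    for hr, n_on in by_time.items():
        occ = lawn_occupancy(n_on, 20)
        # Avoidance is the complement of occupancy.
        print(f"{strain} at {hr} hr: occupancy {occ:.0%}, avoidance {1 - occ:.0%}")
```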
Defecation rate assay

WT and npr-15(tm12539) animals were synchronized by placing 20 gravid adult worms on NGM plates seeded with E. coli OP50 and allowing them to lay eggs for 60 min at 20°C. The gravid adult worms were then removed, and the eggs were allowed to hatch and grow at 20°C until they reached the young adult stage. The synchronized worms were transferred to NGM plates fully seeded with P. aeruginosa for 24 hr at 25°C. Worms were observed under a microscope at room temperature. For each worm, an average of 10 intervals between two defecation cycles was measured. The defecation cycle was identified as a peristaltic contraction beginning at the posterior body of the animal and propagating to the anterior part of the animal, followed by feces expulsion.

Brood size assay

The brood size assay was done following earlier described methods (Berman and Kenyon, 2006; Otarigho and Aballay, 2020). Ten L4 animals from egg-synchronized populations were transferred to individual NGM plates (seeded with E. coli OP50) (described above) and incubated at 20°C. The animals were transferred to fresh plates every 24 hr. The progenies were counted and removed every day.

C. elegans longevity assays

Longevity assays were performed on NGM plates containing live or UV-killed E. coli strains HT115 or OP50 as described earlier (Sun et al., 2011; Kumar et al., 2019; Otarigho and Aballay, 2021; Otarigho and Aballay, 2020). Animals were scored as alive, dead, or gone each day. Animals that failed to display touch-provoked or pharyngeal movement were scored as dead. Experimental groups contained 60-100 animals, and the experiments were performed in triplicate. The assays were performed at 20°C.

Intestinal bacterial load visualization and quantification

Intestinal bacterial loads were visualized and quantified as described earlier (Sun et al., 2011; Otarigho and Aballay, 2020). Briefly, P. aeruginosa-GFP lawns were prepared as described above. The plates were cooled to ambient temperature for at least an hour before seeding with young gravid adult hermaphroditic animals, and the setup was placed at 25°C for 24 hr. The animals were transferred from the P. aeruginosa-GFP plates to the center of fresh E. coli plates for 10 min to eliminate P. aeruginosa-GFP on their bodies. This step was repeated two more times to further eliminate external P. aeruginosa-GFP left from the earlier steps. Subsequently, 10 animals were collected and used for fluorescence imaging to visualize the bacterial load, while another 10 were transferred into 100 µl of PBS plus 0.01% Triton X-100 and ground. Serial dilutions of the lysates (10^1-10^10) were seeded onto LB plates containing 50 µg/ml of kanamycin to select for P. aeruginosa-GFP cells and grown overnight at 37°C. Single colonies were counted the next day and represented as the number of bacterial cells or CFU per animal.

Fluorescence imaging

Fluorescence imaging was carried out as described previously (Otarigho and Aballay, 2020). Briefly, animals were anesthetized using an M9 salt solution containing 50 mM sodium azide and mounted onto 2% agar pads. The animals were then visualized for bacterial load using a Leica M165 FC fluorescence stereomicroscope. The diameter of the intestinal lumen was measured using Fiji-ImageJ software. At least 10 animals were used for each condition.

RNA sequencing and bioinformatic analyses

Approximately 40 gravid WT and npr-15(tm12539) animals were placed for 3 hr on 10 cm NGM plates (seeded with E.
coli OP50) (described above) to obtain a synchronized population, which developed and grew to the L4 larval stage at 20°C. Animals were washed off the plates with M9, frozen in QIAzol by ethanol/dry ice, and stored at -80°C prior to RNA extraction. Total RNA was extracted using the RNeasy Plus Universal Kit (QIAGEN, Netherlands). Residual genomic DNA was removed using TURBO DNase (Life Technologies, Carlsbad, CA, USA). A total of 6 μg of total RNA was reverse-transcribed with random primers using the High-Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Foster City, CA, USA).

Library construction and RNA sequencing on the Illumina NovaSeq 6000 platform were done following the methods described by Zhu et al., 2018, and Yao et al., 2018; paired-end reads of 150 bp were obtained for subsequent data analysis. The RNA sequence data were analyzed using a workflow constructed for Galaxy (https://usegalaxy.org) as described (Jalili et al., 2020) and validated using Lasergene DNASTAR software. The RNA reads were aligned to the C. elegans genome (WS271) using the aligner STAR. Counts were normalized for sequencing depth and RNA composition across all samples. Differential gene expression analysis was then performed on the normalized samples. Genes exhibiting at least a twofold change were considered differentially expressed. The differentially expressed genes were submitted to the SimpleMine tool from WormBase (https://www.wormbase.org/tools/mine/simplemine.cgi) to generate information such as WormBase ID and gene name, which were employed for further analyses. Gene ontology analysis was performed using the WormBase IDs in the DAVID Bioinformatics Database (https://david.ncifcrf.gov) (Dennis et al., 2003) and validated using a C. elegans data enrichment analysis tool (https://wormbase.org/tools/enrichment/tea/tea.cgi). The enrichment analysis tool on WormBase indicates that all significantly enriched terms have a q value less than 0.1. Immune and age-determination pathways were obtained using WormExp version 1 (http://wormexp.zoologie.uni-kiel.de/wormexp/) (Yang et al., 2016) using the transcription factor target category. The Venn diagrams were obtained using the web tool InteractiVenn (http://www.interactivenn.net) (Heberle et al., 2015) and the Bioinformatics and Evolutionary Genomics tool (http://bioinformatics.psb.ugent.be/webtools/Venn/). Neuron wiring was analyzed using the database of synaptic connectivity of C. elegans (White et al., 1986; http://ims.dse.ibaraki.ac.jp/ccep-tool/).

RNA isolation and qRT-PCR

Animals were synchronized, and total RNA extraction was done following the protocol described above. Quantitative reverse transcription-PCR (qRT-PCR) was conducted using the Applied Biosystems One-Step Real-time PCR protocol with SYBR Green fluorescence (Applied Biosystems) on an Applied Biosystems 7900HT real-time PCR machine in 96-well plate format. Twenty-five-microliter reactions were analyzed as outlined by the manufacturer (Applied Biosystems). The relative fold changes of the transcripts were calculated using the comparative CT (2^-ΔΔCT) method and normalized to pan-actin (act-1, -3, -4). The cycle thresholds of the amplification were determined using StepOnePlus Real-Time PCR System Software v2.3 (Applied Biosystems). All samples were run in triplicate. The primer sequences are presented in Supplementary file 7 and are available upon request.
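For the comparative CT method just described, the fold change follows directly from the CT values; a minimal sketch with hypothetical CT values, using pan-actin as the normalizer and WT as the calibrator:

```python
# Comparative CT (2^-ddCT) fold change, normalized to pan-actin, as described
# above. CT values are hypothetical placeholders, not data from this study.

def fold_change(ct_target_mut, ct_actin_mut, ct_target_wt, ct_actin_wt):
    d_ct_mut = ct_target_mut - ct_actin_mut   # normalize mutant to pan-actin
    d_ct_wt = ct_target_wt - ct_actin_wt      # normalize WT to pan-actin
    dd_ct = d_ct_mut - d_ct_wt                # calibrate to WT
    return 2 ** (-dd_ct)

# Example: a target immune gene amplifying ~3 cycles earlier in the mutant.
print(fold_change(22.0, 15.0, 25.0, 15.0))   # -> 8.0 (eightfold upregulation)
```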
Quantification and statistical analysis

Statistical analysis was performed with Prism 8 version 8.1.2 (GraphPad). All error bars represent the standard deviation. The two-sample t-test was used when needed, and the data were judged to be statistically significant when p<0.05. In the figures, asterisks (*) denote statistical significance as follows: NS, not significant; *, p<0.01; **, p<0.001; ***, p<0.0001, as compared with the appropriate controls. The Kaplan-Meier method was used to calculate the survival fractions, and statistical significance between survival curves was determined using the log-rank test. All experiments were performed at least three times.

Figure 2. NPR-15 inhibits the expression of immune and aversion-related genes/pathways. (A) Volcano plot of upregulated and downregulated genes in npr-15(tm12539) vs. wild-type (WT) animals. Red and blue dots represent significantly upregulated and downregulated genes, respectively, while the gray dots represent non-significant genes. (B) Gene ontology analysis of upregulated genes in npr-15(tm12539) vs. WT animals. The result was filtered based on significantly enriched terms, with a q value <0.1. (C) Representation factors of immune pathways for the upregulated immune genes in npr-15(tm12539) vs. WT animals. (D) Venn diagram showing the upregulated immune genes in each pathway in npr-15(tm12539) vs. WT animals. (E) Quantitative reverse transcription-PCR (qRT-PCR) analysis of ELT-2-dependent immune gene expression in WT and npr-15(tm12539) animals. Bars represent means, while error bars indicate the standard deviation (SD) of three independent experiments; *p<0.05, **p<0.001, and ***p<0.0001. (F) qRT-PCR analysis of HLH-30-dependent immune gene expression in WT and npr-15(tm12539) animals. Bars represent means, while error bars indicate the SD of three independent experiments; *p<0.05, **p<0.001, and ***p<0.0001. The online version of this article includes the following figure supplement(s) for figure 2.

Figure 3. NPR-15 loss-of-function enhances immunity via ELT-2 and HLH-30 when exposed to S. aureus. (A) Wild-type (WT) and npr-15(tm12539) animals fed with elt-2 RNAi were exposed to an S. aureus full lawn and scored for survival. EV, empty vector RNAi control. (B) WT and npr-15(tm12539) animals fed with hlh-30 RNAi were exposed to an S. aureus full lawn and scored for survival. EV, empty vector RNAi control. (C) WT and npr-15(tm12539) animals fed with hlh-30 and elt-2 RNAi were exposed to an S. aureus full lawn and scored for survival. EV, empty vector RNAi control. (D) WT and npr-15(tm12539) animals fed with pmk-1 RNAi were exposed to an S. aureus full lawn and scored for survival. EV, empty vector RNAi control. (E) WT and npr-15(tm12539) animals fed with daf-16 RNAi were exposed to an S. aureus full lawn and scored for survival. EV, empty vector RNAi control. The online version of this article includes the following figure supplement(s) for figure 3.

Figure 5-figure supplement 2. The transient receptor potential melastatin (TRPM) channel GON-2 controls avoidance independently of the nervous system.
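A minimal sketch of the Kaplan-Meier product-limit estimate used for the survival fractions above; the death and censoring times below are invented, and the log-rank comparison between curves would be layered on top (e.g., with a survival-analysis package):

```python
# Kaplan-Meier survival fractions from scoring times, as used for the killing
# assays above. Times (hr) are hypothetical; animals alive at the end of the
# assay are treated as censored.

def kaplan_meier(death_times, censored_times):
    events = sorted(set(death_times))
    total = len(death_times) + len(censored_times)
    s, curve = 1.0, []
    for t in events:
        d = death_times.count(t)                         # deaths at time t
        removed = sum(1 for x in death_times if x < t) + \
                  sum(1 for x in censored_times if x < t)
        n = total - removed                              # still at risk at t
        s *= 1 - d / n                                   # product-limit step
        curve.append((t, round(s, 3)))
    return curve

print(kaplan_meier([24, 36, 36, 48, 60], censored_times=[60]))
# -> [(24, 0.833), (36, 0.5), (48, 0.333), (60, 0.167)]
```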
2023-08-02T13:11:12.813Z
2024-03-06T00:00:00.000
{ "year": 2024, "sha1": "f42fbfef94b09100d34909bbc061c9a6207cfb2e", "oa_license": "CCBY", "oa_url": "https://doi.org/10.7554/elife.90051", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "6f745c71dfbfb881c00a5be2e9cc818a9a78db92", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233303780
pes2o/s2orc
v3-fos-license
Cost-utility analysis of imrecoxib compared with diclofenac for patients with osteoarthritis

Background To estimate the cost-utility of imrecoxib compared with diclofenac, as well as the addition of a proton pump inhibitor to both treatment strategies, for patients with osteoarthritis, from a Chinese healthcare perspective.

Methods A Markov model was built. Costs of managing osteoarthritis and initial adverse events were collected from a medical database that gathers information from 170 hospitals. Other parameters were obtained from the literature. Subgroup analyses were conducted for people at high risk of gastrointestinal or cardiovascular adverse events. Deterministic and probabilistic sensitivity analyses were performed.

Results Imrecoxib was highly cost-effective compared with diclofenac (the ICER was $401.58 and $492.77 in patients at low and high gastrointestinal and cardiovascular risk, respectively). The addition of a proton pump inhibitor was more cost-effective than single-drug therapy for both treatment strategies. Findings remained robust to sensitivity analyses. There was a 59.04% and 57.16% probability for the co-prescription of imrecoxib and a proton pump inhibitor to be the most cost-effective strategy in all patients considered, using a cost-effectiveness threshold of $30,000.

Conclusions The addition of a proton pump inhibitor to both imrecoxib and diclofenac is advised. Imrecoxib provides a valuable option for patients with osteoarthritis. Uncertainties exist in the model, and the suggestions should be adopted with caution.

Supplementary Information The online version contains supplementary material available at 10.1186/s12962-021-00275-7.

Background

Osteoarthritis (OA) is a chronic disease with a high prevalence of 46.3% among Chinese individuals aged 40 years and above [1]. OA is associated with a high disability rate, which can increase the incidence of cardiovascular (CV) disease and the all-cause mortality rate, and it is a main cause of disability in many countries [2]. According to the recommendations of both the American Academy of Orthopedic Surgeons (AAOS) and the Osteoarthritis Research Society International (OARSI), the use of nonsteroidal anti-inflammatory drugs (NSAIDs) is strongly recommended, and NSAIDs are also listed as first-line drugs for managing OA in the Chinese Guideline for Diagnosis and Treatment of OA [2-4]. There are two types of NSAIDs: traditional NSAIDs and the newly developed selective COX-2 inhibitors. Similar efficacy of pain relief was found for both traditional NSAIDs and COX-2 inhibitors, while traditional NSAIDs were associated with gastrointestinal (GI) side effects, and selective COX-2 inhibitors were developed to reduce GI adverse events [3, 5]. Meanwhile, cardiovascular (CV) adverse events were found with both traditional NSAIDs and COX-2 inhibitors. Imrecoxib is a Chinese-patent COX-2 inhibitor, which was approved by the Chinese Food and Drug Administration (CFDA) in 2011. Diclofenac is a traditional NSAID, which is widely prescribed for managing OA. Imrecoxib was reported to have a lower rate of GI adverse events [6] but a higher drug price; the long-term cost-effectiveness of imrecoxib remained unknown. The National Institute for Health and Care Excellence (NICE) published a guideline (CG59) for the management of OA in 2008 [7, 8]. In the guideline, the cost-effectiveness of NSAIDs, selective COX-2 inhibitors, selective COX-2 inhibitors + proton pump inhibitor (PPI), and NSAIDs + PPI was compared.
It drew efficacy and safety data from three randomized controlled trials (RCTs): CLASS (celecoxib, ibuprofen, and diclofenac), MEDAL (etoricoxib and diclofenac), and TARGET (lumiracoxib, naproxen, and ibuprofen). It assumed that the NSAIDs included had the same efficacy of pain relief when managing OA but different GI and CV risks, and therefore different cost-effectiveness. Furthermore, NICE also provided an OA model that has been widely used in different regions to explore the cost-effectiveness of drugs managing OA from the perspective of local healthcare systems, which provides an efficient way to conduct cost-effectiveness analyses for drugs managing OA [5, 9-11]. The objective of this study is to perform a cost-utility analysis of imrecoxib and diclofenac, as well as of the addition of a PPI to both imrecoxib and diclofenac, for patients with osteoarthritis. The model used in the present study was based on the OA model provided by NICE, and the analyses were conducted from the perspective of the Chinese healthcare system, hoping to provide suggestions for relevant stakeholders.

Methods

The model used in the present study is a cost-utility analysis based on the CG59 NICE OA model, which is a Markov model. The outcomes are incremental cost, incremental quality-adjusted life years (QALYs), and the incremental cost-effectiveness ratio (ICER). In addition, the present study was performed in accordance with the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) (Additional file 1: Table S7). This study was approved by the institutional review board of Zhejiang University School of Public Health, and no human subjects were involved.

Comparators

In the present study, we compared the cost-effectiveness of imrecoxib (100 mg twice a day, 100 mg BID) versus (vs.) diclofenac (50 mg three times a day, 50 mg TID), with and without the addition of an omeprazole co-prescription (200 mg QD). Imrecoxib was chosen because it is a relatively new selective COX-2 inhibitor developed in China in 2011, and its cost-effectiveness was not fully known, which generated great interest in its cost-effectiveness within the healthcare system of China. Diclofenac is a widely used traditional NSAID in managing OA. It is necessary to compare the cost-effectiveness of the two drugs to provide more suggestions for relevant departments in China when managing OA from the perspective of long-term cost-effectiveness. The addition of a PPI was considered to be more cost-effective than a single drug in the NICE guideline; however, the cost-effectiveness of the addition to imrecoxib remained unknown. Therefore, the addition of omeprazole, which is widely used as a PPI for patients with osteoarthritis, to imrecoxib and diclofenac was also considered in the analysis.

Model description

Both the efficacy and safety of the different treatment strategies were taken into consideration in the present model. The details of the model can be found in the NICE guideline (Additional file 1: Figure S1) [7, 8]. The health states that make up the Markov model represent a range of possible adverse events (AEs): GI symptoms, symptomatic ulcer, complicated GI events, myocardial infarction (MI), stroke, and heart failure (HF). Except for GI discomforts, the other AEs are assumed to have a continuing impact over the patients' remaining lifetimes; therefore, there are five post-AE states: post symptomatic ulcer, post complicated GI, post MI, post stroke, and post HF. In addition, death and a normal state without AEs are also included in the model.
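A minimal sketch of the kind of cohort recursion such a Markov model performs, with the state space collapsed to a toy set; all transition probabilities and utilities below are invented for illustration and are not the study's estimates:

```python
import numpy as np

# Illustrative Markov cohort step for a model like the one above: 3-month
# cycles, toy states, placeholder probabilities and utilities.
states = ["well", "GI_event", "post_GI", "dead"]
P = np.array([
    [0.985, 0.010, 0.000, 0.005],   # well -> ...
    [0.000, 0.000, 0.990, 0.010],   # GI event resolves to the post state
    [0.000, 0.000, 0.995, 0.005],
    [0.000, 0.000, 0.000, 1.000],   # dead is absorbing
])
utility = np.array([0.69, 0.55, 0.64, 0.0])     # QALY weight per state (toy)

cohort = np.array([100_000.0, 0, 0, 0])
disc_annual = 0.05                              # 5% per Chinese guidelines
disc_cycle = (1 + disc_annual) ** 0.25 - 1      # per 3-month cycle
total_qaly = 0.0
for cycle in range(100):                        # 100 cycles = 25 years
    nxt = cohort @ P
    # Half-cycle correction: credit the average of start/end occupancy.
    occupancy = 0.5 * (cohort + nxt)
    total_qaly += (occupancy @ utility) * 0.25 / (1 + disc_cycle) ** cycle
    cohort = nxt
print(total_qaly / 100_000)                     # discounted QALYs per patient
```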
Each health state has an associated cost and QALYs. It was assumed that once patients experienced an AE (except for GI discomforts, because GI discomfort was considered a minor AE and patients in that state do not need to stop the medication), they would stop the medication and stay in the corresponding post state until death. The model is a lifetime model and is terminated when patients are 80 years old or dead. Two groups with different ages (55 years old and 65 years old) were both estimated, in line with different risks of AEs. The annual discount rate for both cost and utility was set to 5% according to the Chinese Guidelines of Pharmacoeconomics [12]. Half-cycle corrections were made for both cost and QALYs. The simulation was carried out for 100 cycles and 60 cycles of 3 months each for patients in the low and high GI and CV risk groups, respectively. Cohort simulation with 100,000 patients was performed in the base-case analyses, and 100,000 Monte Carlo simulations were performed in the probabilistic sensitivity analysis (PSA).

Patients

The present model estimated results for OA patients aged 55 years and 65 years. Patients aged 55 years were assumed to have lower GI and CV risk, while patients aged 65 years were assumed to have higher GI and CV risk (a 2.96-times greater risk of developing an ulcer or complicated GI events, and a 1.94-times greater risk of developing CV events) [8].

Cost

The costs of managing OA and initial AEs were extracted from the Hospital Information System (HIS) of 170 hospitals through the Su-Value Database from 2016 to 2018 [13]. Data were extracted according to ICD-10 codes. It was assumed that the cost of managing OA consisted of drug cost and other outpatient expenses extracted from the Su-Value Database. Drug cost was calculated using the treatment duration in each cycle, the recommended dose, and the drug price; the recommended dose and drug price were obtained from the Beijing Medicine Sunshine Purchasing System [14], and the treatment duration in each cycle was adjusted according to consultation with doctors (Additional file 1: Tables S1-3). In each cycle, the cost of managing GI discomforts and symptomatic ulcer for all patients was assumed to include the cost of one outpatient visit, while that of complicated GI events, stroke, HF, and MI was assumed to include one outpatient and one inpatient visit, similar to the assumption in the CG59 guideline (Additional file 1: Tables S4-5) [8]. It was assumed that there is no maintenance cost for GI events, but there is a risk that patients who experienced GI events would suffer them again; therefore, the costs of the post complicated GI and post symptomatic ulcer states were calculated by multiplying the recurrence rate by the cost of the initial AE states. For CV states, it was assumed that patients who experienced CV events would incur a maintenance cost to manage them. Because the costs of the post-CV states could not be obtained directly from the Su-Value Database, the maintenance costs of the three post-CV states were obtained from literature reporting costs in Chinese patients (Table 1). When patients could not continue the medication of a specific drug, topical diclofenac was assumed to be adopted as the medication to manage OA [15], following the suggestion of the NICE OA model.

Quality of life

QALYs were used to represent quality of life; due to sparse data, the QALY data were extracted from the NICE OA model.
The QALYs of OA patients without any AEs were measured based on the efficacy of the drug and the QALYs associated with the OA symptoms themselves; it was assumed that all NSAIDs/selective COX-2 inhibitors were equally efficacious, which implies that the QALYs of OA patients treated with NSAIDs/selective COX-2 inhibitors were higher than those of OA patients without any drug medication. The QALYs of the initial and post-AE states were also extracted (Table 1).

Transition probabilities

There are RCTs comparing imrecoxib vs. celecoxib and celecoxib vs. diclofenac, while there is no RCT comparing imrecoxib and diclofenac directly. Therefore, an indirect comparison was conducted to obtain the relative risk of AEs for imrecoxib compared to diclofenac. The absolute AE rates of diclofenac were extracted from a meta-review that pooled the AEs of diclofenac observed in CLASS (celecoxib 800 mg, diclofenac 150 mg, ibuprofen 2400 mg), MEDAL (etoricoxib 73 mg, diclofenac 150 mg), EDGE (etoricoxib 90 mg, diclofenac 150 mg), and CONDOR (celecoxib 400 mg, diclofenac 150 mg) [11]. The relative risk of celecoxib compared to diclofenac was obtained from a meta-review that pooled the relative risks of AEs observed in CLASS and CONDOR, in which comparisons of celecoxib and diclofenac were conducted [9]. The relative risk of imrecoxib compared to celecoxib was obtained from the relevant literature; through literature review, two RCTs comparing the safety of imrecoxib and celecoxib were included in the analysis [16, 17]. The addition of a PPI reduces the risk of GI-related AEs for both NSAIDs and selective COX-2 inhibitors, and this effect was obtained from literature reporting the results of a meta-analysis [5]. The proportions of withdrawals due to GI symptoms were extracted from the literature and were 13.9% and 11.2% for NSAIDs and selective COX-2 inhibitors, respectively [8]. The observation period of the rates reported in the literature may not be consistent with the cycle length of the model; thereby, the probability was obtained by adjusting the instantaneous rate using the formulas [18]: r = -[ln(1 - P1)]/t1 and P2 = 1 - exp(-r·t2), where r represents the instantaneous rate, P1 represents the rate observed in the literature during a specific period, P2 is the probability needed in the model, t1 is the observation time in the literature, and t2 is the cycle time set in the model. The mortality rates of the general population and of patients with AEs were converted to probabilities using this formula (Additional file 1: Table S6).

Sensitivity analysis

Deterministic sensitivity analysis (DSA) was performed by varying parameters to explore the robustness of the model and assess the main influencing factors: the discount rate was varied from 0 to 8% according to the Chinese Guidelines of Pharmacoeconomics [12]; the cost, utility, and probability parameters were varied by ±20%. In addition to the DSA, a PSA was also performed. The NICE updated guidance for technology assessment requires that all cost-effectiveness models submitted to the institute use PSA [19]. DSA can only analyze the impact of a limited number of input parameters on the results simultaneously (in the present study, distributions for cost, probability, and utility were set; Additional file 1: Table S8) [12]. When the model runs, a value for each input parameter is randomly drawn according to its preset distribution [7, 8], and the mean cost and QALYs were obtained from the PSA results.

Incremental cost-effectiveness ratio (ICER)

In the base-case analysis, no treatment strategy was strictly dominated by any other strategy.
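A minimal sketch of the dominance screening and incremental ICER computation referred to here; the cost/QALY pairs are invented placeholders, and extended dominance is not screened in this toy version:

```python
# ICER calculation with strict-dominance screening, as used for base-case
# comparisons of this kind. Cost/QALY values are illustrative, not the study's.

def icers(strategies):
    s = sorted(strategies, key=lambda x: x[1])          # sort by cost
    kept = []
    for name, cost, qaly in s:
        if kept and qaly <= kept[-1][2]:
            print(f"{name}: strictly dominated")        # costlier, no QALY gain
            continue
        kept.append((name, cost, qaly))
    for (_, c0, q0), (name, c1, q1) in zip(kept, kept[1:]):
        print(f"{name}: ICER = ${(c1 - c0) / (q1 - q0):,.2f} per QALY")

icers([("diclofenac", 900.0, 9.00),
       ("diclofenac+PPI", 950.0, 9.15),
       ("imrecoxib", 1400.0, 9.20),
       ("imrecoxib+PPI", 2300.0, 9.30)])
```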
The addition of a PPI to both imrecoxib and diclofenac was cost-effective: the ICER of the co-prescription of a PPI with imrecoxib was $8656.09 and $8178.07 per QALY in the low and high GI and CV risk groups, respectively, and the ICER of the co-prescription of a PPI with diclofenac was $320.83 and $363.61 per QALY in the low and high GI and CV risk groups, respectively. For single drugs, imrecoxib was more cost-effective than diclofenac, with ICERs of $401.58 and $492.77 per QALY in the low and high GI and CV risk groups, respectively. Meanwhile, the co-prescription of a PPI with imrecoxib was more cost-effective than the co-prescription of a PPI with diclofenac (the ICER was $8274.80 and $7011.67 in the low and high GI and CV risk groups, respectively) (Table 2).

Parameters influencing the ICERs

DSAs were performed on the base-case results for all patients considered. They showed that the main influencing factors for the ICERs reported in the base-case results were the risk of MI, the discount rates for utility and cost, and the utility of GI discomforts. Parameters related to MI were important influencing factors, including the relative risk of the probability of MI (imrecoxib vs. diclofenac, NSAIDs + PPI vs. NSAIDs, selective COX-2 inhibitors + PPI vs. selective COX-2 inhibitors) and the cost of post-MI (Additional file 1: Figures S2-9). Although uncertainties exist in the present model given the wide ranges of the parameters, the base-case results were robust to the sensitivity analysis: most of the ICERs were below $10,000 (approximately 1.0 GDP per capita), while the ICERs of imrecoxib + PPI vs. imrecoxib and imrecoxib + PPI vs. diclofenac + PPI exceeded $10,000 but remained below $15,000 in all the patients considered (Additional file 1: Figures S2-9).

Probabilistic representation of uncertainty

PSAs were performed, and the cost-effectiveness scatterplot for the comparison of the two single drugs, imrecoxib vs. diclofenac, was generated (Figs. 1 and 2). The results of the PSA suggested that, in the low GI and CV risk group, for cost-effectiveness thresholds below $500, diclofenac has the highest probability of being the most cost-effective option; for thresholds between $500 and $3000, the co-prescription of diclofenac and a PPI was the most cost-effective option; and for thresholds above $3000, the co-prescription of imrecoxib and a PPI has the highest probability of being the most cost-effective option. In the high GI and CV risk group, for thresholds below $300, diclofenac was the most cost-effective option; for thresholds between $300 and $2500, the co-prescription of diclofenac and a PPI was likely to be the most cost-effective option; and for thresholds above $2500, the co-prescription of imrecoxib and a PPI has the highest probability of being the most cost-effective option (Figs. 3 and 4). Using the threshold of $30,000 (approximately 3.0 GDP per capita), there were probabilities of 59.04% and 57.16% for imrecoxib plus a PPI to be the most cost-effective option in the low and high GI and CV risk groups, respectively.

Discussion

To our knowledge, this is the first study to explore the cost-effectiveness of imrecoxib and diclofenac, and of the addition of a PPI to both treatment strategies. A Markov model based on the NICE OA model was used in the present study. DSA was performed to explore the robustness of the model, with one parameter changed at a time according to its preset range.
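A minimal sketch of a one-way DSA of the type described: each parameter is varied ±20% around its base value, the model is re-run, and the resulting ICER range is recorded. The run_model function below is a stand-in surrogate with made-up coefficients, not the study's Markov model:

```python
# One-way DSA sketch: vary one parameter +/-20%, re-run the model, record the
# ICER range. run_model is a toy surrogate for the full Markov evaluation.

def run_model(rr_mi=1.0, cost_post_mi=2000.0):
    # Toy surrogate returning (incremental cost, incremental QALYs).
    d_cost = 500.0 + 0.3 * cost_post_mi * (rr_mi - 0.8)
    d_qaly = 0.15 - 0.05 * (rr_mi - 1.0)
    return d_cost, d_qaly

base = {"rr_mi": 1.0, "cost_post_mi": 2000.0}
for param, value in base.items():
    lo_hi = []
    for factor in (0.8, 1.2):                      # +/-20%
        args = dict(base, **{param: value * factor})
        d_cost, d_qaly = run_model(**args)
        lo_hi.append(d_cost / d_qaly)
    print(f"{param}: ICER range ${min(lo_hi):,.0f} to ${max(lo_hi):,.0f}")
```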
PSA was performed to explore the impact of the joint uncertainties of the model parameters: costs, utilities, and transition probabilities. The results from the PSAs can provide more information than the base-case analyses and DSAs [19, 23]. Of the four treatment strategies, none was strictly dominated by any other strategy in the base-case analysis. Using the cost-effectiveness threshold of $10,000 (approximately 1.0 GDP per capita), the addition of a PPI to both imrecoxib and diclofenac was more cost-effective (especially for diclofenac) in long-term use, which is similar to other reports on the cost-effectiveness of adding a PPI to NSAIDs or selective COX-2 inhibitors [5]. The cost-effectiveness of the addition of a PPI was also found in patients at low GI and CV risk, which provides a good basis for clinicians to prescribe a PPI when prescribing imrecoxib or diclofenac, even for patients at low GI and CV risk. Based on the present findings, co-prescribing a PPI with imrecoxib or diclofenac can be a good way to save money.

For single drugs, diclofenac was largely dominated by imrecoxib; it was more likely for imrecoxib to be the cost-effective option compared to diclofenac. The anti-inflammatory mechanism of NSAIDs is to inhibit cyclooxygenase (COX), which is required for prostaglandin synthesis [3]. There are mainly two types of COX: COX-1 and COX-2. COX-1 is involved in platelet activation, gastrointestinal protection, and kidney function, while COX-2 is involved in inflammation. Traditional NSAIDs inhibit both COX-1 and COX-2, causing GI toxicity, while selective COX-2 inhibitors selectively inhibit COX-2 [6, 24]. From the perspective of this mechanism, the lower incidence of GI events with imrecoxib compared to diclofenac can be explained [6, 25]. Taking both price and the incidence of AEs into account, imrecoxib can achieve relatively high cost-effectiveness despite its higher price compared to diclofenac. In China, according to the medical insurance policies, diclofenac is listed as a first-line drug with a higher reimbursement ratio, whereas imrecoxib is listed as a second-line drug [26]. From the perspective of long-term cost-effectiveness, a higher reimbursement ratio for imrecoxib could be expected to encourage its wider use, because it is a way to save money, especially with the increasing number of OA patients in China nowadays [1].

Although uncertainties existed in the present study, the results of the base-case analyses were robust to the sensitivity analyses. According to the WHO-CHOICE recommendations, if the ICER is less than 1.0 GDP per capita (approximately $10,000 in China), the treatment is highly cost-effective compared to the alternative; if the ICER is less than 3.0 GDP per capita, the treatment is cost-effective compared to the alternative [27]. In the present study, with the parameters varied across their wide ranges in the DSAs, the ICERs were all below $15,000 (approximately 1.5 GDP per capita). In the PSAs, using the threshold of $30,000 (approximately 3.0 GDP per capita), the co-prescription of imrecoxib and a PPI had the highest probability of being the most cost-effective option, and diclofenac was dominated by imrecoxib; these findings were robust in the PSAs.

There are several limitations to this study. First, as with all modeling studies, standard treatment was assumed for all patients when they suffered from osteoarthritis or experienced other AEs.
In the real world, patients may change their original treatment option to another for different reasons; for example, it was assumed that when patients moved to post-AE states and post-treatment states, they would stop the original medication and use topical diclofenac instead, but in the real world there are many other options for patients. However, it is precisely because of the preset standard treatment that the comparison of the cost-effectiveness of different treatment strategies can be achieved. Second, there were large RCTs comparing the efficacy and safety of celecoxib and diclofenac, while there were limited RCTs comparing imrecoxib and celecoxib, or imrecoxib and diclofenac. To decrease the effect of the relative risks of AE probabilities of imrecoxib versus celecoxib, DSA was performed to explore the influence of these relative risks, and the results stayed robust across the wide range of relative risks of AE probabilities between imrecoxib and diclofenac. Third, when adapting the model to a Chinese perspective, part of the data was collected from the NICE model, which may not be the same as that in the Chinese population. To decrease the uncertainties, Chinese real-world data were used to represent costs, and the general population mortality rate was collected from the Chinese Yearbook to decrease the uncertainties caused by the sources of the input parameters.

Conclusion

Although uncertainties in the model exist, based on our findings, it is suggested that a PPI can be added when prescribing imrecoxib or diclofenac (especially for diclofenac) to manage OA in long-term use, due to the high cost-effectiveness of the co-prescription of a PPI obtained in the present study, even for patients at low GI and CV risk. Imrecoxib provides a valuable treatment option; clinicians can consider using imrecoxib, and a higher reimbursement ratio for imrecoxib is expected to encourage its use.
2021-04-20T13:52:44.561Z
2021-04-20T00:00:00.000
{ "year": 2021, "sha1": "db9046969539b2bef0a3c3cef600835aa668b97b", "oa_license": "CCBY", "oa_url": "https://resource-allocation.biomedcentral.com/track/pdf/10.1186/s12962-021-00275-7", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "db9046969539b2bef0a3c3cef600835aa668b97b", "s2fieldsofstudy": [ "Economics", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
33189396
pes2o/s2orc
v3-fos-license
Thioredoxin-dependent Peroxide Reductase from Yeast*

A 25-kDa antioxidant enzyme that provides protection against oxidation systems capable of generating reactive oxygen and sulfur species has previously been identified. The nature of the oxidant eliminated by, and the physiological source of reducing equivalents for, this enzyme, however, were not known. The 25-kDa enzyme is now shown to be a peroxidase that reduces H2O2 and alkyl hydroperoxides with the use of hydrogens provided by thioredoxin, thioredoxin reductase, and NADPH. This protein is the first peroxidase to be identified that uses thioredoxin as the immediate hydrogen donor and is thus named thioredoxin peroxidase (TPx). TPx exists as a dimer of identical 25-kDa subunits that contain 2 cysteine residues, Cys47 and Cys170. Cys47-SH appears to be the site of oxidation by peroxides, and the oxidized Cys47 probably reacts with Cys170-SH of the other subunit to form an intermolecular disulfide. Mutant TPx proteins lacking either Cys47 or Cys170, therefore, do not exhibit thioredoxin-coupled peroxidase activity. The TPx disulfide is specifically reduced by thioredoxin, but can also be reduced (less effectively) by a small molecular size thiol. The Saccharomyces cerevisiae thioredoxin reductase gene was also cloned and sequenced, and the deduced amino acid sequence was shown to be 51% identical with that of the Escherichia coli enzyme.

In the presence of O2 and an electron donor such as ascorbate or a thiol compound (RSH), iron generates reactive oxygen species that include O2•-, H2O2, and HO• (1, 2). It is generally believed that the following reactions take place:

Fe3+ + ascorbate → Fe2+ + semidehydroascorbate radical (Reaction 1)

Fe3+ + RSH → Fe2+ + RS• + H+ (Reaction 2)

Fe2+ + O2 → Fe3+ + O2•- (Reaction 3)

O2•-, H2O2, and OH• are thus generated by the sequential Reactions 3, 4, and 5. Hydroxyl radicals (OH•) are powerful oxidants and inflict damage on lipids, proteins, and nucleic acids. Although the semidehydroascorbate radical, the other product of Reaction 1, is not particularly reactive and undergoes disproportionation to ascorbate and dehydroascorbate or reduction by glutathione (GSH), the thiol radical (RS•) produced in Reaction 2 is reactive and can be further converted to the sulfur-containing radicals RSOO• and RSSR•- (3, 4). We have previously purified a 25-kDa enzyme from yeast (4) and rat brain (5) that prevents damage induced by the thiol oxidation system but not that induced by the ascorbate oxidation system, despite the fact that the degree of oxidative stress is similar for the two systems as judged by the comparable extent of induced inactivation of glutamine synthetase. We postulated that the 25-kDa enzyme eliminates reactive sulfur species such as RS•, RSSR•-, or RSOO•, providing specificity to the thiol-containing system, and named this protein thiol-specific antioxidant (TSA) (4-7). However, several lines of evidence described in this report suggest that TSA is not an appropriate name. Yeast and rat genes that encode the 25-kDa protein have been cloned and sequenced (6, 7).
The deduced amino acid sequences show no homology to conventional antioxidant enzymes, including superoxide dismutases (yeast Cu,Zn-superoxide dismutase, yeast Mn-superoxide dismutase), catalases (yeast catalase A, yeast catalase T), and peroxidases (yeast cytochrome c peroxidase, mouse glutathione peroxidase, and pig phospholipid hydroperoxide glutathione peroxidase) (7). Salmonella typhimurium alkyl hydroperoxide reductase has been shown to consist of 22-kDa AhpC and 57-kDa AhpF, an FAD-containing NAD(P)H dehydrogenase (8, 9). The two 25-kDa protein sequences are 40% identical with AhpC. A search of data bases also revealed 26 additional protein sequences that are homologous to the 25-kDa protein and AhpC (6). The homologous proteins, except for AhpC, are not associated with known biochemical functions and may represent a new, widely distributed family of antioxidants. Alignment of the amino acid sequences of the antioxidant family members revealed 2 highly conserved cysteine residues, corresponding to Cys47 and Cys170 in the yeast 25-kDa protein. (The abbreviations used are: TSA, thiol-specific antioxidant; AhpC, a component of alkyl hydroperoxide reductase; Trx, thioredoxin; TR, Trx reductase; DTT, dithiothreitol; PAGE, polyacrylamide gel electrophoresis; TPx, thioredoxin peroxidase; kb, kilobase(s). AhpC and AhpF were previously referred to as C22 and F52, respectively, in Ref. 8.) The more amino-terminal cysteine is conserved in all family members, whereas the more carboxyl-terminal cysteine is present in most but not all members (6). The oxidized form of the 25-kDa protein exists mainly in a dimeric form linked by two disulfide bonds between Cys47 and Cys170. The 25-kDa protein does not contain any obvious redox cofactor, and the cysteine residues appear to constitute the site of oxidation (10). The reduced form of AhpC converts alkyl hydroperoxides to the corresponding alcohols with concomitant oxidation of the two sulfhydryls of AhpC to a disulfide bond (8). The regeneration of AhpC sulfhydryls is achieved by AhpF, which transfers reducing equivalents from NAD(P)H to the disulfide of AhpC. The fact that the 25-kDa protein is homologous to AhpC suggests that it may also act on peroxides, and the reduction of the 25-kDa protein disulfide may be achieved by an enzyme with a function similar to that of AhpF. Thus, the 25-kDa protein may possibly function as an antioxidant against both the ascorbate and thiol oxidation systems, and the previously observed specificity for the thiol oxidation system may be attributable to the possibility that thiols, but not ascorbate, are able to reduce the 25-kDa protein disulfide. We now describe the purification of two protein components from yeast that can reduce the 25-kDa protein disulfide at the expense of NADPH and support the antioxidant activity of the 25-kDa protein against the ascorbate oxidation system. The two protein components are shown to be thioredoxin (Trx) and thioredoxin reductase (TR). In the presence of Trx, TR, and NADPH, the 25-kDa protein reduced H2O2 and alkyl hydroperoxide. We also describe, for the first time, the cloning and sequencing of the yeast TR gene.

EXPERIMENTAL PROCEDURES

Assay buffers were 50 mM Hepes-NaOH (pH 7.0) and treated with Chelex 100 (Bio-Rad). Glutamine synthetase was purified from Escherichia coli as described (11). A Saccharomyces cerevisiae strain that is not able to produce the 25-kDa antioxidant has been described (7).
Cysteine residues 47 and 170 of the yeast 25-kDa protein were individually replaced by serine, and recombinant proteins (RWT (wild type), RC47S, and RC170S) were expressed in and purified from E. coli as described (10).

Antioxidant Activity Assay—Glutamine synthetase was subjected to inactivation by the ascorbate oxidation system, and the ability of column fractions to protect glutamine synthetase from the oxidative insult was measured in the presence of the 25-kDa protein and NADPH. Glutamine synthetase inactivation was performed in a 50-µl reaction mixture containing 2 µg of glutamine synthetase, 2 µg of 25-kDa protein, 0.4 mM NADPH, 10 mM ascorbate, 12.5 µM FeCl3, 50 mM Hepes-NaOH (pH 7.0), and a portion of the column chromatography fractions. After 10 min at 30 °C, the residual glutamine synthetase activity was measured by adding 2 ml of a γ-glutamyltransferase assay mixture as described (4).

Identification and Purification of Protein Components That Support the NADPH-dependent Antioxidant Activity of the 25-kDa Protein against the Ascorbate Oxidation System—Frozen S. cerevisiae BJ926 cells (800 g) were suspended in 2 liters of deionized water and centrifuged at 5000 × g for 10 min. The cell pellet was resuspended with 3 liters of 50 mM Hepes-NaOH (pH 7.0) containing 2 mM phenylmethylsulfonyl fluoride, aprotinin (5 µg/ml), and leupeptin (1 µg/ml). Cells were lysed by 6 passes through a Laboratory Homogenizer (model 15 M; Gaulin, Wilmington, MA) at 9000 p.s.i., and the cell extract was centrifuged at 9,000 × g for 30 min. Polyethylene glycol was added to the supernatant to a final concentration of 8%, and the mixture was then stirred for 30 min at 4 °C. The precipitate was collected by centrifugation at 5,000 × g for 30 min and resuspended in 600 ml of 20 mM Hepes-NaOH (pH 7.0). Insoluble material was removed by centrifugation at 45,000 × g for 30 min. The resulting supernatant was applied to an AF-red Toyopearl dye-affinity column (5 × 12 cm) that had been equilibrated with 20 mM Hepes-NaOH (pH 7.0). The column was washed with 800 ml of equilibration buffer, and proteins were eluted with a linear NaCl gradient from 0 to 2 M in 2 liters of equilibration buffer at a flow rate of 5 ml/min (Fig. 3A, a). Fractions of 16 ml were collected, and 10 µl of each fraction were assayed. Peak fractions (fractions 131 to 138) were pooled, diluted with 20 mM Tris-HCl (pH 7.5), and concentrated in an Amicon concentrator. One-fourth (5 ml) of the concentrated sample was applied to a Mono Q HR 10/10 column that had been equilibrated with 20 mM Tris-HCl (pH 7.5). Proteins were eluted at a flow rate of 2 ml/min with a linear NaCl gradient from 0 to 400 mM for 40 min (Fig. 3A, b). Fractions of 2 ml were collected. Assay of 10 µl of each fraction did not reveal any antioxidant activity. However, when the assay was performed in the presence of 5 µl of the flavoprotein-containing peak centered at 27 min (peak I, Fig. 3A, b), glutamine synthetase-protecting activity was apparent in a peak centered at 17 min. Conversely, when the assay was performed in the presence of 5 µl of the 17-min fraction, a peak of antioxidant was apparent that coincided with the flavoprotein peak I. The Mono Q column yielded another flavoprotein-containing peak (peak II) centered at 19 min. However, with the addition of the peak II flavoprotein to the assay mixture, none of the Mono Q fractions protected glutamine synthetase.
These results suggest that the 25-kDa protein-supporting activity consisted of two protein components that eluted in the 17- and 27-min peaks. The Mono Q chromatographic step was repeated with the three remaining portions of the concentrated sample from the dye-affinity column. One-third (3 ml) of the concentrated 27-min peak sample from the Mono Q column was applied at a flow rate of 1 ml/min to a TSK heparin-5PW high performance liquid chromatography column (7.5 × 75 mm) that had been equilibrated with 20 mM Hepes-NaOH (pH 7.0). Proteins were eluted at a flow rate of 1 ml/min by a NaCl gradient from 0 to 400 mM for 40 min (Fig. 3B, a). Fractions of 1 ml were collected and assayed for the glutamine synthetase-protecting activity in the presence of 2 µl of the pooled 17-min peak fractions from the Mono Q column. This chromatographic step was repeated with the remaining two portions of the concentrated 27-min sample. Peak fractions (fractions 28 to 30) were combined and concentrated to 0.7 ml. One-half (0.35 ml) of the concentrated sample from the TSK heparin-5PW column was applied to a TSK G3000SW column (7.5 × 600 mm) that had been equilibrated with 50 mM sodium phosphate (pH 7.0) containing 100 mM NaCl. Proteins were eluted at a flow rate of 0.5 ml/min with the same buffer. Fractions of 0.5 ml were collected and assayed in the presence of 2 µl of the pooled 17-min peak fractions from the Mono Q column. Activity eluted as a single symmetrical peak (Fig. 3B, b). This gel filtration chromatography step was repeated with the remaining half of the concentrated sample from the heparin column. Peak fractions (fractions 21 to 23) were combined, diluted with 20 mM Hepes-NaOH (pH 7.0), concentrated, divided into portions, and stored at −70 °C. One-fourth of the concentrated 17-min peak sample from the Mono Q column was applied to a TSK G3000SW column (7.5 × 600 mm) that had been equilibrated with 50 mM sodium phosphate (pH 7.0) containing 100 mM NaCl. Proteins were eluted at a flow rate of 0.5 ml/min with the same buffer. Fractions of 0.5 ml were collected and assayed in the presence of 2 µl of the pooled 27-min peak fractions from the Mono Q column. The activity peak coincided with the major protein peak (Fig. 3C, a). This chromatographic step was repeated with the remaining three portions of the concentrated 17-min peak sample from the Mono Q column, and the peak fractions were pooled and concentrated. One-half of the concentrated sample from the TSK G3000SW gel filtration column was applied to a Vydac C18 column (4.6 × 250 mm) that had been equilibrated with 10 mM sodium phosphate (pH 7.0). Proteins were eluted with a linear gradient of 0 to 30% acetonitrile in the same buffer for 10 min, followed by a second linear acetonitrile gradient of 30 to 70% for 40 min, at a flow rate of 1 ml/min. Fractions of 1 ml were collected, and, after removal of acetonitrile by evaporation, the two peaks of glutamine synthetase-protecting activity (fraction 24 and fractions 27 to 29) (Fig. 3C, b) were pooled separately and stored at −70 °C.

RESULTS

The 25-kDa Protein Is Reduced by DTT but Not by Ascorbate—We compared the abilities of ascorbate and DTT to reduce the oxidized form of the 25-kDa protein by taking advantage of the fact that the oxidized form exists as a dimer and the reduced form as a monomer under denaturing conditions. The dimer was converted to monomer by 10 mM DTT but not by 10 mM ascorbate (Fig. 1).
This result supports the possibility that the 25-kDa protein could not prevent the damage induced by the ascorbate oxidation system because its sulfhydryl groups could not be regenerated.

Identification of a 25-kDa Protein-reducing Activity in Yeast Extract—We undertook a search for an enzyme that would support the antioxidant activity of the 25-kDa protein against the ascorbate oxidation system. Protection of glutamine synthetase against the ascorbate oxidation system was measured in the presence of various combinations of the 25-kDa protein, yeast extract, and NADPH or NADH. The combination of the 25-kDa protein, yeast extract, and NADPH provided the greatest protection (Fig. 2A). NADH was ineffective. The combination of crude extract and NADPH afforded a similar extent of protection as the combination of the 25-kDa protein, crude extract, and NADPH (Fig. 2B), probably because the yeast extract contained a sufficient amount of the 25-kDa protein (the 25-kDa protein is an abundant protein, constituting ~0.3 to 0.7% of total soluble protein in yeast (5)). This conclusion was supported by the observation that an extract from a yeast mutant that cannot produce the 25-kDa protein did not protect glutamine synthetase in the presence of NADPH, whereas the mutant extract supplemented with the 25-kDa protein provided protection (Fig. 2B).

The 25-kDa Protein-reducing Activity Comprises Two Components—Purification of the putative 25-kDa protein-reducing enzyme was attempted from yeast extract. Column fractions were monitored for their ability to protect glutamine synthetase against the ascorbate oxidation system in the presence of the 25-kDa protein and NADPH (Fig. 3). Chromatography on a Toyopearl dye-affinity column yielded a single peak of protection activity (Fig. 3A, a). After subsequent chromatography on a Mono Q column, however, activity was not detectable in any fraction. The observation that the pool of all eluted proteins exhibited protection activity suggested that the 25-kDa protein-reducing activity is likely attributable to more than one component. Given that NADPH-dependent redox enzymes often contain FAD as a prosthetic group, we identified, by absorbance at 450 nm, peaks I and II (centered at 27 and 19 min, respectively) as potential flavoprotein-containing fractions from the Mono Q column (Fig. 3A, b). We then re-evaluated the protection activity of each of the Mono Q fractions after supplementation with either peak I or peak II. A protection activity peak centered at 17 min was detected when fractions were supplemented with flavoprotein peak I. Supplementation of each of the Mono Q fractions with the 17-min fraction yielded an activity peak centered at 27 min. These results suggested that the 25-kDa protein-regenerating activity was attributable to two components, one of which eluted at 17 min and the other, a flavoprotein, at 27 min from the Mono Q column.

[Fig. 2 legend, fragment: inactivation was initiated by adding 10 µl of freshly prepared iron and ascorbate mixture; after 10 min at 30 °C, the residual glutamine synthetase activity was measured by adding 2 ml of γ-glutamyltransferase assay mixture as described (4, 11). B, glutamine synthetase inactivation and assay of the residual enzyme activity were performed as in A, except that a crude extract of a 25-kDa protein null mutant was compared with the wild-type extract in the inactivation reaction.]
Purification and Identification of the Flavoprotein Component—The flavoprotein that eluted in the 27-min peak from the Mono Q column was purified to homogeneity by two sequential chromatographic steps on a TSK heparin-5PW column and a TSK G3000SW gel filtration column (Fig. 3B). The molecular mass of the active protein was estimated as 66 kDa from the gel filtration chromatography (data not shown). However, the purified protein yielded a single band with an apparent molecular mass of 34 kDa on SDS-polyacrylamide gel electrophoresis (PAGE) (not shown), suggesting it exists as a dimer of two identical subunits under nondenaturing conditions. The ultraviolet-visible absorbance spectrum of the purified protein revealed peaks at 273, 379, and 457 nm and a shoulder at 480 nm (not shown), which are characteristic of flavoproteins. The sequence of the amino-terminal 15 residues and the partial sequences of five tryptic peptides of the purified flavoprotein were determined: VXNKVXIIGSGPAAH (amino terminus), VDLSSKPF (peptide 1), MHLPGEETXWQK (peptide 2), YGSK (peptide 3), KNXETD (peptide 4), and QA(Y or A)GX (peptide 5), where X represents an unidentified amino acid. A search of the GenBank data base revealed that the amino-terminal sequence was homologous to the amino-terminal sequence of E. coli TR. Of 13 residues identified, 8 were identical and 3 were conservative substitutions. In addition, E. coli TR contains regions that showed homology to peptides 1, 2, and 5 and is known to be a dimer of 35-kDa subunits, each of which contains one tightly bound FAD molecule. TR purified from bakers' yeast also consists of two 38-kDa subunits (12, 13). Furthermore, TR activity associated with the purified flavoprotein was directly demonstrated with the use of an assay that involves the conversion of 5,5'-dithiobis(2-nitrobenzoic acid) by reduced Trx to a colored product as described (12).

Purification and Identification of the Second Component of the 25-kDa Protein-reducing Activity—The second component required for the 25-kDa protein-reducing activity was purified to homogeneity from the 17-min peak fractions of the Mono Q column by successive chromatographic steps on a TSK G3000SW gel filtration column and a Vydac C18 reversed-phase column (Fig. 3C). The reversed-phase chromatography yielded two protection activity peaks, 1 and 2, each of which contained a single protein of 12.6 kDa and 12.4 kDa, respectively, as estimated by SDS-PAGE (Fig. 4).

[Fig. 3C legend, fragment: the activity of each fraction was measured in the presence of the pooled 27-min peak fractions from the Mono Q step; the activity eluted in two peaks, peak 1 and peak 2, from the Vydac C18 column.]

Cloning and Sequencing of Yeast Thioredoxin Reductase Gene—Although our results indicated that the flavoprotein component of the 25-kDa protein-reducing activity was likely TR, the complete amino acid sequence of yeast TR was not known and many other FAD-containing disulfide oxidoreductases share similar sequences. We therefore prepared rabbit antibodies to purified yeast TR and used them to screen an S. cerevisiae genomic DNA library in an attempt to clone and sequence the yeast TR gene. An immunologically positive clone with a 2.6-kb insert was isolated. The insert contained three internal AccI sites, cleavage at which generated four fragments of 0.5, 1.1, 0.2, and 0.8 kb.
Sequencing revealed that the 1.1-kb and 0.2-kb fragments together yielded an open reading frame that contained sequences encoding the amino-terminal 15 residues and the five tryptic peptides of the purified flavoprotein (Fig. 5). The open reading frame encodes a polypeptide of 319 amino acids with a calculated molecular mass of 33,908 Da. The amino acid sequences of E. coli and S. cerevisiae TR molecules were compared by a dot matrix plot (not shown). The resulting diagonal line without large gaps indicated that most regions of the two sequences are well conserved. Alignment revealed 51% identity and 69% similarity between E. coli and S. cerevisiae TR. In addition, yeast TR, like the E. coli enzyme (17), has a CXXC motif for the redox-active cysteines and consensus sequences for the binding of FAD and NAD(P)H. These results confirmed that the flavoprotein we purified from yeast extract is TR. The calculated isoelectric point of yeast TR is 5.36, and the extinction coefficient at 278 nm is 23,380 M−1 cm−1, which is equivalent to 0.69 absorbance unit mg−1 ml. Previously, a partial reading frame capable of encoding the carboxyl-terminal 59 amino acids of TR was detected adjacent to the S. cerevisiae TRP4 gene, which encodes the tryptophan biosynthetic enzyme anthranilate phosphoribosyltransferase (18).

Comparison of the 25-kDa Protein-reducing Activities of DTT and the Thioredoxin System—Incubation with the Trx system (Trx, TR, and NADPH) caused immediate reduction of the dimeric form of the 25-kDa protein (data not shown), similar to the conversion induced by DTT (Fig. 1). The 25-kDa protein-supporting functions of DTT and the Trx system were compared by measuring their abilities to prevent glutamine synthetase inactivation in the presence of various concentrations of the 25-kDa protein (Fig. 6). Glutamine synthetase was inactivated by the ascorbate oxidation system for the evaluation of the Trx system and by the DTT oxidation system for the evaluation of DTT. The concentration of Fe3+ was adjusted such that the two systems provided comparable oxidizing equivalents (4). These results suggest that the Trx system, not a thiol such as glutathione, is likely to reduce oxidized 25-kDa protein in cells. The extent of glutamine synthetase protection increased in a saturable, dose-dependent manner with 25-kDa protein (Fig. 6), Trx (not shown), and TR (not shown) concentration. Trx I was slightly more efficient as a hydrogen donor than Trx II.

Peroxidase Activity of the 25-kDa Protein—We examined the 25-kDa protein for peroxidase activity toward H2O2 by directly following the decrease in H2O2 in the presence of the Trx system (Fig. 7). The rate of the H2O2 removal was fast initially and then decreased gradually. Peroxidase activity toward H2O2 or t-butyl hydroperoxide in the presence of the Trx system was also monitored indirectly by following the decrease in A340 attributable to the oxidation of NADPH (Fig. 8, A and B). The rate of the peroxidase-dependent NADPH oxidation decreased with time, and the decrease was more rapid as the substrate concentration increased. For equivalent concentrations of peroxide, the decrease in rate was more rapid with t-butyl hydroperoxide than with H2O2; the reaction essentially stopped several minutes after the addition of millimolar concentrations of t-butyl hydroperoxide (Fig. 8B). Cumene hydroperoxide also elicited NADPH oxidation, and the oxidation rate decreased rapidly with time as for t-butyl hydroperoxide (data not shown).
The decrease in rate was not attributable to exhaustion of substrate or to product inhibition by NADP+; NADP+ competes poorly with NADPH for TR (the Km for NADPH is 1.2 µM and the Ki for NADP+ is 15 µM (19)). The addition of NADP+ at a concentration similar to that of NADPH did not have a marked effect on the NADPH oxidation rate (data not shown). The decrease in rate was a first-order process, as judged from analysis (not shown) of the time course shown in Fig. 8A, and appeared attributable to inactivation of the 25-kDa protein by peroxides. The markedly decreased NADPH oxidation rate achieved after incubation with 5 mM H2O2 for 12 min was increased by replenishing the supply of 25-kDa protein (Fig. 9). The NADPH oxidation was, however, almost linear with time when incubated with the ascorbate oxidation system (Fig. 8C). The electron flow from NADPH to peroxide required all three protein components, the 25-kDa protein, Trx, and TR; NADPH oxidation in the absence of either the 25-kDa protein or Trx is negligible compared with that observed in the presence of all three proteins (Fig. 10). Thus, Trx, despite its redox-sensitive cysteines, cannot reduce H2O2 directly, and the disulfide of the 25-kDa protein cannot be reduced directly by TR.

Cys47 and Cys170 Are Essential for the Peroxidase Activity of the 25-kDa Protein—We previously investigated the roles of Cys47 and Cys170 in the 25-kDa protein by replacing them individually with serine, expressing the mutant (RC47S and RC170S, respectively) and wild-type (RWT) proteins in E. coli, and evaluating the ability of each recombinant protein to protect glutamine synthetase against damage by the DTT oxidation system. RC170S was as protective as RWT, whereas RC47S was completely ineffective (10). In contrast, both RC170S and RC47S failed to protect glutamine synthetase from the ascorbate oxidation system when the Trx system served as hydrogen donor (Fig. 11A). Direct assay of peroxidase activity also revealed the inactivity of RC170S and RC47S (Fig. 11B), suggesting the indispensability of both cysteines for peroxidase activity.

DISCUSSION

In our attempt to identify the physiological hydrogen donor that supports the catalytic activity of the 25-kDa yeast antioxidant protein, we purified two protein components that together mediate the flow of electrons from NADPH to the oxidized form of the 25-kDa protein. One component was identified as Trx I or Trx II, and the other as TR. The Trx system (Trx, TR, NADPH) was a more potent hydrogen donor for the 25-kDa protein than DTT on the basis of ability to support the antioxidant activity of the 25-kDa protein (Fig. 6). In addition, the combination of Trx and TR was the major electron carrier detectable from yeast extract when NADPH and NADH were used as the ultimate electron donor (Figs. 2A and 3A). These results indicate that our previous assumption (4, 5, 7) was incorrect and that the physiological hydrogen donor for the catalytic function of the 25-kDa protein is not a thiol like glutathione but the Trx system. Our current data suggest that, in the presence of the Trx system, the 25-kDa protein reduces peroxides with Trx as the immediate hydrogen donor and protects glutamine synthetase against the ascorbate oxidation system by eliminating H2O2 (Figs. 7 and 8). Thus, we propose to rename the 25-kDa protein thioredoxin peroxidase (TPx).
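As a numerical aside, the first-order loss of activity inferred from Fig. 8A can be sketched in a few lines of Python. All rate constants below are hypothetical and chosen only for illustration; the point is that an exponentially decaying pool of active TPx reproduces the observed behavior, namely a rate that falls log-linearly while the total NADPH oxidized plateaus, mimicking the apparent halt of the reaction after several minutes.

```python
# Minimal sketch (hypothetical constants) of first-order enzyme inactivation:
# active TPx decays as E(t) = E0 * exp(-k_inact * t), so the NADPH oxidation
# rate slows progressively even with substrate in excess.
import numpy as np

E0 = 1.0        # initial active TPx, arbitrary units -- assumption
k_cat = 0.5     # NADPH oxidized per unit TPx per min -- assumption
k_inact = 0.3   # first-order inactivation rate, 1/min -- assumption
t = np.linspace(0.0, 12.0, 121)   # minutes

rate = k_cat * E0 * np.exp(-k_inact * t)                  # instantaneous rate
total = (k_cat * E0 / k_inact) * (1.0 - np.exp(-k_inact * t))  # its integral

# ln(rate) is linear in t (the first-order signature); total plateaus.
for ti, r, tot in zip(t[::30], rate[::30], total[::30]):
    print(f"t = {ti:4.1f} min  rate = {r:.3f}  total NADPH oxidized = {tot:.3f}")
```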
It is now clear that the apparent thiol specificity observed previously for TPx, which gave rise to the name TSA, is attributable to the fact that the TPx disulfide can be reduced by a thiol but not by ascorbate (Fig. 1). The designation TSA also appeared consistent at that time with results obtained by electron paramagnetic resonance (EPR) spectroscopy. When sulfur-containing radicals were generated from DTT by the action of horseradish peroxidase and H2O2 in the presence of the spin-trapping reagent 5,5-dimethyl-1-pyrroline-N-oxide, TPx inhibited the formation of 5,5-dimethyl-1-pyrroline-N-oxide-sulfur radical adducts (20). Having failed to detect H2O2-removing activity associated with TPx in the presence of DTT, we attributed the inhibition of the formation of the 5,5-dimethyl-1-pyrroline-N-oxide adducts to the catalytic elimination of sulfur-containing radicals by TPx. Now, with the detection of peroxidase activity of TPx, the EPR data can be reinterpreted and attributed to the reduction of H2O2 by TPx. We cannot, however, eliminate the possibility that direct reduction of sulfur-containing radicals by TPx sulfhydryls was also partly responsible for the EPR results.

The sequence of events that are likely to occur during the flow of electrons from NADPH to ROOH is summarized in Fig. 12. TR consists of two identical subunits linked by noncovalent bonds. Each subunit has one tightly bound FAD molecule and a redox-sensitive disulfide in its active center. The sequence surrounding this disulfide is CAVC, corresponding to residues 141 to 144 of yeast TR. Trx also possesses a redox-sensitive disulfide in the CXXC configuration. TR is highly specific for NADPH, and it is believed that electrons flow from NADPH to the bound FAD, from FAD to the redox-sensitive disulfide in TR, and then to the redox-sensitive disulfide in Trx (19). The reduced Trx then serves as a protein disulfide reductase of TPx. This reaction is reminiscent of the thiol-disulfide exchange observed between Trx and many disulfide proteins including insulin (19). The reduced TPx finally provides 2 hydrogens to reduce a peroxide molecule. TPx contains 2 cysteine residues, which, in contrast to Trx and TR, are not nearby and do not appear to form an intramolecular disulfide. TPx exists predominantly as a dimer linked by two identical disulfide bonds between Cys47 and Cys170 (10). The TPx mutant lacking Cys47, RC47S, is inactive regardless of whether the reducing equivalents are provided by DTT or by the Trx system, whereas RC170S is active in the presence of DTT and inactive in the presence of the Trx system (Fig. 11). These results suggest the model shown in Fig. 13A. In this scheme, Cys47-SH is the primary site of substrate peroxide reduction and is directly oxidized by ROOH to yield ROH and cysteine sulfenic acid (Cys47-SOH). The Cys47-SOH then reacts with Cys170-SH of the other subunit to produce H2O and an intermolecular disulfide. Cysteine sulfenic acid was previously proposed as a stable intermediate of the redox-sensitive half-cystine in oxidized NADH peroxidase, its stability being mainly attributable to the absence of nearby protein thiols (21). The sulfenic acid is readily oxidized to sulfinic acid (Cys-SO2H) by peroxides, a reaction that has been suggested to be responsible for the irreversible inactivation of NADH peroxidase by H2O2 (21).
The Cys47-SOH of TPx may also be further oxidized if the reaction of Cys47-SOH and Cys170-SH requires significant distortion of the protein backbone and is thus sufficiently slow to allow the encounter of Cys47-SOH with peroxide molecules; this scenario may be especially relevant in the presence of high concentrations of peroxides. Such inactivation of TPx by substrates is likely responsible for our previous failure to detect peroxidase activity of TPx (4, 20). A slight reduction of H2O2 by a high concentration of TPx (1 mg/ml) and DTT has, however, been observed (22). Although there is no tangible evidence for the sulfenic acid intermediate, the reaction scheme shown in Fig. 13A is the simplest mechanism that is compatible with the observations that both Cys47 and Cys170 are essential for TPx activity, that the oxidized TPx is a dimer containing disulfides between Cys47 and Cys170, and that TPx is highly susceptible to inactivation by peroxides. It is also possible that the reaction mechanism involves an intermediate with sulfur bonded to a nearby nitrogen, analogous to the reaction mechanism proposed for glutathione peroxidase (see below). In the model shown in Fig. 13A, it is possible that a small diffusible thiol molecule could replace Cys170-SH in the formation of a disulfide with Cys47-SH. Such a scenario is shown in Fig. 13B and would explain why the TPx mutant RC170S protects glutamine synthetase against the thiol oxidation system (10). An analogous mechanism has been proposed for the selenoprotein glutathione peroxidase, which contains a redox-active half-selenocystine and no other sulfur or second selenium in proximity (23); on reaction with a peroxide molecule, the Cys-SeH is oxidized to an intermediate that was proposed to be selenenic acid (Cys-SeOH) or selenium bonded to nitrogen (Cys-Se-N-) (23, 24) and which is subsequently reduced by 2 molecules of GSH, first to Cys-Se-S-G and then to Cys-SeH and GSSG. The schemes shown in Fig. 13 require maintenance of the dimeric arrangement of TPx throughout the catalytic cycle, even in the absence of the Cys47-Cys170 disulfide linkage. Indeed, wild-type TPx, RC47S, and RC170S all exist as dimers or higher oligomers under reducing, nondenaturing conditions, as judged from the PAGE performed in the presence of 2-mercaptoethanol (not shown). In the presence of SDS and 2-mercaptoethanol, however, all three TPx molecules migrate as monomers (10). However, we cannot exclude the possibility that the dimeric TPx is an artifact of purification and that TPx operates as a monomer during catalysis, forming an intramolecular disulfide.

TPx and AhpC are similar in size, exhibit 40% identity in amino acid sequence, contain 2 conserved cysteine residues that align perfectly, and reduce peroxides ultimately at the expense of NAD(P)H. For all their similarities, TPx and AhpC differ in several characteristics: AhpC is rapidly inactivated by H2O2 (8), whereas TPx is more sensitive to alkyl hydroperoxides (Fig. 8); and regeneration of reduced AhpC is achieved by a single protein (AhpF), whereas the reduction of the TPx disulfide requires both Trx and TR. AhpF has been identified in prokaryotes but not in eukaryotes. Yeast TPx cannot be reduced by S. typhimurium AhpF. AhpF (from S. typhimurium, 521 amino acids) is significantly larger than TR (from yeast, 318 amino acids), but the amino acid sequence alignment reveals that the carboxyl-terminal 311 residues of AhpF are 33% identical with TR.
The two homologous regions contain the consensus binding sites for FAD and NAD(P)H as well as the redox-sensitive CXXC sequence, all of which align perfectly between the two protein sequences (25, 26). AhpF contains two CXXC motifs: one (Cys129 and Cys132) in the amino-terminal region that does not have a corresponding region in TR, and the other (Cys345 and Cys348) in the carboxyl-terminal region that aligns with the only CXXC (Cys141 and Cys144) motif of TR. These observations suggest that the amino-terminal 210 residues of AhpF may serve as a hydrogen carrier from Cys345-SH and Cys348-SH to AhpC, analogous to Trx, which carries hydrogens from Cys141-SH and Cys144-SH of TR to TPx. We have identified 26 proteins that exhibit homology to TPx and AhpC (6). These proteins are present in organisms from all kingdoms and have not been associated with known biochemical reactions (6). The similarities among these proteins extend over the entire sequence, and 2 cysteines, which correspond to Cys47 and Cys170 in yeast TPx, are highly conserved. The amino-terminal cysteine is conserved in all family members, and the carboxyl-terminal cysteine in all except six proteins. It is, therefore, reasonable to speculate that these additional 26 proteins are also peroxidases, with the conserved amino-terminal cysteine being the primary redox catalytic site. We propose to name this family of peroxidases the peroxiredoxin family. The diversity in the amino acid sequences of the family members probably reflects several different mechanisms involved in the regeneration of reduced peroxiredoxin. The members that contain the two conserved cysteines may be reduced by a mechanism that involves either a single protein, like AhpF, or two proteins, like Trx and TR. On the other hand, the members that contain only 1 conserved cysteine might require the participation of a small thiol like GSH, as in the case of glutathione peroxidase and RC170S. The peroxiredoxin family thus likely represents a widely distributed class of enzymes that directly reduce H2O2 and various alkyl hydroperoxides with hydrogens derived from NAD(P)H via various routes. TPx, previously called TSA, is a member of the peroxiredoxin family whose immediate hydrogen donor is Trx. To our knowledge, TPx is the first peroxidase to be identified that uses Trx as hydrogen donor. Because TPx is ubiquitous and abundant in mammalian tissues, it, together with glutathione peroxidase, would provide a major pathway of H2O2 elimination. The discovery of TPx, therefore, adds a previously unidentified antioxidant function to the thioredoxin system.
2018-04-03T02:21:21.261Z
1994-11-04T00:00:00.000
{ "year": 1994, "sha1": "f79cfd4573163bcdd2443436306235563ad4e44b", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/s0021-9258(18)47038-x", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "91c57e681396b3491201bcbcd606dd2608d5075b", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
236402804
pes2o/s2orc
v3-fos-license
The Linkage Mechanism between Environment-Related Rules and Environment-Related Efficiency of Industries in China: An Analysis Based on the Adaptive Semi-Parametric Panel Model

This study employed the adaptive semi-parametric model to determine the effects of environment-related rules (environment-related rules refer to laws and regulations in relation to environmental protection) on environment-related efficiency (environment-related efficiency refers to the environmental efficiency of a wide range of industries). In addition, the threshold regression model was employed to determine the industry threshold effect of environment-related rules on environment-related efficiency. The following conclusions were primarily drawn: (1) A U-shaped curve relationship was identified between the effect of environment-related rules and the environment-related efficiency of the broader national industry; (2) Environment-related rules exert a threshold effect on environment-related efficiency: when the effect of environment-related rules falls below the lower limit of the optimal interval, environment-related rule policies cannot play their corresponding role, while when the effect of environment-related rules exceeds the upper limit of the optimal interval, environment-related rules exert excessively strong effects, which leads to the unsustainable development of the industry and distorts the industry's development. The government is required to roll out different environment-related rule policies in accordance with industry differences and the development stages of the respective industries, fully exploit such environment-related rule policies for industries and technologies, optimize the environment-related rules system, and harmoniously advance industries, the economy, and the environment. Given the empirical results, it is critical to enhance the effect exerted by environment-related rules in the mining and manufacturing industries, elevate their technical level, and develop a positive relationship between the effect exerted by environment-related rules and environment-related efficiency enhancements. While China's current environment-related rule policy imposes no discrimination between pollution-intensive industries and cleaner production industries, these industries should be treated differently in the days ahead.

Introduction

In the last four decades, China has achieved major economic development, which has been accompanied by serious environmental pollution. When the central government rolls out environment-related rule policies oriented toward overall social welfare, enterprises with different endowment factors (e.g., technology and resources) face different regulatory constraints. As a result, the degree of impact on these enterprises differs, and the environment-related efficiency of various industries is affected accordingly. From a short-term perspective, the impact exerted by environment-related rules on environment-related efficiency is primarily a "cost effect": environment-related rules increase the cost expenditure of enterprises, affect investment in technological innovation, reduce profits, and lower the production efficiency and performance level of enterprises or industries. In the long run, environment-related rules will exert "innovation compensation" effects and lead to increasingly reasonable future environment-related rules.
In accordance with the technology compensation effect, several studies have empirically found that the cost of pollution control caused by environment-related rules could lead to decreases in industrial productivity to varying degrees. Gray and Shadbegian [5] analyzed the correlation between environment-related rules and investment in innovation under the "weak" Porter Hypothesis framework; their results suggest that environment-related rules are not positively but negatively correlated with investment in innovation. Under the framework of the "strong" Porter Hypothesis, Shadbegian and Gray [6] found that environment-related rules could increase enterprises' cost of pollution control while technological innovation cannot fully offset the cost of environment-related rules, thereby resulting in a decrease in the environment-related performance and competitiveness of enterprises. In addition, Rubashkina et al. [7] applied panel data pertaining to 17 European nations and identified no evidence supporting the "strong" Porter Hypothesis. Moreover, the research conclusion of Rexhauser and Rammer [8] indicated that the "strong" Porter Hypothesis is generally invalid, and that the type of environment-related innovation determines the impact of environment-related rules on industrial competitiveness. In response to this questioning of the Porter Hypothesis, its supporters mainly offer four pieces of evidence: (1) the inconsistent and even contradictory empirical conclusions under the theoretical framework of the Porter Hypothesis originate from biased errors caused by index selection and estimation methods [9]; (2) step-by-step regression testing of the Porter Hypothesis causes inconsistencies in empirical conclusions [10]; (3) the primary reason why the empirical results obtained under the framework of the "strong" Porter Hypothesis are controversial is that the impact of environment-related rules on productivity growth varies with nations, regions, and industries, and depends on the technological level of each nation or enterprise [11,12]; (4) it takes time for stricter environment-related rules to stimulate the technological innovations that raise efficiency and lower costs, and the dynamic dimension of the Porter Hypothesis is not fully considered in empirical tests [13]. This viewpoint is also supported by the investment adjustment cost theory [14].

There is also a considerable literature on the influence mechanism between environment-related rules and environment-related efficiency at the industry level. Based on the measurement of China's industrial environment-related productivity, Tu [15] empirically analyzed the relationship between environment-related rules and China's industrial environment-related productivity and demonstrated that there is no obvious negative correlation. Through the evaluation and empirical investigation of the technical efficiency of the provincial electric power industry, Zhang and Xia [16] concluded that there is a "U-shaped" curve relation between environment-related rules and the technical efficiency of the electric power industry. The empirical analysis of 39 industries conducted by Shen [17] revealed a "U-shaped" relationship between environment-related rules and the environment-related efficiency of industries. Moreover, environment-related rules could help improve the environment-related efficiency of clean industries in the current period, while there is no significant short-term effect on pollution-intensive industries.
With the panel data of 30 provinces, municipalities directly under the central government, and autonomous regions from 2000 to 2011, Liu and Ran [18] empirically delved into the influence exerted by environment-related rules on the production technological progress of 17 industries among industrial enterprises. They verified that several industries present a significant "U-shaped" or inverted "U-shaped" relation, while there is no significant relation in other industries. In addition, the effect of environment-related rules on the production technological progress of each industry has no relation to the pollution type of the industry. Albrizio [19] used national panel data from the OECD to empirically analyze the influence exerted by environment-related policy intensity on productivity growth at the industry level. As suggested by the results, strict environment-related policies in nations with higher scientific and technological levels could promote short-term productivity growth at the industry level, while the growth effect decreases with the increase in the distance from the global productivity frontier until it becomes insignificant. By measuring the environment-related efficiency of the whole manufacturing industry and of severe, moderate, and mild-pollution industries in Hebei Province, Zhuang et al. [20] conducted an empirical analysis of the relation between environment-related rules and environment-related efficiency. According to these scholars, environment-related rules exert noticeably positive influences on the environment-related efficiency of the mild-pollution industry, while environment-related rules exert an insignificant positive effect on the environment-related efficiency of the severe and moderate-pollution industries. By building a mathematical model and based on the assumption of regional consistency, Shen [21] tested the nonlinear relation between environment-related rules and technological innovations in China and determined the optimal rule level of environment-related rules. The research indicated a double threshold of the level of economic development: the higher the threshold of economic development crossed, the more significant the promoting effect of environment-related rules on technological innovations will be. With the panel data of 33 subdivided sectors in China from 2004 to 2011 and the threshold regression method, Song [22] empirically investigated the threshold effect of environment-related rules on the R&D double-link efficiency. The empirical conclusion indicated that the threshold effect of environment-related rules on the R&D double-link efficiency had significant heterogeneity. Based on the measurement of the intensity and efficiency of environment-related rules in 30 provinces and cities (excluding Tibet, Hong Kong, Macao, and Taiwan) from 2007 to 2016, Zhou et al. [23] conducted an empirical analysis and confirmed a "U-shaped" relation between the two at the national level and an inverted "U-shaped" relation in the eastern region. Furthermore, there is a threshold effect between environment-related rules and environment-related efficiency. Moreover, some researchers explored the threshold influence exerted by environment-related rules from the perspective of the region [24-26]. In brief, there are some divergences in the demonstration of the Porter Hypothesis in the current studies.
The influence mechanism of environment-related rules on environment-related efficiency, and the relation between the two, vary with data and models. However, there are relatively few studies on whether there is a threshold effect in industrial environment-related rules. Here, the impact mechanism of environment-related rules on environment-related efficiency, the heterogeneity of different industries, and the threshold effect of environment-related rules in different industries are further explored.

The Influence of Environment-Related Rules of Industries on Environment-Related Efficiency

When the government rolls out environment-related rule policies, enterprises generally satisfy the government's requirements by managing pollution, or by improving their technological level and implementing cleaner production measures. However, industries exhibiting different pollution levels face different constraints from environment-related rules. Heavily polluting industries face robust constraints: it is an urgent requirement that they reduce their pollution emissions via pollution control, cleaner production, an elevation of their current technological level, etc. Less polluting industries are subject to fewer constraints, and environment-related rule policies affect them only slightly. Because of the huge differences in the endowment factors and levels of technological development across industries, the influence exerted by different effects of environment-related rules on the environment-related efficiency of various industries differs. Overall, there are three scenarios: (1) if the environment-related rules are in a certain range of intensity, they may significantly enhance the environment-related efficiency of the industry, and a gradual increase in their intensity brings an increase in their marginal effect; (2) if environment-related rules are in a certain range of intensity, they will help increase the environment-related efficiency of the industry, though the degree of improvement declines progressively, i.e., the phenomenon of diminishing marginal effect; (3) if environment-related rules are at a certain level, it may also occur that they hinder the improvement of the environment-related efficiency of industries, i.e., a negative correlation is identified [17,27-29]. Thus, it is considered that environment-related rules exert a "threshold effect" on environment-related efficiency, i.e., there are some threshold values, and if environment-related rules fall into different threshold intervals, the impact of environment-related rules on environment-related efficiency differs. Thus, the following Hypotheses 1 and 2 are proposed.

Hypothesis 1 (H1). A range of industries face different degrees of restriction from environment-related rules, so environment-related rules exert different effects on the environment-related efficiency of various industries.

Hypothesis 2 (H2). Environment-related rules have a "threshold effect" on environment-related efficiency. If environment-related rules fall into different threshold intervals, the marginal effect of environment-related rules on environment-related efficiency varies.
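To make Hypothesis 2 concrete, the sketch below illustrates the grid-search logic behind single-threshold regression (in the general spirit of Hansen-style threshold estimation, not the exact panel specification estimated later in this paper): separate regimes are fitted on either side of each candidate threshold, and the split minimizing the sum of squared residuals is chosen. All data, the threshold value, and the regime slopes are synthetic.

```python
# Minimal single-threshold regression sketch on synthetic data: the marginal
# effect of the rule variable switches sign at an (unknown) threshold, which
# a grid search over candidate split points recovers by minimizing SSR.
import numpy as np

rng = np.random.default_rng(0)
n = 300
reg = rng.uniform(0.2, 1.0, n)                   # effect of rules (like x1)
gamma_true = 0.6                                 # true threshold -- synthetic
slope = np.where(reg <= gamma_true, -0.5, 0.8)   # marginal effect switches
eff = 0.7 + slope * (reg - gamma_true) + rng.normal(0, 0.05, n)  # efficiency

def ssr_at(gamma):
    """Sum of squared residuals of the two-regime OLS split at gamma."""
    ssr = 0.0
    for mask in (reg <= gamma, reg > gamma):
        X = np.column_stack([np.ones(mask.sum()), reg[mask]])
        beta, res, *_ = np.linalg.lstsq(X, eff[mask], rcond=None)
        ssr += float(res[0]) if res.size else float(np.sum((eff[mask] - X @ beta) ** 2))
    return ssr

grid = np.quantile(reg, np.linspace(0.15, 0.85, 71))   # trim the tails
gamma_hat = min(grid, key=ssr_at)
print(f"estimated threshold: {gamma_hat:.3f} (true: {gamma_true})")
```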
Materials and Methods

Given the analysis of the theoretical mechanism in the second section, this study selects three variable factors as dependent variables and seven variable factors as independent variables: environment-related efficiency (y1); environment-related efficiency technology gap (y2); environment-related efficiency improvement potential (y3); effect of environment-related rules (x1); technological progress (x2); FDI dependency (x3); industrial profit margin (x4); capital and labor structure (x5); market concentration (x6); and nationalization rate (x7). Environment-related efficiency is calculated as a dependent variable. The variable factors and indicators are as follows:

1. Environment-related efficiency (y1): With capital and labor as input variables, and with the expected output (GDP) and unexpected output (environmental pollution) as output variables, the SBM model is employed to calculate green environment-related efficiency. In combination with the models and methods of Tone [30,31], this study employs the SBM model with unexpected output to determine environment-related efficiency (a computational sketch of this model is given after item 3 below). A system is assumed to have n decision units DMU_j, j = 1, 2, ..., n, each with m input indicators (i = 1, 2, ..., m), s1 expected output indicators, and s2 unexpected output indicators. These indicators are, respectively, the vectors x ∈ R^m, y^g ∈ R^{s1}, y^b ∈ R^{s2}, and the matrices X, Y^g, Y^b are X = [x1, x2, ..., xn] ∈ R^{m×n}, Y^g = [y^g_1, y^g_2, ..., y^g_n] ∈ R^{s1×n}, Y^b = [y^b_1, y^b_2, ..., y^b_n] ∈ R^{s2×n}. The production possibility set is defined as

$$P = \{(x, y^g, y^b) \mid x \geq X\lambda,\; y^g \leq Y^g\lambda,\; y^b \geq Y^b\lambda,\; \lambda \geq 0\}.$$

The SBM model (VRS, with undesired output based on variable returns to scale) is expressed as

$$\rho^* = \min \frac{1 - \frac{1}{m}\sum_{i=1}^{m} s_i^-/x_{i0}}{1 + \frac{1}{s_1+s_2}\left(\sum_{r=1}^{s_1} s_r^g/y_{r0}^g + \sum_{r=1}^{s_2} s_r^b/y_{r0}^b\right)}$$

subject to x0 = Xλ + s^-, y^g_0 = Y^gλ - s^g, y^b_0 = Y^bλ + s^b, Σ_j λ_j = 1, and s^- ≥ 0, s^g ≥ 0, s^b ≥ 0, λ ≥ 0, where s^-, s^g, and s^b, respectively, denote the input, expected output, and unexpected output slacks; λ expresses the weight vector; and the objective function satisfies ρ* ∈ [0, 1].

2. Environment-related efficiency technology gap (y2): By using the meta-frontier SBM model, the efficiency can be calculated under the group frontier and the meta frontier, respectively, denoted τ^h and τ. Here τ is termed the total factor energy efficiency, and it decomposes into the technology gap (TG) and technical efficiency (TE):

$$\tau = TG \times TE, \qquad TG = \tau / \tau^h, \qquad TE = \tau^h.$$

The technology gap can be exploited to measure the technology gap between the group frontier and the meta frontier; the closer the value is to 1, the lower the technology gap will be.

3. Environment-related efficiency improvement potential (y3): In the SBM model of environment-related efficiency measurement, the optimal slack variables can be calculated simultaneously. The slack variables of input, expected output, and unexpected output are recorded as S^-, S^g, and S^b, respectively, which represent the optimal adjustment range of inputs and outputs. Since environment-related efficiency stresses energy input and pollutant emission, this study uses the energy input slack variable and the unexpected output slack variable to construct an evaluation index of environment-related efficiency improvement potential (EI), where S^-_E denotes the slack variable of energy input, i.e., the reduction in energy input, and S^b expresses the slack variable of unexpected output, i.e., the reduction in pollutant emission. Therefore, the larger the EI, the greater the potential for environment-related efficiency improvement will be.
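As promised above, here is a minimal computational sketch of the SBM efficiency score. It uses the standard Charnes-Cooper linearization of the fractional SBM program so that scipy's linprog can solve it; for brevity it implements the constant-returns variant (the VRS model used in the paper additionally imposes Σλ_j = t after the variable change), and all input/output data are synthetic.

```python
# Tone-style SBM with undesirable outputs via Charnes-Cooper linearization.
import numpy as np
from scipy.optimize import linprog

X  = np.array([[2., 3., 4., 5.], [1., 2., 2., 4.]])    # inputs, m x n
Yg = np.array([[3., 5., 6., 8.]])                      # good outputs, s1 x n
Yb = np.array([[1., 2., 3., 3.]])                      # bad outputs,  s2 x n
m, n = X.shape; s1, s2 = Yg.shape[0], Yb.shape[0]

def sbm(j0):
    x0, yg0, yb0 = X[:, j0], Yg[:, j0], Yb[:, j0]
    # variables: [t, lambda_1..n, s_minus_1..m, s_g_1..s1, s_b_1..s2]
    nv = 1 + n + m + s1 + s2
    c = np.zeros(nv)
    c[0] = 1.0
    c[1 + n: 1 + n + m] = -1.0 / (m * x0)              # min t - (1/m) sum s-/x0
    A, b = [], []
    row = np.zeros(nv); row[0] = 1.0                   # normalization = 1
    row[1 + n + m: 1 + n + m + s1] = 1.0 / ((s1 + s2) * yg0)
    row[1 + n + m + s1:] = 1.0 / ((s1 + s2) * yb0)
    A.append(row); b.append(1.0)
    for i in range(m):                                 # x0*t = X lam + s-
        row = np.zeros(nv); row[0] = x0[i]
        row[1:1 + n] = -X[i]; row[1 + n + i] = -1.0
        A.append(row); b.append(0.0)
    for r in range(s1):                                # yg0*t = Yg lam - sg
        row = np.zeros(nv); row[0] = yg0[r]
        row[1:1 + n] = -Yg[r]; row[1 + n + m + r] = 1.0
        A.append(row); b.append(0.0)
    for r in range(s2):                                # yb0*t = Yb lam + sb
        row = np.zeros(nv); row[0] = yb0[r]
        row[1:1 + n] = -Yb[r]; row[1 + n + m + s1 + r] = -1.0
        A.append(row); b.append(0.0)
    res = linprog(c, A_eq=np.array(A), b_eq=np.array(b), bounds=[(0, None)] * nv)
    return res.fun                                     # rho* in (0, 1]

for j in range(n):
    print(f"DMU {j}: environment-related efficiency = {sbm(j):.3f}")
```

The linearization keeps every constraint linear while the original fractional objective is recovered exactly at the optimum, which is why a plain LP solver suffices.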
4. Strength of environment-related rules (x1): Numerous indexes have been adopted in the existing literature to measure environment-related management strength, using proxy variables from different perspectives. In brief, they fall into four types: (1) measurement with a comprehensive index of pollution emissions — for instance, Li and Mu [32] used carbon emissions per unit output to determine the strength of environment-related management, while Fu and Li [33] and Li and Tao [34] used the emission of pollutants to build a comprehensive index method to determine the strength of environment-related management; (2) the use of investment expenditure on pollution control for the measurement — for example, Zhang Cheng et al. [35] exploited the overall investment in the control of industrial pollution as a measurement index; (3) measurement from the perspective of the economic development level — for example, Lu [36] adopted the level of per capita income to quantify informal environment-related management; (4) the use of the number of environment-related management laws and policies as the measurement index of environment-related management. Compared with the number of relevant laws and rules, the strength of environment-related management is determined by their actual implementation; thus, the number of laws and rules cannot directly measure the effect of environment-related management. Though the level of economic development is affected by the level of environment-related management to a certain extent, it can measure only informal environment-related management, and such a measurement cannot be completely accurate. As the investment quota for pollution control is directly affected by the size of the industry, large-scale industries will inject more capital and technology into pollution control; however, this index does not indicate that the industry's environment-related management strength is high. Thus, this study uses pollutant discharge as an index, fully considers the three types of pollutants (i.e., wastewater, waste gas, and waste), and comprehensively reflects the strength of environment-related management through technical treatment (e.g., unit output and standardization). By using the method for building comprehensive indexes proposed by Yu [37], and considering the availability of data, this study selects the two single indexes of industrial wastewater discharge and industrial waste gas discharge from various industries to build a comprehensive measurement index of environment-related management strength. The specific treatment is as follows: (1) the annual industrial wastewater discharge and industrial waste gas discharge of each region are divided by the total industrial output, as an attempt to solve the problem of differences in pollutant discharge among different industries; (2) the emission per unit output is standardized, and the values of each index are converted into the range [0,1]. Since both indexes are negative indexes, the following treatment is adopted:

$$x_{ij} = \frac{\max(v_{ij}) - v_{ij}}{\max(v_{ij}) - \min(v_{ij})},$$

where x_ij denotes the strength of environment-related management for a province in a year, v_ij represents the emission value of a province in a year (after de-differentiation), and max(v_ij), min(v_ij) are the maximal and minimal emissions of a province over the nine years (after de-differentiation), respectively; (3) the standardized pollutant indexes can meaningfully be added together because of their horizontal comparability.
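The following small sketch implements the min-max treatment of step (2) above for a negative indicator, together with the equal-weight combination described in the next paragraph. The array values and variable names are synthetic, chosen only for illustration.

```python
# Min-max standardization of negative indicators (emission per unit output),
# so that lower emission maps to a higher environment-related strength score.
import numpy as np

def standardize_negative(v):
    """x = (max(v) - v) / (max(v) - min(v)) maps a negative indicator into [0, 1]."""
    v = np.asarray(v, dtype=float)
    return (v.max() - v) / (v.max() - v.min())

wastewater = np.array([0.8, 0.5, 0.2, 0.6])   # per unit output -- synthetic
wastegas   = np.array([1.2, 0.9, 0.3, 1.0])   # per unit output -- synthetic

# Equal-weight combination of the two standardized single indexes.
strength = 0.5 * standardize_negative(wastewater) + 0.5 * standardize_negative(wastegas)
print(np.round(strength, 3))
```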
In this study, the standardized data of industrial wastewater discharge and industrial waste gas discharge are combined with equal weights to obtain the regional environment-related strength data. Notably, since environment-related management is measured by pollutant emissions, this study uses negative indexes to process the data; thus, the greater the value of environment-related strength, the greater the strength of environment-related management will be.

5. Technological progress (x2): The elevation of the level of technological development, i.e., technological progress, primarily consists of technological innovation and the renewal and transformation of technological processes, which allow resources and energy to be exploited more efficiently and stimulate environment-related efficiency. Industry R&D expenditure is divided by industry GDP to determine technological progress.

6. FDI dependency (x3): Industry FDI is divided by industry GDP, which reflects the degree of dependence of GDP on FDI in a specified period.

7. Profit margin of industries (x4): This study divides industry profit by gross output to obtain the industry profit margin index, thereby revealing industry competitiveness.

8. Capital labor structure (x5): This study uses the ratio of net fixed assets to the number of employees to determine the capital labor structure indicator.

9. Market concentration (x6): This study divides the difference between industrial added value and total wages by total industrial output to determine the market concentration index.

10. Nationalization rate (x7): Overall, non-state-owned enterprises exhibit higher production efficiency than state-owned enterprises. State-owned enterprises hold a stronger monopoly position in resources and energy and face weak cost constraints on resources and energy, which commonly leads to waste and inefficient use of resources and energy. The proportion of state-owned enterprises in China's industrial output is commonly high, and environment-related efficiency has declined accordingly. The nationalization rate index is determined as the ratio of the gross output value of state-owned and state-controlled enterprises to that of industrial enterprises above designated size.

Data Sources

The research scope of this part consists of the data of the mining; manufacturing; power, heat, gas, and water production and supply; and construction industries from 2007 to 2015. The basic data employed in this study originate from the China Statistical Yearbook, China Industrial Statistical Yearbook, China Population and Employment Statistical Yearbook, China Environment-Related Statistical Yearbook, and the official website of the National Bureau of Statistics.

Descriptive Statistics

Descriptive statistics of all variable factors in the industry are listed in Table 1. The average environment-related efficiency of the broader national industry reaches 0.835, and the average intensity of the effect of environment-related rules is 0.892. Descriptive statistics of variable factors in the mining industry are listed in Table 2. The average environment-related efficiency of the mining industry is 0.796, and the average value of the effect of environment-related rules is 0.76. The environment-related efficiency and effect of environment-related rules of the mining industry are lower than those of the broader national industry. Table 3 presents the descriptive statistics of variable factors in the manufacturing industry.
The average environment-related efficiency of the manufacturing industry is 0.846, and the average value of the effect of environment-related rules is 0.92. The averages of environment-related efficiency and of the effect of environment-related rules in the manufacturing industry are higher than those of the broader national industry. Descriptive statistics of the variable factors in the production and supply sectors of electricity, heat, gas and water are listed in Table 4. The average environment-related efficiency of the power, heat, gas and water production and supply industries is 0.851, and the average effect of environment-related rules is 0.944; both are the highest among the four industries. Descriptive statistics of the variable factors in the construction industry are listed in Table 5. The average environment-related efficiency of the construction industry is 0.795, and the average value of the effect of environment-related rules is 0.811; both averages are also low. Generally, the environmental efficiency and the intensity of environmental regulations in the manufacturing and the electricity, heat, gas and water production and supply industries have relatively high averages, above the full-sample values, while those of the mining and construction industries have relatively low averages, below the full-sample values. The standard deviation of environmental efficiency values in all industries is 0.1~0.2, indicating a low degree of dispersion. Adaptive Semi-Parametric Panel Model In terms of measurement approach, the semi-parametric model assumes a linear relationship between some of the independent variables and the dependent variable, while the other part exhibits a non-linear relationship. To capture the spatial heterogeneity of data in different regions, different regions need different smoothing factors. In addition, MADM methods can solve similar problems [33,38,39], whereas here more variables and the adaptability of the model must be considered. In this study, the adaptive semi-parametric model is employed for the econometric analysis. In a parametric model, the specific form of the econometric model is fixed in advance; however, the functional relation between the dependent and independent variables may not actually take that form. The nonparametric model does not assume a functional form and fits the data of the dependent and independent variables directly, which solves this problem to a certain extent [40]. Nevertheless, the nonparametric model may suffer from the curse of dimensionality, and fitting errors at the data boundary are larger.
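Before the formal specification in the next passage, a minimal numerical sketch may help fix ideas. It implements a penalized spline with a truncated-line basis and a single global smoothing parameter chosen by generalized cross-validation; this is a simplification of the spatially adaptive scheme used in the paper, in which the penalty varies across knots, and all names are illustrative.

import numpy as np

def fit_pspline(x, y, knots, lam):
    # Design matrix C = [1, x, (x - k1)_+, ..., (x - kK)_+].
    C = np.column_stack([np.ones_like(x), x] + [np.maximum(x - k, 0.0) for k in knots])
    # Penalty matrix D: zero for the polynomial part, lam for each spline coefficient.
    D = np.diag([0.0, 0.0] + [lam] * len(knots))
    A = np.linalg.solve(C.T @ C + D, C.T)  # A = (C'C + D)^{-1} C'
    coef = A @ y                           # penalized least-squares estimate
    yhat = C @ coef
    df_fit = np.trace(C @ A)               # = tr[(C'C + D)^{-1} C'C]
    return yhat, df_fit

def gcv(x, y, knots, lam):
    # Generalized cross-validation criterion; smaller is better.
    yhat, df = fit_pspline(x, y, knots, lam)
    n = len(y)
    return np.sum((y - yhat) ** 2) / (1.0 - df / n) ** 2

# Example: pick the smoothing parameter on a log-spaced grid.
# lam_best = min(np.logspace(-4, 4, 30), key=lambda l: gcv(x, y, knots, l))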
Thus, the self-adaptive semi-parametric panel model, in which part of the covariates enter linearly and part nonlinearly and different smoothing factors capture the heterogeneity across industries [41], is used for the econometric analysis to solve the aforementioned problems. The nonlinear component f(x) is represented by a penalized spline with a truncated polynomial basis,

f(x) = β_0 + β_1 x + ... + β_p x^p + Σ_{k=1}^{K} b_k (x − κ_k)_+^p,

where κ_1 < ... < κ_K are the knots. To avoid overfitting, it is assumed that b_k ~ N(0, σ_b^2(κ_k)) (the symbol ~ denotes "is distributed as"). Following Crainiceanu [42], the penalized spline model can be written in this mixed-model form (Equation (10) in the original numbering). σ_ε^2(x_i) and σ_b^2(κ_k) are modeled by log-linear models (Equations (12) and (13)); specifying σ_b^2(κ_k) in log-linear form yields a spatially adaptive method, which captures the spatial heterogeneity of the data so that different regions are given different smoothing factors. Following Baladandayuthapani [43], the prior of β_0, β_1, ..., β_p is normal with mean 0 and a large variance; the priors of {b_k}, k = 1, 2, ..., K, are independent normals, b_k ~ N(0, σ_b^2(κ_k)); and the priors of σ_ε^2(x_i) and σ_b^2(κ_k) are Gamma distributions. Let y = [y_1, y_2, ..., y_n]^T and let the i-th row of the design matrix C be [1, x_i, ..., x_i^p, (x_i − κ_1)_+^p, ..., (x_i − κ_K)_+^p]. The penalty matrix D(b) depends on b: its first p + 1 diagonal elements are 0 and its remaining diagonal elements are b^2(κ_1), ..., b^2(κ_K). The penalized spline estimate v̂ = (β̂^T, û^T)^T with adaptive smoothing coefficient b can then be determined following Ruppert et al. [41]:

v̂ = (C^T C + D(b))^{−1} C^T y.

The adaptive smoothing coefficient is determined by minimizing the generalized cross-validation statistic [41],

GCV(b) = Σ_{i=1}^{n} (y_i − ŷ_i)^2 / (1 − df_fit(b)/n)^2,

where df_fit(b) = tr[(C^T C + D(b))^{−1} C^T C] denotes the degrees of freedom of the fit. To analyze the relationship between the effect of environment-related rules and environment-related efficiency, the previous analysis revealed that the impact of the effect of environment-related rules on environment-related efficiency is non-linear, and factors such as the upgrading of industrial structure, GDP per capita, human resources level, urbanization rate, openness and technological progress should be controlled for. Accordingly, this study established an adaptive semi-parametric panel model with both non-linear and linear variable factors. The model (Equation (16)) takes the form

y_it = f(x_{1,it}) + β_2 x_{2,it} + ... + β_9 x_{9,it} + ε_it,

and the full sample and the various industries are analyzed with it, respectively, where β_i (2 ≤ i ≤ 9) denotes the coefficient of the corresponding variable, given the meaning of y_i and x_1 - x_9 above. Empirical Results With the AdaptFit module of the R software, the estimated results of the adaptive semi-parametric panel model for the whole sample and for the various industries are listed in Table 6. The table shows that technological progress boosts environment-related efficiency, revealing that technological progress contributes to environment-related efficiency enhancements, while FDI dependency significantly reduces environment-related efficiency.
The industrial profit margin affects environment-related efficiency both positively and negatively: in the whole sample, the mining industry and the production and supply of electricity, heat, gas and water, the industrial profit margin enhances environment-related efficiency, whereas in the manufacturing and construction industries it significantly reduces environment-related efficiency. The capital-labor structure has a negative impact on environment-related efficiency. The effect of market concentration on environment-related efficiency is negative; only the market concentration of the construction industry enhances environment-related efficiency, and this positive effect is slight. The nationalization rate positively impacts environment-related efficiency. Table 6. The estimation of environment-related efficiency for the full sample and industries by the adaptive semi-parametric panel model. [Table 6 column headers: Variable Code, Name, Full Sample, Mining Industry, Manufacturing Industry, Production and Supply of Electricity, Heat, Gas and Water, Construction Industry; only the row label "Technical progress" survived extraction.] Note: Values in brackets are p values; p values less than 0.001 are recorded as ***, and p values less than 0.01 as **. With the adaptive semi-parametric regression model, the fitted relationships between the effect of environment-related rules and environment-related efficiency are presented in Figures 1-5, for the broader national industry, the mining industry, the manufacturing industry, the power, heat, gas and water production and supply industries, and the construction industry, respectively. According to Figure 1, the relationship between the effect of environment-related rules and environment-related efficiency in the broader national industry is a U-curve, and significant heterogeneity is identified across the four industries; thus, it is essential to delve into the relationship between the effect of environment-related rules and environment-related efficiency in each of the four industries. Figures 2 and 3 show that the relationships in the mining and manufacturing industries are flat: the effect of environment-related rules in these two industries only slightly impacts environment-related efficiency enhancements and should be strengthened. Figure 5 shows an increasing relationship in the construction industry, where the effect of environment-related rules has a certain impact on environment-related efficiency enhancements.
Figure 4 shows that the relationship between the effect of environment-related rules and environment-related efficiency in the power, heat, gas and water production and supply industries exhibits an N-shaped curve, i.e., as the effect of environment-related rules rises, environment-related efficiency is first up-regulated, subsequently down-regulated, and then rises again. Accordingly, the government should adopt different policies for different industries, rationally use environment-related rule policies, and realize the optimal development of the industrial economy and the environment. Now, environment-related efficiency is divided into two parts: the technical gap of environment-related efficiency and the potential of environment-related efficiency enhancements. Subsequently, the two parts and the effect of environment-related rules are econometrically measured and analyzed, respectively. Table 7 lists the estimated results of the self-adaptive semi-parametric panel model for the whole sample and for the technological gap of environment-related efficiency in the various industries.
It shows that the impact of technological progress on the technological gap of environment-related efficiency is positive, demonstrating that technological progress is conducive to the expansion of the technological gap of environment-related efficiency. The FDI dependency of the broader national industry, the manufacturing industry and the power, heat, gas and water production and supply industries obviously broadens the technological gap of environment-related efficiency, whereas the FDI dependency of the mining and construction industries effectively narrows it. The industrial profit margin obviously broadens the technological gap of environment-related efficiency. The capital-labor structure of the broader national industry and the power, heat, gas and water production and supply industry obviously broadens the technological gap of environment-related efficiency, whereas the capital-labor structure of the manufacturing and construction industries effectively narrows it. The effect of market concentration on the technological gap of environment-related efficiency is negative; only the market concentration of the mining industry effectively narrows the technological gap, while the effect of the market concentration of the broader national industry on the technological gap is positive. The nationalization rate of the four major industries obviously broadens the technological gap of environment-related efficiency. [Table 7 column headers: Variable Code, Name, Full Sample, Mining Industry, Manufacturing Industry, Production and Supply of Electricity, Heat, Gas and Water, Construction Industry; only the row label "Technical progress" survived extraction.] Note: Values in brackets are p values; p values less than 0.001 are recorded as ***, p values less than 0.01 as **, and p values less than 0.05 as *. The fitted relationships between the effect of environment-related rules and the technical gap of environment-related efficiency are presented in Figures 6-10, for the broader national industry, the mining industry, the manufacturing industry, the power, heat, gas and water production and supply industries, and the construction industry, respectively. Figures 6-10 show an upward relationship between the effect of environment-related rules and the technical gap of environment-related efficiency in the broader national industry.
The relationship between the effect of environment-related rules and the technical gap of environment-related efficiency in the mining industry complies with a U-shaped curve, and over the mining sample data the relationship is basically upward. A decreasing relationship was identified between the effect of environment-related rules and the technical gap of environment-related efficiency in the manufacturing industry. The relationships between the effect of environment-related rules and the technical gap of environment-related efficiency in the production and supply of electricity, heat, gas and water and in the construction industry are both increasing. The estimated results of the self-adaptive semi-parametric panel model for the whole sample and the potential of environment-related efficiency enhancements in the various industries are listed in Table 8. As can be seen from Table 8, the technological progress of the broader national industry and the mining and manufacturing industries elevates the potential of environment-related efficiency enhancements, while in the production and supply of electricity, heat, gas and water and the construction industry it has a negative impact on this potential. The FDI dependency of the broader national industry, the production and supply of electricity, heat, gas and water and the construction industry adversely affects the potential of environment-related efficiency enhancements, whereas the FDI dependency of the mining and manufacturing industries elevates it. The industrial profit margin adversely affects the potential of environment-related efficiency enhancements; only the industrial profit margin of the mining industry boosts it. The capital-labor structure boosts the potential of environment-related efficiency enhancements, while market concentration adversely affects it. The nationalization rate of the broader national industry and the mining and manufacturing industries adversely affects the potential of environment-related efficiency enhancements, whereas the impact of the nationalization rate of the electricity, heat, gas and water production and supply industries and the construction industry on this potential is positive.
The fitted relationships between the effect of environment-related rules and the potential of environment-related efficiency enhancements are presented in Figures 11-15, for the broader national industry, the mining industry, the manufacturing industry, the power, heat, gas and water production and supply industries, and the construction industry, respectively. Figures 11-15 show a W-shaped curve relationship between the effect of environment-related rules and the potential of environment-related efficiency enhancements in the broader national industry. However, when the four industries of mining, manufacturing, electricity, heat, gas and water production and supply, and construction are considered separately, the relationships between the effect of environment-related rules and the potential of environment-related efficiency enhancements are decreasing. As the effect of environment-related rules strengthens, the potential of environment-related efficiency enhancements in the manufacturing industry decreases sharply, while that in the power, heat, gas and water production and supply industries decreases only slightly. [Table 8 note: Values in brackets are p values; p values less than 0.001 are recorded as ***, p values less than 0.01 as **, and p values less than 0.05 as *.]
Industry Classification Existing studies on environment-related rules of industries have commonly performed a simple classification of industries; most frequently, all industries are split into pollution-intensive industries and cleaner production industries. According to the classification of Shen [21], the 38 industries are divided into pollution-intensive industries and cleaner production industries, as presented in Table 9. Table 9. Industry Classification. Pollution-Intensive Industries: coal mining and washing; non-metallic mining and dressing; ferrous metal mining and dressing; petroleum and natural gas mining; non-metallic mineral products; chemical fiber manufacturing; chemical raw materials and products manufacturing; non-ferrous metal smelting; petroleum processing; non-ferrous metal mining and dressing; ferrous metal smelting; paper and paper products; electricity and thermal production; gas production and supply; pharmaceutical manufacturing; rubber products. Cleaner Production Industries: metal products; food manufacturing; beverage manufacturing; textile and garment manufacturing; water production and supply; electrical machinery manufacturing; cultural and educational supplies manufacturing; communication equipment manufacturing; special equipment manufacturing; general equipment manufacturing; crafts manufacturing; automobile manufacturing; railway, ship, aerospace and other transportation equipment manufacturing; waste resource processing; wood processing products; printing industry; furniture manufacturing industry; tobacco products; instruments; agricultural and sideline food processing; leather and fur products. Threshold Regression Model In this section, a threshold regression model is used to determine the threshold effect of environment-related rules on environment-related efficiency across industries, distinguishing polluting industries from non-polluting industries. According to Qu [34], for a single threshold, the effect of environment-related rules with a one-period lag is taken as the threshold variable q_it, and the panel threshold regression model is set as

y_it = μ_i + β_1 x_it I(q_it ≤ γ) + β_2 x_it I(q_it > γ) + ε_it,

where I(·) is the indicator function: when the condition in brackets is satisfied, I(·) equals 1, and otherwise 0. With the xthreg command in the Stata software, the estimated threshold values for the broader national industry are 0.8851, 0.9829, 0.8831 and 0.9841. The optimal range of the effect of environment-related rules is (0.8831, 0.9829) for pollution-intensive industries and (0.8831, 0.9841) for cleaner production industries. There is indeed a threshold effect in environment-related rules.
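Stata's xthreg implements this estimation; as a rough illustration of how such a threshold is located, the following is a minimal grid-search sketch of the generic single-threshold case. It is not the xthreg implementation: fixed effects and inference on the threshold are omitted, and all variable names are hypothetical.

import numpy as np

def ols_ssr(X, y):
    # Sum of squared residuals from an ordinary least-squares fit.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

def estimate_threshold(y, x, q, trim=0.10, grid_size=50):
    # Single-threshold model y = b1*x*I(q <= g) + b2*x*I(q > g) + e,
    # where q is the (lagged) regulation-strength threshold variable.
    # The threshold g minimizes the pooled SSR over a trimmed grid.
    grid = np.quantile(q, np.linspace(trim, 1.0 - trim, grid_size))
    best_g, best_ssr = None, np.inf
    for g in grid:
        low = (q <= g).astype(float)
        X = np.column_stack([np.ones_like(x), x * low, x * (1.0 - low)])
        ssr = ols_ssr(X, y)
        if ssr < best_ssr:
            best_g, best_ssr = g, ssr
    return best_g, best_ssr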
When the effect of environment-related rules is lower than the optimal range, environment-related rule policies do not fulfill their role. When it is higher than the optimal range, the effect of environment-related rules is too strong, exceeding what the industry can bear and distorting the development of the industry. According to the sample data, there is little difference in the optimal range of the effect of environment-related rules between pollution-intensive industries and cleaner production industries, which indicates that the existing environment-related rule policies do not treat different industries differently. Conclusions This study used the adaptive semi-parametric panel model to estimate the results along five dimensions: the broader national industry; mining; manufacturing; the power, heat, gas and water production and supply industry; and the construction industry, to analyze how the effect of environment-related rules affects environment-related efficiency. The following conclusions were drawn: (1) The relationship between the effect of environment-related rules and environment-related efficiency in the broader national industry complies with a U-shaped curve; the relationships in the mining and manufacturing industries are flat; the relationship in the power, heat, gas and water production and supply industry complies with an N-shaped curve; and the relationship in the construction industry is rising. (2) An upward relationship was identified between the effect of environment-related rules and the technical gap of environment-related efficiency in the broader national industry, and a U-shaped curve relationship in the mining industry, where the relationship is basically upward over the mining sample data; a downward relationship was identified in the manufacturing industry, while upward relationships were revealed in the production and supply of electricity, heat, gas and water and in the construction industry. (3) A W-shaped curve relationship was identified between the effect of environment-related rules and the potential of environment-related efficiency enhancements in the broader national industry, whereas when the four industries of mining, manufacturing, power, heat, gas and water production and supply, and construction are considered separately, the relationships between the effect of environment-related rules and the potential of environment-related efficiency enhancements are declining.
With the improvement of the effect of environment-related rules, the potential of environment-related efficiency enhancements in the manufacturing industry decreases the most, while that in the power, heat, gas and water production and supply industries decreases less. (4) The government is required to apply different environment-related rule policies to different industries and their different stages of development, comprehensively employ industrial and technological environment-related rule policies, optimize the environment-related rule system, and achieve the coordinated development of the industrial economy and the environment. The empirical results indicate that it is critical to enhance the effect of environment-related rules in the mining and manufacturing industries and elevate their technical level, so that the effect of environment-related rules has a positive relationship with environment-related efficiency enhancements. (5) Environment-related rules exert a threshold effect on environment-related efficiency. When the effect of environment-related rules is lower than the optimal range, environment-related rule policies do not play their due role; when it is higher than the optimal range, the effect is too strong, exceeding what the industry can bear and distorting the development of the industry. According to the sample data, the optimal ranges of the effect of environment-related rules for pollution-intensive industries and cleaner production industries are relatively close, demonstrating that the existing environment-related rule policies do not differentiate between them; future environment-related rule policies should do so. Given the aforementioned conclusions, the following policy suggestions are proposed. (1) According to different industries and their different development stages, the government should implement different environment-related rule policies, comprehensively employ environment-related rule policies (e.g., industrial and technological policies), improve the environment-related rule system, and develop the industrial economy and environment in a coordinated manner. The empirical results suggest that it is particularly necessary to strengthen the intensity of environment-related rules in the mining and manufacturing industries and improve their technical level, enabling a positive relationship between the intensity of environment-related rules and the improvement of environment-related efficiency. (2) For environment-related policies, pollution-intensive industries and cleaner production industries should be treated differently, and the optimal range of environment-related rules in different industries should be considered so as to ensure the steady improvement of industrial production technologies. Data Availability Statement: Publicly available datasets were analyzed in this study. These data can be found here: (http://www.stats.gov.cn/ (accessed on 15 February 2018)). Conflicts of Interest: The author declares no conflict of interest.
2021-07-27T00:05:08.128Z
2021-05-31T00:00:00.000
{ "year": 2021, "sha1": "3ca656541352fb1ba4c8558414739a8c25970fb1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/13/11/6203/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "e00ef0e1a0014cdaae6236212511c676a1916fc5", "s2fieldsofstudy": [ "Business", "Economics" ], "extfieldsofstudy": [ "Business" ] }
198278185
pes2o/s2orc
v3-fos-license
Preoperative imaging in staghorn calculi, planning and decision making in management of staghorn calculi Objective Staghorn calculi present a particular and challenging entity of stone morphology. Treatment is associated with lower stone-free rates and higher complication rates compared to non-staghorn stones. In this review we looked for the most relevant data on preoperative imaging and access planning to help decision making for percutaneous surgery in this complex condition. Methods We conducted a PubMed search of publications in the past 2 decades that include relevant information on the planning for management of staghorn stones. Non-contrast computerized tomography (NCCT) is indeed the standard imaging tool for percutaneous nephrolithotomy (PCNL); additional tools such as three-dimensional computed tomography (CT) reconstruction of the staghorn calculus may help plan access in complex cases. Ultrasound-guided percutaneous access may be considered for staghorn stones when planning upper pole access in kidney malposition or complex intrarenal anatomy or with complex body habitus. Wideband Doppler ultrasound and real-time virtual sonography can assist. New technologies to improve kidney access such as Uro Dyna-CT or electromagnetic sensors have been reported, but have not yet been applied to staghorn cases. Staghorn morphometry-based prediction algorithms may predict the number of tract(s) and stage(s) for PCNL monotherapy. Lower pole access can be equally effective as upper pole access when planning for staghorn and complex stones, with a significantly lower complication rate; Stone-Tract length-Obstruction-Number of involved calyces-Essence of stone density (STONE) nephrolithometry seems to be the best system to predict outcomes of PCNL in staghorn cases. There is a growing trend of endoscopic combined intrarenal surgery (ECIRS) in concordance with PCNL to treat larger stones. Conservative management of staghorn calculi is an undesired option, but can be an alternative for a carefully selected group of high-risk patients. Conclusion Staghorn stones may lead to deterioration of renal function and life-threatening urosepsis. This entity should be managed aggressively, with planning ahead for surgery using the different tools available as the cornerstone of a successful outcome. © 2020 Editorial Office of Asian Journal of Urology. Production and hosting by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Introduction Staghorn stones are a specific entity of kidney stones that branch out and fill the renal pelvis and part or all of the intrarenal calyceal system. This particular form of stone can present in a complete or partial configuration. It is usually unilateral and more prevalent in females [1]. Traditionally, staghorn stones have a close association with urinary tract infections caused by urea-splitting organisms and consist of pure magnesium ammonium phosphate (struvite) or a mixture of struvite and calcium carbonate apatite [1]. These "infection stones", if left untreated, grow rapidly and may lead to deterioration of renal function, end-stage renal disease, and life-threatening urosepsis [2]. Percutaneous nephrolithotomy (PCNL) represents the standard of care for staghorn stones [2]. A large series of patients with staghorn stones shows that PCNL for staghorn stones has a lower stone-free rate (SFR), a higher complication rate, and increased operation time and length of hospitalization compared to non-staghorn stones [3].
We herein provide an overview of the preoperative planning and decision making to be considered in the management of staghorn calculi. Preoperative imaging Successful PCNL relies on meticulous preoperative planning and optimal percutaneous access. Computed tomography (CT) has become the standard imaging modality for PCNL planning. Preoperative CT allows the selection of the optimal percutaneous renal access. Intraoperative fluoroscopy and/or ultrasound (US) are necessary to carry out directed percutaneous renal puncture and the following tract dilation before endoscopic inspection of the collecting system. Finally, postoperative imaging (CT, US or kidneys, ureters and bladder [KUB] radiography) determines the presence and volume of residual fragments to ascertain the need for second-look flexible nephroscopy. Safe and efficient percutaneous access and stone removal require that the endourologist has a clear and accurate understanding of the pelvicaliceal system anatomy as well as stone location with respect to the infundibular and caliceal system. This is even more crucial in patients with anatomic challenges such as morbid obesity, kyphoscoliosis and ectopic or mal-rotated kidneys, who are at greater risk for poor access, incomplete stone removal, and injury to adjacent organs. Since spiral CT was first introduced into clinical practice in the late eighties, it has become the standard imaging tool for PCNL. Advantages of non-contrast CT (NCCT) before renal stone surgery have become increasingly evident and include localizing peripheral stones to anterior or posterior calices, determining the direction of caliceal extensions of staghorn calculi, evaluating the thickness of the parenchyma that overlies calculi, and visualizing even stones that are poorly seen on plain radiographs [4]. The development of helical CT with the elimination of respiratory artefacts has improved image reconstruction even further while analyzing the surrounding structures in order to plan the access, and has further demonstrated the safety of supracostal access with the patient in the prone position [4]. Non-contrast three-dimensional (3D) CT reconstruction of the staghorn calculus for planning access has also been evaluated in several studies, with the thought that 3D reconstruction of the renal stone can help determine the access site and intraoperative orientation. Hubert et al. [5] used this method in 27 renal calculi, of which 23 were staghorn stones, and found that the access site was altered in a third of patients compared with what would have been adopted if the corresponding axial CT and intravenous urography (IVU) had been used. Li et al. [6] performed PCNL successfully with the assistance of a 3D model in 15 patients with complex stones, and eight partial/complete staghorn stones were managed with a one-stage SFR of 93.3%. There may also be a role for contrast-enhanced CT scans in patients with staghorn stones. Thiruchelvam et al. [7] assessed a modified technique of multidetector computed tomographic urography (CTU) to map the pelvicalyceal system (PCS) for complex renal calculi. Of the 10 CTUs performed, three were for staghorn stones. These showed good infundibular anatomy and provided a good map of the stones in relation to the PCS. With reconstructed images, subjectively the 3D imaging provided an advantage over conventional imaging in optimizing nephrostomy placement.
The authors suggested reducing radiation exposure by performing CTU without a preceding unenhanced CT, as all the stones detected on the CTU after contrast injection were visualized on the unenhanced CT. Mishra et al. [8] suggested a CT urography staghorn morphometry-based prediction algorithm to predict the tract(s) and stage(s) for PCNL monotherapy and used it to classify staghorn stones accordingly. They used a retrospective case-control design of 94 renal units. CT software calculated the total stone volume (TSV) with absolute volume and percentile volume in the pelvis, planned entry calix, and favorable and unfavorable calices. This model of staghorn morphometry differentiated staghorn stones into type 1 (single tract and stage), type 2 (single tract with single/multiple stages, or multiple tracts with a single stage), and type 3 (multiple tracts and stages). Planning on the access calyx and number of tracts PCNL represents the standard of treatment for large renal stones. Even though it is not a complication-free procedure, it is still considered a minimally invasive procedure providing a high success rate and safety profile [8,9]. Complete removal of stones is crucial for preventing recurrence and morbidity. The most favored approaches when performing PCNL for staghorn stones are upper pole (UP) access and multiple tract access. The literature favors UP caliceal access as the one that allows the best entry into the entire pelvicalyceal system, thereby allowing a better approach to the stone burden, better SFRs, fewer percutaneous kidney tracts, and less manipulative trauma compared to lower pole (LP) or multiple tract access [10,11]. While the UP calyx allows direct access to the intrarenal collecting system and a potentially greater stone clearance rate, one should keep in mind that the complication rates, especially thoracic complications and bleeding, are significantly more common with this approach. Tefekli et al. [12] reviewed 4 494 patients from the data collected by the Clinical Research Office of the Endourological Society (CROES) from consecutive patients at 96 centers globally. Upper pole access was utilized for more staghorn stones (21.7% vs. 15.5%, p<0.001). Overall perioperative complication rates were higher in the UP-access group compared to LP (23.5% vs. 16.1%, p<0.001). Pulmonary complications (hydrothorax) were significantly more common with UP access (5.8% vs. 1.5%, p<0.001). The transfusion rate was also significantly higher for UP access compared to LP (7.3% vs. 4.0%, p<0.002). In a recent report, Blum et al. [13] looked at a total of 76 patients with complete staghorn stones. The lower pole was accessed in 59 (77.6%) patients, with similar efficacy and decreased morbidity compared to UP access in patients in the prone position. They did not find any difference in the ability to complete the surgery utilizing a single tract as opposed to multiple tracts (74.6% of LP patients vs. 76.5% of UP patients). SFRs for LP and UP access were similar (74.5% vs. 70.5%, respectively; p=0.760) and the complication rate was lower for LP access vs. UP access (3.4% vs. 23.5%, p=0.02). The debate continues over the use of single tract PCNL with complementary flexible nephroscopy and/or ureteroscopy, versus multiple tracts [14]. Akman et al. [15] retrospectively reviewed the records of 413 patients with partial or complete staghorn stones. Single access was performed in 244 (59%) patients and multiple accesses were necessary in 169 (41%) patients.
Mean fluoroscopy and operative times were significantly longer in the multiple tract group. Success after one-stage PCNL was achieved in 70.1% with a single tract and in 81.1% with multiple tracts (p=0.012). The most common complication was bleeding in both groups, and it was higher in the multiple tract group (hemoglobin drop 2.1±1.7 g/dL in the single tract vs. 2.5±1.6 g/dL in the multiple tract group, p<0.0001). Turna et al. [16] retrospectively analyzed the data of 193 PCNL procedures and found that staghorn stones (p=0.006) and multiple tracts (p=0.038) were associated with increased renal hemorrhage during PCNL on multivariate analysis. Decrease in renal function is another factor to consider when planning single or multiple tracts. We recently reported that multiple percutaneous accesses are associated with a small reduction in the differential renal function of the operated kidney when compared to a single access approach. We identified 110 cases in which renography was performed before surgery and between 1 month and 1 year after PCNL and found a significant 2.28% decrease in renal function of the affected kidney in patients who received multiple tracts (p<0.01) [17]. Other studies favor a more aggressive approach when treating staghorn calculi by showing the safety and efficacy of multiple tracts. Singla et al. [18] retrospectively analyzed 164 renal units in 149 patients with 2-6 tracts per unit. A complete stone clearance rate of 70.7% was achieved after a single session of PCNL, increasing to 89% after a second-look procedure and extracorporeal shock wave lithotripsy. The complications described included blood transfusion in 46 patients, sepsis in eight and hydrothorax in seven. Fluoroscopy versus ultrasound-guided PCN access Fluoroscopy is the most commonly used modality for PCN access. Several techniques have been described, and all have proved to be efficient. The choice of technique should be decided merely on the basis of the surgeon's experience. The main disadvantages of fluoroscopic access are the single-plane projection and the radiation exposure to the patient and the staff in the operating theatre. US-guided percutaneous access has several advantages compared with fluoroscopic access. US is a readily available, inexpensive and portable unit in the OR; it provides three-dimensional orientation and guidance for access in multiple planes (longitudinal, transverse and oblique); it allows measurement of the stone-to-skin distance; and it avoids the risk of injury to adjacent organs, mainly the colon, liver or spleen, and the risk of transthoracic puncture. Doppler US may prevent puncture of important vascular structures and allows real-time monitoring of the needle tip placement, reducing radiation exposure significantly during surgery. While US-guided access can be used for any patient, indications where US has a clear advantage over fluoroscopy are upper pole access, kidney malposition, complex intrarenal anatomy, complex patient body habitus, a reconstructed urinary tract after urinary diversion, or when retrograde contrast injection cannot be performed. Several studies have compared fluoroscopy versus US-guided access during PCNL. Andonian et al. [19], in another of the CROES publications on PCNL, analyzed whether the imaging modality used for percutaneous renal access made a difference. The study included 453 (13.7%) patients in whom US-guided access was utilized and 2 853 (86.3%) patients accessed with fluoroscopy.
In the univariate analysis they found a significant reduction in the risk of transfusion in favor of US (6.0% versus 13.1% for fluoroscopy, p=0.001). On the multivariate analysis they found that the risk of bleeding was associated with the size of the tract, with a 4.91-fold increase when using 27-30 Fr sheaths compared to a sheath smaller than 24 Fr, and with the number of tracts, with a 2.6-fold increase for multiple versus single tracts. There has been recent interest in performing X-ray-free ultrasound access to the kidney. Usawachintachit et al. [21] reported on their technique of X-ray-free ultrasound-guided PCNL. They looked at 96 consecutive patients and concluded that the ideal candidate for a completely X-ray-free ultrasound-guided PCNL should have a hydronephrotic collecting system with no staghorn stone present. Inoue et al. [22] evaluated the efficacy and safety of wideband Doppler ultrasound-guided mini-endoscopic combined intrarenal surgery (mini-ECIRS) for large renal stones. This method displays a clearer image of the path of the blood vessels in real time than conventional color Doppler and can therefore be used to accurately visualize peripheral vascular flow. Forty-one patients with a mean stone size of 45.5 mm, of which 41.4% were staghorn stones, were included. The mean total operative time was 158.4±51.3 min. SFR was defined as residual fragments smaller than 4 mm on X-ray and ultrasonography on day 1 and at 1 month postoperatively. An initial SFR of 73.2% was reported, with a mean hemoglobin drop of 0.54 g/dL and three (7.3%) postoperative modified Clavien grade II complications. New technologies for percutaneous renal access Performing the puncture of the renal collecting system is the most challenging step in PCNL, whether it is performed with the use of standard fluoroscopy or in combination with ultrasound-based maneuvers. The current challenge for percutaneous renal surgery is to improve the accuracy of the puncture, using real-time anatomic navigation systems to reduce puncture-related complications and improve the procedure's efficacy. In recent years several novel techniques for percutaneous kidney access have been developed. Lima et al. [23] used flexible ureteroscopy to insert an electromagnetic sensor into the optimal renal calyx for access. The selected calyx was then punctured with a needle carrying a sensor on its tip, guided by real-time 3D images observed on the monitor. Rassweiler et al. [24] reported an iPad-assisted access that applies marker-based tracking for puncture of the collecting system. A CT is performed preoperatively in a similar prone position on a PCNL cushion with six colored radiopaque markers on the skin around the target area, and, using special software, the virtual anatomy displayed on the iPad correlates with the real anatomy and can be used for puncture. Uro Dyna-CT (Siemens Medical Solutions, Erlangen, Germany) utilizes a digital angiography unit that rotates around the patient and creates 3D reconstructions of target structures [25]. Using special software and based on the "bull's eye" technique, a laser light simulates the puncture line in any position of the C-arm that might be necessary during puncture. An obvious disadvantage of this technology is that the image acquisition uses higher radiation doses than standard fluoroscopy. Real-time virtual sonography (RVS) is another technology suggested to assist in access to the kidney.
This is a diagnostic imaging support system that synchronizes real-time US with CT or magnetic resonance imaging, via a magnetic navigation system, to provide volume and position data, side by side, in real time. It has been used for radiofrequency ablation of hepatocellular carcinoma, and for biopsy and focal therapy of prostate cancer. Hamamoto et al. [26] evaluated it for percutaneous renal puncture during endoscopic combined intrarenal surgery. Thirty patients were divided evenly between RVS-guided puncture and US-guided puncture. In the RVS group, renal puncture was repeated until precise piercing of a papilla was achieved under direct endoscopic vision. The mean sizes of the renal calculi in the RVS and the US group were 33.5 mm and 30.5 mm, respectively. A lower mean number of puncture attempts was needed in the RVS group compared with the US group (1.6 times vs. 3.4 times, p=0.001). The RVS group had a lower mean postoperative hemoglobin decrease (0.93 vs. 1.39 g/dL, p=0.04), with no differences with regard to operative time, tubeless rate, and SFR, and with no postoperative complications of Clavien score ≥2. While these technologies represent promising ways to improve renal access and the final outcomes for patients with renal stones, to date none of these alternatives has been approved for routine utilization during percutaneous surgery and none of them has been implemented for PCNL in staghorn stones on a large scale. Patient position: prone versus supine A recent meta-analysis [27], the most updated to date and including 6 881 patients, showed that the prone position was associated with a higher rate of stone clearance than the supine position (odds ratio [OR]: 0.74; 95% CI: 0.65 to 0.84; p<0.00001). A shorter mean operative time was observed in the supine groups (weighted mean difference [WMD]: −18.27; 95% CI: −35.77 to −0.77; p=0.04), as well as a lower incidence of blood transfusions in favor of the supine group (WMD: 0.73; 95% CI: 0.56 to 0.95; p=0.02). Astroza et al. [28], on behalf of CROES, analyzed the effect of supine versus prone position on the outcomes of PCNL in patients with staghorn stones. They looked at a total of 1 311 patients, 1 079 PCNLs performed in the prone and 232 in the supine position, and found that SFR was higher for patients in the prone position (48.4% vs. 59.2%; p<0.001). Of note, upper pole access was utilized significantly more often in the prone position compared to supine (12.6% vs. 3.6%, p<0.001), supporting the fact that accessing the upper pole is more challenging when the patient is positioned supine. Surgical time was shorter in the prone position (103.2 min vs. 123.1 min; p<0.001); the retreatment rate was higher in the supine position (36.1% vs. 21.5%; p<0.05); there were no differences in complication rates between the two groups. Gökce et al. [29] looked at 48 patients operated in the prone position and 39 patients operated in the supine position and reported that multi-caliceal and intercostal access was more common in the prone position. Operation duration was significantly shorter and hemoglobin drop significantly less in the supine group, while complication rates were similar in the two groups. SFR was similar (64.1% and 60.4% in the supine and prone groups, respectively; p=0.72).
There are very limited restrictions for each one of the surgical approaches and there is not enough evidence in the literature to make strong recommendations on a superior position. The decision on the patient position modality to treat renal stones, including staghorn stones, depends strictly on the surgeon preference. ECIRS Another growing trend in the past decade is ECIRS, using both PCNL and RIRS to treat larger stones [30]. This approach has the apparent advantage of avoiding multiple tracts and therefore bleeding complications. Since then, several modifications to this approach have been published, including, among others, the use of mini-PCNL instead of standard PCNL and the use of semi-rigid URS. This technique is becoming more popular and larger series will be needed to probe the benefits treating staghorn stones [31]. Predicting outcomes of PCNL for staghorn stones There are currently three systems validated to predict prognosis of PCNL outcomes, the Guy's stone score (GSS), the CROES nomogram and Stone-Tract length-Obstruction-Number of involved calyces-Essence of stone density (STONE) nephrolithometry [32e34]. All three nomograms systems have shown high accuracy and proved to be reliable in predicting SFRs after PCNL [35,36]. Sfoungaristos et al. [37] analyzed a total of 73 staghorn calculi with mean Guy's, CROES and STONE scores of 3.34, 125.8 and 9.95, respectively. Postoperative SFR was 65.8% and STONE nephrolithometry was found to be the only predictor for SFR after PCNL for staghorn stones compared to Guy's and CROES nomograms. Conservative management of a staghorn stone The natural history of untreated staghorn stones is one of progressive morbidity and mortality. It destroys the kidney and causes life threatening risks from end stage kidney disease and infectious complications. This has been consistently shown in earlier and recent reports. Blandy and Singh [39] in 1976 reported a 10-year mortality rate of 28% with conservative treatment. Teichman and colleagues [40] in 1995 retrospectively reviewed 177 consecutive staghorn patients with an average follow-up of 7.7 years. They reported an overall rate of renal deterioration of 28% and 67% renal-related causes of the deaths for those who declined treatment. Despite this evidence, ever so often clinicians face with the need of deciding for conservative treatment due to severe comorbidities, restrictions for renal access due to difficult anatomy and even patient or family decisions. Morgan et al. [41] described the overall outcomes in a cohort of patients with staghorn calculi treated conservatively. Fourteen out of a cohort of 29 patients were treated conservatively over a mean follow-up of 24 months. None of the study patients required hemodialysis or developed an abscess. There was only one related admission for pyelonephritis and one death from urosepsis of a patient that had been noncompliant with follow-up. Deutsch and Subramonian [42] evaluated the outcomes of 22 patients with unilateral or bilateral staghorn calculi conservatively managed. The rate of recurrent UTIs was 50%; the progressive renal failure rate was 14%; the disease-specific mortality rate was 9%; the dialysis dependence rate was 9%; the rate of hospital attendances attributable to stone-related morbidity was 27%. Therefore, conservative management of staghorn calculi can be an option for a carefully selected patients that should be counseled thoroughly regarding the risks entailed with this choice. 
Conclusion Staghorn stones are a renal disease that may lead to deterioration of renal function and life-threatening urosepsis. This entity should be managed aggressively and effectively; planning ahead for surgery using the different tools available is the cornerstone of a successful outcome. Proper consent is important so that patients understand what to expect after treatment.
2019-07-26T07:23:47.754Z
2019-07-06T00:00:00.000
{ "year": 2019, "sha1": "446923ffab2a745398916b575b96fcb1563d0383", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ajur.2019.07.002", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "099679b2d0f50bfbbb4ec6b890c91263412e7320", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
59519752
pes2o/s2orc
v3-fos-license
Existence and approximation of Hunt processes associated with generalized Dirichlet forms We show that any strictly quasi-regular generalized Dirichlet form that satisfies the mild structural condition D3 is associated to a Hunt process, and that the associated Hunt process can be approximated by a sequence of multivariate Poisson processes. This also gives a new proof for the existence of a Hunt process associated to a strictly quasi-regular generalized Dirichlet form that satisfies SD3 and extends all previous results. Introduction In this note we are concerned with several questions related to probabilistic and analytic potential theory of generalized Dirichlet forms. A particular aim is to find definitive analytic conditions for non-sectorial Dirichlet forms that ensure the existence of an associated Hunt process. The question whether the associated process is a Hunt process is crucial for localizing purposes (see e.g. introduction of Ref. [15]). A fundamental consequence of Theorem 2 in Ref. [15] and Theorem 3.2(ii) in Ref. [9] is that any transient Hunt process M on a metrizable and separable state space is strictly properly associated in the resolvent sense with a strictly quasi-regular generalized Dirichlet form. This is relevant because we can then apply all the fine results from the potential theory of generalized Dirichlet forms w.r.t. the strict capacity (see Ref. [15] for some strict potential theory, and Remark 3.3(iv) in Ref. [9] which applies also to strictly quasi-regular generalized Dirichlet forms and Hunt processes). Moreover, if the state space is only slightly less general, namely (for tightness reasons) a metrizable Lusin space, then by Theorem 2.1 in Ref. [6] the Hunt process can be approximated by multivariate Poisson processes, and the approximation works for all P x , i.e. for all x in the state space. The canonical approximation of the Hunt process by Markov chains is useful as it provides an additional tool for its analysis and for the analysis of the underlying generalized Dirichlet form. Note that the just mentioned line of arguments is not valid for sectorial Dirichlet forms, which underlines a strength of generalized Dirichlet form theory. In fact, for a given arbitrary Hunt process we first do not know how to check whether it is associated to a sectorial Dirichlet form, and second, this is clearly not true in general. Here, we establish the "quasi converse" of the above with nearly no restriction on the state space. We consider two problems, which, due to the method, are in fact solved simultaneously. The first problem is to establish the existence of an associated Hunt process to a strictly quasi-regular generalized Dirichlet form on a general state space, and the second is the approximation of this Hunt process in a canonical way through Markov chains. The second problem goes back to an original idea of S. Ethier and T. Kurtz. In fact, it is shown in Chapter 4.2 in Ref. [1] that for nice state spaces, such as locally compact and separable state spaces, and nice transition semigroups, like Feller ones, the Yosida approximation via multivariate Poisson processes converges for all starting points to a Markov process with the given semigroup. This was generalized in Ref.
[7] where it is shown that the Yosida approximation of the generator, together with some tightness arguments that result from the strict quasi-regularity, leads to the approximation via multivariate Poisson processes of any Hunt process that is associated with a strictly quasi-regular sectorial Dirichlet form. This also led to a new proof for the existence of an associated Hunt process. However, the price for the increased generality is that the approximation only works for strictly quasi-every starting point x of the state space. Since we use the same method we have to pay the same price, and moreover we have to assume the additional structural condition D3, which is however trivially satisfied for any sectorial Dirichlet form (see Proposition 2.1). Nonetheless, since the class of generalized Dirichlet forms is much larger than the class of sectorial Dirichlet forms, our results represent a considerable generalization. In particular, time-dependent processes and processes corresponding to far-reaching perturbations of symmetric (or even sectorial) forms are covered. Besides the canonical approximation scheme through Markov chains we want to emphasize that our main result, Theorem 4.6, is a definitive improvement of Theorem 3 in Ref. [15]. Applying here a quite different method than in Ref. [15], we were able to relax the algebra structure condition SD3 of Ref. [15] to the much weaker linear structure condition D3. Therefore our general analytic conditions for non-sectorial Dirichlet forms to ensure the existence of an associated Hunt process are just D3 and the strict quasi-regularity. The state space is only assumed to be a Hausdorff topological space such that its Borel σ-algebra is generated by the set of continuous functions on the state space. Our result is hence the counterpart of IV. Theorem 2.2 in Ref. [12] for Hunt processes. Finally let us briefly summarize the main contents of this paper. Section 2 contains some preliminaries and the fundamental results. In particular, our way of defining the strict capacity (cf. Definition 2.2) is more explicit than in V.2 of Ref. [4] and Section 2 of Ref. [7], but still equivalent (see Remark 2.4). The strict capacity is defined w.r.t. some reference function ϕ, but it turns out to be, like the ϕ-capacity, independent of that function (see Remark 2.4(ii)). Proposition 2.5 provides a useful new estimate for the strict capacity. A crucial result is the construction of the modified functions e n in Lemma 2.10, in comparison to the functions e n of Lemma 3.5 in Ref. [7]. This makes the difference and allows us to handle the non-sectorial case (see also Remark 2.11 for some related explanations). Lemma 2.10 allows us to obtain the important tightness result of Lemma 4.3. Note that we also correct an inaccuracy that appears in the proofs of the statements corresponding to Lemma 4.3 in both papers Ref. [7] and Ref. [6] (see Remark 4.2), and that we partially improve results from Ref. [7] (see e.g. the paragraph in front of Proposition 2.7). Having developed the fundamental results of potential theory in Section 2, most of the results of Sections 3 and 4 follow by "routine" arguments from Ref. [4], Ref. [1], and Ref. [3], similarly to the line of arguments used in Ref. [7]. For the sake of completeness and convenience of the reader we summarize these results. 2 Strict quasi-regularity, strict capacity, and the construction of R α For notations and notions that might not be defined here we refer to Ref. [15] and references therein.
Throughout the paper let E be a Hausdorff space such that its Borel σ-algebra B(E) is generated by the set C(E) of all continuous functions on E, and let m be a σ-finite measure on (E, B(E)). Let (E, F) be a generalized Dirichlet form with sectorial part (A, V) on H = L 2 (E, m). Let (G α ) α>0 be the L 2 (E, m)-resolvent associated with E, and ( G α ) α>0 be the adjoint of (G α ) α>0 in H. Quasi-regular generalized Dirichlet forms and the conditions D3 and SD3 Given ϕ ∈ L 2 (E, m), ϕ > 0, the capacity Cap ϕ is defined in terms of an increasing sequence of closed subsets of E. By IV. Proposition 2.10 in Ref. [12] the notion of E-nest is independent of the special choice of ϕ. Accordingly to Cap ϕ , E-exceptional sets, E-quasi-continuity, etc., and the quasi-regularity are defined (see Ref. [12]). In contrast to the theory of sectorial Dirichlet forms in Ref. [4] and Ref. [5], it is not known whether quasi-regularity is sufficient for the existence of an associated standard process in the case of a generalized Dirichlet form. Therefore the following condition is introduced in IV.2 in Ref. [12]: D3 There exists a linear subspace Y ⊂ H ∩ L ∞ (E, m) such that Y ∩ F is dense in F, lim α→∞ αG α u = u in H for all u ∈ Y, and for the closure Y of Y in L ∞ (E; m) it follows that u ∧ α ∈ Y for u ∈ Y and α ≥ 0. It is shown in IV. Theorem 2.2 of Ref. [12] that a quasi-regular generalized Dirichlet form satisfying D3 is associated with an m-tight special standard process. By an algebra of functions we understand a linear space that is closed under multiplication. The following condition, SD3, stating that there exists an algebra of functions with the corresponding properties, was introduced in Ref. [15]. We have the following: Proposition 2.1 It holds: (ii) SD3 holds for any (sectorial semi-)Dirichlet form. Proof (i) The proof is standard; cf. e.g. proof of IV. Proposition 2.1 in Ref. [12]. Strict capacities and strictly quasi-regular generalized Dirichlet forms The following Definition 2.2 is a notational simplification of Definition 1 of Ref. [15]; in it, e U := lim k→∞ (1 ∧ G 1 (kϕ)) U exists as a bounded and increasing limit. By Theorem 1 of Ref. [15], Cap 1, G 1 ϕ is a finite Choquet capacity. A priori the function e U depends on the chosen ϕ, but in the next lemma (Lemma 2.3) we will see that this is actually not the case; in its statement P denotes the 1-excessive elements of V. Proof (i) Clearly "≤" holds in the statement. For f ∈ L 2 (E; m) it is not difficult to see that equality holds whenever f > 0 m-a.e. Indeed, since m is σ-finite and ( G α ) α>0 is positivity preserving by I. Remark 4.2 of Ref. [12], we can easily show by I. Proposition 3.4 in Ref. [12] that if A ∈ B(E) and ∫ A G 1 f dm = 0, then m(A) = 0. Thus sup k≥1 1 ∧ G 1 (kϕ) = 1 m-a.e. But then we have by (9) of Ref. [14] the corresponding bound for sup u∈P F , u≤1 (…). On the other hand, since obviously (…), this contradicts (1). Remark 2.4 (i) Let the strict capacity be defined as in V.2 of Ref. [4]. Then, in the sectorial case, i.e. when (E, F) is a Dirichlet form, we have F = V and P F = P. Thus the strict capacity of Ref. [4] coincides with Cap 1, G 1 ϕ by Lemma 2.3 and Definition V.2.1 in [4]. Moreover, the function e U defined in Definition 2.2 is an explicit realization of the function e U of Lemma 2.2 in Ref. [4]. (ii) It follows immediately from Lemma 2.3 that the strict E-nests, and hence the strict notions, do not depend on the special choice of ϕ. Adjoining the cemetery ∆ to E we let E ∆ := E ∪ {∆}. We will consider different topologies on E ∆ . If E is a locally compact separable metric space but not compact, E ∆ will be the one point compactification of E, i.e.
the open sets of E ∆ are the open sets of E together with the sets of the form E ∆ \ K, K ⊂ E, K compact in E. Otherwise we adjoin the cemetery ∆ to E as an isolated point. We extend m to (E ∆ , B(E ∆ )) by setting m({∆}) = 0. Any real-valued function u on E is extended to E ∆ by setting u(∆) = 0. Given an increasing sequence (F k ) k≥1 of closed subsets of E, we define the strict capacity as on page 360 of Ref. [15]. Accordingly to Cap 1, G 1 ϕ , the notions strictly E-exceptional (s.E-exceptional), strict E-nest (s.E-nest), strictly E-quasi-everywhere (s.E-q.e.), strictly E-quasi-continuous (s.E-q.c.), and strictly E-quasi-lower-semicontinuous (s.E-q.l.s.c.) are defined (see Ref. [15]). We observe that Proposition 2(i) in Ref. [15] can be generalized as follows: Proposition 2.5 Let u ∈ H with s.E-q.l.s.c. m-version u and suppose further that e u exists. Then for any ε > 0 the corresponding estimate holds. Proof We have for any α > 0 the corresponding identity; the result now easily follows. From now on we fix a generalized Dirichlet form (E, F) that is strictly quasi-regular (see Definition 2 in Ref. [15]). Using Proposition 2.5 it is not difficult to see that strict versions of statements in Ref. [12] hold, as stated in the following Lemma 2.6. However, we remark that the strict quasi-regularity in Lemma 2.6 is only used to ensure the existence of a strict E-nest of compact metrizable sets for the proof of (ii) (cf. III. Proposition 3.2 in Ref. [12]). Lemma 2.6 (i) Let S be a countable family of s.E-q.c. functions (resp. s.E-q.l.s.c. functions). Then there exists a s.E-nest on which all elements of S are continuous (resp. lower semicontinuous). (ii) If f is s.E-q.l.s.c. and f ≤ 0 m-a.e. on an open set U ⊂ E, then f ≤ 0 s.E-q.e. on U. If f, g are s.E-q.c. and f = g m-a.e. on an open set U ⊂ E, then f = g s.E-q.e. on U. (iii) Let u n ∈ H with s.E-q.c. m-version u n , n ≥ 1, such that e un−u + e u−un → 0 in H as n → ∞ for some u ∈ H. Then there is a subsequence ( u n k ) k≥1 and a s.E-q.c. m-version u of u such that lim k→∞ u n k = u s.E-quasi-uniformly. (iv) Let u n ∈ F with s.E-q.c. m-version u n , n ≥ 1, and u n → u in F. Then there is a subsequence ( u n k ) k≥1 and a s.E-q.c. m-version u of u such that lim k→∞ u n k = u s.E-quasi-uniformly. By strict quasi-regularity one can find a strict E-nest of compact metrizable sets (E k ) k≥1 as in IV. Lemma 1.10 in Ref. [12] (see also Ref. [4]). Let Y 1 := ∪ k≥1 E k . By Lemma 2 in Ref. [15] we know that for any α > 0 there exists a kernel R α from (E, B(E)) to (Y 1 , B(Y 1 )) satisfying (R1) and (R2). Moreover, the kernel R α is unique in the sense that, if K is another kernel from (E, B(E)) to (Y 1 , B(Y 1 )) satisfying (R1) and (R2), it follows that K(z, ·) = R α (z, ·) s.E-q.e. Even when applied to the sectorial case, the next Proposition 2.7(ii) is an improvement over Lemma 3.4 in Ref. [7], since we can choose the function h of Lemma 3.4 in Ref. [7] to be in the domain of the infinitesimal generator. Note also that Proposition 2.7(ii) is an existence statement and is not stated for an arbitrary ϕ with the given properties. Proof (i) Using Lemma 2.6(iv) the proof is the same as in IV. Proposition 1.9 in Ref. [12]. Since { R 1 u + n , R 1 u − n ; n ≥ 1} separates the points of E ∆ \ N we have h(x) > 0 for all x ∈ E \ N. Since g k := Σ k n=1 c n (u + n + u − n ) converges in L 1 (E, m) to some g with 0 ≤ g ≤ 1 and R 1 is a kernel, we obtain h = R 1 g. Now choose ρ ∈ L 1 (E, m) with 0 < ρ ≤ 1. Then ϕ := ρ ∨ g is the desired function. From now on we assume that the strictly quasi-regular generalized Dirichlet form E satisfies D3. Using Lemma 2.6, Proposition 2.7, and the strict version of IV. Proposition 2.8 in Ref.
[12] we obtain the following: Lemma 2.8 There exists a countable family J 0 of bounded strictly E-quasi-continuous 1-excessive functions and a Borel set Y ⊂ Y 1 with the required properties; set J as in (3). Since J 0 separates the points of Y ∆ , so does J. The following lemma is also clear. Lemma 2.9 Let (R α ) α∈Q * + and J be as in (2), (3). Then the statements of Lemma 2.8 remain true with J 0 , Y and R α replaced by J, Y ∆ and R α respectively. The construction of nice excessive functions Since strict quasi-regularity implies quasi-regularity by Proposition 2(ii) in Ref. [15], and since D3 is in force, we obtain by IV. Theorem 2.2 in Ref. [12] that (E, F) is associated with some m-tight m-special standard process. We denote the process resolvent by (V α ) α>0 . By Remark 2.4(ii) the strict capacity does not depend on the special choice of ϕ. We may and will hence from now on assume that ϕ is as in Proposition 2.7(ii). The following two lemmas are crucial for the later study of weak limits. Lemma 2.10 Let U n ⊂ E, n ≥ 1, be a decreasing sequence of open sets such that Cap 1, G 1 ϕ (U n ) → 0 as n → ∞. Then we can find m-versions e n of e Un such that: (i) e n ≥ 1 E-q.e. on U n , n ≥ 1. In particular, there are E-exceptional sets N n ∈ B(E), N n ⊂ U n , such that e n (x) := e n (x) + 1 Nn (x) ≥ 1 ∀x ∈ U n , n ≥ 1. Proof Define e n for n ≥ 1 in terms of (1 ∧ V 1 (lϕ)) Un , where (1 ∧ V 1 (lϕ)) Un is some bounded measurable m-version of (1 ∧ G 1 (lϕ)) Un . Clearly e n is an m-version of e Un . Since (1 ∧ G 1 (lϕ)) Un is 1-excessive, and R α+1 f is s.E-q.c. for any (measurable) f ∈ H by (R2), by Lemma 2.6(ii) it is clear that the first part of (ii) holds. The second part of (ii) similarly also holds once we have shown that N n is E-exceptional, hence in particular m-negligible. This is done at the end of the proof. Obviously e n is s.E-q.l.s.c., s.E-q.e. decreasing in n, lim n→∞ e n exists s.E-q.e. and lim n→∞ e n ≥ 0 s.E-q.e. Then, since Cap 1, G 1 ϕ is a Choquet capacity, we obtain with Proposition 2.5 that the first part of (iii) holds. The second part of (iii) is clear since lim sup n→∞ 1 Nn ≤ 1 ∩ n≥1 Un = 0 s.E-q.e. By right-continuity and normality of the process Y we obtain for all z ∈ U n that lim α→∞ αV α+1 I Un (z) = 1. Hence the first part of (i) holds. For the second part of (i) we can find E-exceptional sets N n ∈ B(E), N n ⊂ U n , with e n · 1 Un\Nn + 1 Nn ≥ 1 pointwise on U n . But e n ≥ 0 everywhere since R α+1 is a kernel, and so we obtain e n ≥ 1 on U n as desired. Remark 2.11 In Lemma 2.10(i) we were not able to show directly e n ≥ 1 s.E-q.e. on U n , n ≥ 1. (Unfortunately, (1 ∧ V 1 (lϕ)) Un has only a s.E-q.l.s.c. m-version in general and the inequality in Lemma 2.6(ii) is just the wrong way around.) Inequality (6) is used in Lemma 3.5, Lemma 3.6, and the proof of Theorem 3.3 in Ref. [7] in an essential way. Instead, we will use the functions e n defined in Lemma 2.10(i), which is sufficient (cf. Lemmas 2.12 and 4.1, and Theorem 4.4). We remark that it is even sufficient to only know that e n ≥ 1 m-a.e. on U n , so that the sets N n in Lemma 2.10(i) are only m-negligible. It will turn out a posteriori that (6) actually holds. In fact, by our main result Theorem 4.6 below it follows that the process resolvent V α+1 f , f ∈ H ∩ L ∞ (E, m), is s.E-q.c. Thus applying Lemma 2.6(ii), V α+1 f = R α+1 f s.E-q.e. Therefore (5) holds s.E-q.e. and (6) follows.
Lemma 2.12 In the situation of Lemma 2.10 there exists S ∈ B(E), S ⊂ Y, such that E \ S is strictly E-exceptional and the following holds: (ii) e n (x) ≥ 1 for x ∈ U n , n ≥ 1, and R α 1 Nn (x) = 0 ∀x ∈ S, α ∈ Q * + , n ≥ 1. Proof The first assertion of (ii) holds by definition in Lemma 2.10(i). By Lemma 2.10, (R2), and Lemma 2.6(ii), the rest of the proof works as in IV.3.11 of Ref. [4]. 3 The approximating forms E (β) and the approximating processes X β Let J, Y ∆ and (R α ) α∈Q * + be as in Lemma 2.9. First, we collect some results of Chapter 4, Section 2 of Ref. [1]. For a fixed β ∈ Q * + , let {Y β (k), k = 0, 1, . . .} be a Markov chain in Y ∆ with initial distribution ν and transition function βR β . Let further (Π β t ) t≥0 be a Poisson process with parameter β and independent of {Y β (k), k = 0, 1, . . .}. Then it is known that X β t := Y β (Π β t ), t ≥ 0, is a strong Markov process in Y ∆ whose transition semigroup (P β t ) t≥0 is given by (7); i.e., (8) holds for all t. Here (8) easily follows from (2.14) of Chapter 4 in Ref. [1]. Furthermore, from the formula (7) one can see that (P β t ) t≥0 is a strongly continuous contraction semigroup on the Banach space of bounded measurable functions on Y ∆ with the sup norm. The corresponding generator is L β = β(βR β − 1). Define the forms E (β) , β > 0, by E (β) (u, v) := β(u − βG β u, v) H , where we recall that (G β ) β>0 is the L 2 -resolvent of E. It is known (see e.g. Chapter I in Ref. [4]) that the C 0 -semigroup of submarkovian contractions on L 2 (E; m) that is associated to E (β) is given by T β t = e tβ(βG β −1) . From (7), (8), and (10) it follows that (X β t ) is associated with E (β) . Since R β f is an m-version of G β f for any measurable f ∈ H, by I. Examples 4.9(ii) in Ref. [12] we see that E (β) is a generalized Dirichlet form. For an arbitrary subset M ⊂ E ∆ let Ω M := D M [0, ∞) be the space of all càdlàg functions from [0, ∞) to M. Let (X t ) t≥0 be the coordinate process on Ω E ∆ , i.e. X t (ω) = ω(t) for ω ∈ Ω E ∆ . Ω E ∆ is equipped with the Skorokhod topology (see Chapter 3 in Ref. [1]). Let P β x be the law of X β on Ω E ∆ with initial distribution δ x if x ∈ Y ∆ , and if x ∈ E ∆ \ Y ∆ let P β x be the Dirac measure on Ω E ∆ such that P β x [X t = x for all t ≥ 0] = 1. Finally, let (F β t ) t≥0 be the completion w.r.t. (P β x ) x∈E ∆ of the natural filtration of (X t ) t≥0 . Then M β := (Ω E ∆ , (X t ) t≥0 , (F β t ) t≥0 , (P β x ) x∈E ∆ ) is a Hunt process associated with E (β) , i.e. for all t ≥ 0 and any m-version of u ∈ L 2 (E; m), x → ∫ u(X t ) dP β x is an m-version of T β t u. Proof By construction M β is a right process that has left limits in E ∆ . The quasi-left continuity up to ∞ can be shown by a routine argument following Ref. [3] (cf. IV.3.21 in Ref. [4]). Let J = {u n | n ∈ N} and g n := R 1 u n , n ∈ N. By Lemma 2.8(ii) and Lemma 2.9, {g n | n ∈ N} separates the points of Y ∆ , and hence ρ(x, y) := Σ n∈N 2 −n (|g n (x) − g n (y)| ∧ 1) defines a metric on Y ∆ . We may assume that Y ∆ is a Lusin topological space (cf. IV. Remark 3.2(iii) in Ref. [4]). It follows by Lemma 18 on p. 108 of Ref. [11] that B(Y ∆ ) = σ(g n | n ∈ N) = (ρ-)B(Y ∆ ). The ρ-completion (Ē, ρ) of Y ∆ is a compact metric space by Tychonoff's theorem. We extend the kernel (R α ) α∈Q * + to the space Ē by setting, for α ∈ Q * + and A ∈ B(Ē), R α (x, A) := R α (x, A ∩ Y ∆ ) for x ∈ Y ∆ and R α (x, A) := α −1 δ x (A) otherwise. We may regard (X β t ) t≥0 as a càdlàg process with state space Ē and use the same notation as before: P β x hence denotes the law of (X β t ) t≥0 in Ω Ē with initial distribution δ x . Each g n is ρ-uniformly continuous and therefore extends uniquely to a continuous function on Ē which we denote again by g n . For the convenience of the reader we include the proof of the following theorem, which as we feel is slightly more transparent than the corresponding proof of Theorem 3.2 in Ref. [7].
Proof We first show that the assumptions of 9.4 Theorem of Chapter 3 in Ref. [1] are fulfilled with C a = C(Ē) (where C a is as in the just mentioned theorem of Ref. [1]). Since g n ∈ D(L β ) it follows that g n (X t ) − ∫ 0 t L β g n (X s ) ds is a (P β x , (F β t ) t≥0 )-martingale for any x ∈ Ē. Since L β g n = 1 Y ∆ βR β (g n − u n ) we have for all n ∈ N that sup β∈Q * + ‖L β g n ‖ ∞ < ∞. So, we proved that R 1 J := {g n | n ∈ N} ⊂ D, where D ⊂ C(Ē) is the linear space from the theorem in Ref. [1]. By Dini's theorem, every u ∈ J has a unique (ρ-uniformly) continuous extension to Ē that is again denoted by u. Thus we may and do consider J as a subset of C(Ē). In particular, if Ā ‖·‖ ∞ denotes the uniform closure of A ⊂ C(Ē), it suffices to show that D̄ ‖·‖ ∞ = C(Ē). Since J − J contains the constant functions, is inf-stable and separates the points of Ē, we obtain that J − J is dense in C(Ē) by the Stone-Weierstraß theorem. Hence D̄ ‖·‖ ∞ = C(Ē), and so by the theorem of Ref. [1] {f ∘ X β | β ∈ Q * + } is relatively compact for all f ∈ C(Ē). Since Ē is compact, the compact containment condition trivially holds, and so by 9.1 Theorem of Chapter 3 in Ref. [1] {X β | β ∈ Q * + } is relatively compact as desired. 4 Limiting process associated with the strictly quasi-regular generalized Dirichlet form Theorem 4.4 There exists a Borel subset Z ⊂ Y and a Borel subset Ω ⊂ Ω E with the following properties: (iii) If ω ∈ Ω, then ω t , ω t− ∈ Z ∆ for all t ≥ 0. Moreover, each ω ∈ Ω is càdlàg in the original topology of Y ∆ and ω 0 t− = ω t− for all t > 0, where ω 0 t− denotes the left limit in the original topology. (iv) If x ∈ Z ∆ and P x is a weak limit of some sequence (P β j x ) j∈N with β j ∈ Q * + , β j ↑ ∞, then P x [Ω] = 1. Since the identities (8) and (9) carry over to B b (E), it is straightforward to check that the resolvent of the Yosida approximation has the following form; for the explicit calculations we refer to Lemma 4.1 in Ref. [7]. It is equally straightforward to check (see Lemma 4.2 in Ref. [7]) that if P x , x ∈ E, is a weak limit of a subsequence (P β j x ) j≥1 with β j ↑ ∞, β j ∈ Q * + , then the kernel P t f (x) := E x [f (X t )] := ∫ Ω f (X t (ω)) P x (dω), f ∈ B b (E), satisfies ∫ 0 ∞ e −αt P t f (x) dt = R α f (x), ∀f ∈ B b (E), α ∈ Q * + . In particular, the kernels P t , t ≥ 0, are independent of the subsequence (P β j x ) j≥1 . Then for every x ∈ Z ∆ (Z as in Theorem 4.4) the relatively compact set {P β x | β ∈ Q * + } has a unique limit P x for β ↑ ∞, and the process (Ω E , (X t ) t≥0 , (P x ) x∈Z ∆ ) is a Markov process with the transition semigroup (P t ) t≥0 determined by (13). Moreover, P x [X t ∈ Z ∆ , X t− ∈ Z ∆ for all t ≥ 0] = 1 (14) for all x ∈ Z ∆ . The proof of this is again the same as in Theorem 4.3 of Ref. [7]. From now on let (P x ) x∈Z ∆ be as in (14), and let Ω and Z ∆ be as specified by Theorem 4.4. Since P x [Ω] = 1 for all x ∈ Z ∆ , we may restrict P x and the coordinate process (X t ) t≥0 to Ω. Let (F t ) t≥0 be the natural filtration of (X t ) t≥0 . Then exactly as in Theorem 4.4 of Ref. [7] one shows that M Z := (Ω, (X t ) t≥0 , (F t ) t≥0 , (P x ) x∈Z ∆ ) is a Hunt process with respect to both the ρ-topology and the original topology. Extending M Z trivially to the whole state space (cf. (2.18) in Ref. [12]), the resulting process M is again a Hunt process and strictly properly associated in the resolvent sense with (E, F) by (R2) and (13). The Hunt process M is unique up to the equivalence described in IV.6.3 of Ref. [4]. In this sense M is the same process as the one constructed in Theorem 3 of Ref. [15] under the condition SD3.
Theorem 4.6 Let E be a strictly quasi-regular generalized Dirichlet form satisfying D3. Then there exists a strictly m-tight Hunt process which is strictly properly associated in the resolvent sense with E.
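To make the approximation scheme concrete: the construction rests on the classical Yosida approximation (textbook material, cf. Ref. [1]; not a new result of this paper), whose bounded generator and semigroup read

```latex
% Yosida approximation: bounded generator and its semigroup; the latter is
% the transition semigroup of a Markov chain with kernel beta*G_beta
% evaluated along an independent Poisson(beta) clock.
\[
  L_\beta := \beta(\beta G_\beta - 1), \qquad
  e^{tL_\beta} = e^{-\beta t}\sum_{n\ge 0}\frac{(\beta t)^n}{n!}\,(\beta G_\beta)^n .
\]
```

The probabilistic counterpart X β t = Y β (Π β t ) of Section 3 can be simulated directly. The sketch below does this for a toy base process, a rate-1 simple random walk on Z; the toy model and all parameter choices are ours, purely for illustration. One step of the chain Y β is drawn from βR β (x, ·), i.e. the base process run for an independent Exp(β) time.

```python
# Sketch: simulate X^beta_t = Y^beta(Pi^beta_t) for a toy base process,
# a rate-1 simple random walk on Z. Illustration only.
import random

def step_beta_resolvent(x: int, beta: float) -> int:
    """Sample from beta*R_beta(x, .): run the rate-1 walk for Exp(beta) time."""
    horizon = random.expovariate(beta)
    s = random.expovariate(1.0)          # waiting time of the next jump
    while s < horizon:
        x += random.choice((-1, 1))
        s += random.expovariate(1.0)
    return x

def x_beta(t: float, beta: float, x0: int = 0) -> int:
    """X^beta_t = Y^beta(Pi^beta_t): chain evaluated at a Poisson(beta) clock."""
    k, s = 0, random.expovariate(beta)   # count Poisson(beta) events in [0, t]
    while s < t:
        k += 1
        s += random.expovariate(beta)
    x = x0
    for _ in range(k):
        x = step_beta_resolvent(x, beta)
    return x

# For this toy model the time-1 marginal already has variance ~1 for every
# beta; the content of the approximation theorem is convergence of paths.
for beta in (1.0, 10.0, 100.0):
    samples = [x_beta(1.0, beta) for _ in range(2000)]
    var = sum(v * v for v in samples) / len(samples)
    print(f"beta={beta:6.1f}: empirical Var[X_1] ~ {var:.2f} (target 1.0)")
```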
2011-11-05T07:56:54.000Z
2011-03-16T00:00:00.000
{ "year": 2011, "sha1": "6649a17482bd3e4fd26f1e4f9f44c45f8d220648", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1103.3126", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "6649a17482bd3e4fd26f1e4f9f44c45f8d220648", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
242412583
pes2o/s2orc
v3-fos-license
Systematic review and meta-analysis of perioperative complications of hand-sewn esophagojejunostomy used for laparoscopic total gastrectomy Background: We examine the perioperative complications of hand-sewn esophagojejunostomy (EJ) methods used for totally laparoscopic total gastrectomy (TLTG) for the treatment of gastric cancer. Methods: We reviewed PUBMED, EMBASE and the Cochrane Central Register for studies published from May 1998 to May 2018 to evaluate the perioperative complications of hand-sewn esophagojejunostomy applied for TLTG. Five studies were found to meet the inclusion criteria for our meta-analysis. After data extraction and quality assessment, we used Stata 12 to pool the data. Results: Five studies involving 234 patients were considered in our meta-analysis. The pooled data show an anastomotic leakage value of 1% (95% CI 0 to 4%), an anastomotic stricture value of 1% (95% CI 0 to 3%), a conversion value of 0 and a postoperative bleeding value of 2% (95% CI 0 to 6%). Conclusions: TLTG involving intracorporeal hand-sewn end-to-side esophagojejunostomy serves as a safe approach to the treatment of gastric cancer. As this method is adopted by professionals, intracorporeally hand-sewn EJ could become an accepted means of executing widely used laparoscopic procedures of EJ. Background Each year, 990,000 people are diagnosed with gastric cancer worldwide, and 738,000 ultimately die from the disease [1], rendering it the fourth most common form of cancer and the second most common cause of cancer death [2]. The first laparoscopic distal gastrectomy was performed in 1991 [3]. Laparoscopic gastrectomy for the treatment of early gastric cancer has been used widely, as it is less invasive than open surgery treatments. Laparoscopic gastrectomies present unique advantages, causing only mild postoperative pain and allowing for rapid recovery of normal bowel functions. However, totally laparoscopic total gastrectomy (TLTG) treatments for gastric cancer have only been performed on a few occasions. Randomized controlled trials and meta-analyses have confirmed that laparoscopic gastrectomies support better postoperative outcomes than those of open surgery, causing less intraoperative blood loss and postoperative pain, requiring a shorter hospital stay, and resulting in decreased levels of morbidity with comparable oncological results [4][5][6]. The creation of an esophagojejunostomy anastomosis can be technically demanding, limiting the feasibility of the TLTG approach. The creation of an esophagojejunal anastomosis (EJ) after total gastrectomy can be technically difficult, and reconstruction complications such as anastomotic leakages and strictures account for a significant proportion of postoperative morbidity values. Several reports, including systematic reviews, meta-analyses, and retrospective comparative studies, have compared the EJ tools of the TLTG process. Shim et al. [7] reported on four forms of EJ anastomosis applied after TLTG, none of which has been established as a standard EJ anastomosis. Additionally, Zheng et al. [8] performed a meta-analysis comparing intracorporeal and extracorporeal esophagojejunostomy methods. They conclude that an intracorporeal EJ can ensure the same clinical outcomes as an extracorporeal EJ [8]. However, no studies have yet compared different EJ procedures. The aim of this study was to evaluate the perioperative complications related to hand-sewn esophagojejunostomy used for laparoscopic total gastrectomy.
To evaluate the safety of hand-sewn esophagojejunostomy methods, we performed a meta-analysis. Search strategy Our meta-analysis was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. We searched PUBMED, EMBASE and the Cochrane Central Register for relevant studies published in English from May 1998 to May 2018. We used the following search terms: "laparoscopic," "total gastrectomy," "gastric cancer," "esophagojejunostomy," "hand-sewn esophagojejunostomy" and "single arm trial." Then, all titles, abstracts, or related citations were scanned and reviewed. We also used the combined Boolean operators "AND" or "OR" in the Title/Abstract search field. Inclusion and exclusion criteria Two investigators reviewed the articles. The following inclusion criteria were used: articles covering (1) gastric cancer treated via hand-sewn TLTG; (2) single arm trials; and (3) anastomotic leakages, intraluminal bleeding, anastomotic strictures, and open conversion. The following exclusion criteria were applied: (1) case reports, reviews, editorial comments, meeting abstracts and articles without applicable data; (2) studies with insufficient data such as missing values; and (3) comparative studies. We identified relevant studies as illustrated in Figure 1. Outcome Two authors reviewed the relevant studies. Disagreements were resolved by discussion to reach a consensus. The two authors extracted data on anastomotic leakage, intraluminal bleeding, anastomotic stricture, and open conversion. Baseline comparative data, data on clinical outcomes, and data on postoperative complications were also recorded. Table 1 summarizes the baseline characteristics and assessments used. Statistical analysis We used Stata 12.0 to perform an analysis of the data. We used the Q and I 2 (ranging from 0 to 100%) statistics to evaluate levels of heterogeneity: I 2 < 50% and P > 0.1 denote no significant heterogeneity, in which case the fixed-effects model was used; when I 2 > 50% and P < 0.1, denoting significant heterogeneity, the random-effects model was applied. We used the ES (estimate) and 95% CI to evaluate binary data. The level of statistical significance was set to 0.05. Results Five studies were considered [9][10][11][12][13]. We obtained these studies as depicted in Figure 1. From the selected databases, 69 studies were obtained. After the screening of titles and abstracts, 29 studies were excluded. After further processing, 24 studies were excluded. Finally, five studies were included in our meta-analysis. Table 2 presents the quality assessment of the included studies. All five studies report on cases of anastomotic leakage. Pooled data on anastomotic leakage account for 1% (n = 243, I 2 = 23.05%, p = 0.27, 95% CI 0 to 4%, fixed-effects model, Figure 2). Data on conversion were available in two studies. The pooled data on conversion account for 0 (n = 136, I 2 = 0, p = 0.74, fixed-effects model, Figure 4). Discussion We considered four clinical studies involving a short-term follow-up period. This meta-analysis was performed to evaluate the hand-sewn EJ approach to TLTG. The present study is the first meta-analysis to evaluate perioperative complications of hand-sewn EJ for TLTG. Our meta-analysis shows that patients with upper gastric cancer present comparable basic characteristics. Our study compares short-term follow-up outcomes of patients who have received TLTG. These patients share comparable baseline characteristics.
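The pooling described under "Statistical analysis" — a fixed-effect estimate of a proportion together with the Q and I 2 heterogeneity statistics — can be sketched in a few lines. The counts below are hypothetical and are not the extracted study data:

```python
# Illustrative sketch: fixed-effect inverse-variance pooling of event
# proportions with Cochran's Q and I^2, as described in the text.
import math

data = [(1, 56), (2, 51), (0, 15), (1, 100), (0, 12)]  # hypothetical (events, n)

ests = []
for e, n in data:
    # 0.5 continuity correction for zero-event studies
    e_adj, n_adj = (e + 0.5, n + 1.0) if e == 0 else (float(e), float(n))
    p = e_adj / n_adj
    ests.append((p, p * (1 - p) / n_adj))   # (proportion, variance)

w = [1 / v for _, v in ests]                # inverse-variance weights
pooled = sum(wi * p for (p, _), wi in zip(ests, w)) / sum(w)
se = math.sqrt(1 / sum(w))

q = sum(wi * (p - pooled) ** 2 for (p, _), wi in zip(ests, w))  # Cochran's Q
df = len(ests) - 1
i2 = max(0.0, (q - df) / q) if q > 0 else 0.0                   # I^2 statistic

print(f"pooled proportion = {pooled:.3f} "
      f"(95% CI {max(0.0, pooled - 1.96*se):.3f} to {pooled + 1.96*se:.3f}), "
      f"I^2 = {100*i2:.1f}%")
```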
The postoperative complication rate was found to be high relative to the sample size of the considered group (42.8% for leakage and 7% for stenosis) [7]. Another EJ approach described involves using a circular stapler mimicking that of the commonly applied open EJ [14], which involves extending the size of a laparoscopic port to accommodate the stapler. Potential risks of jejunal limb rotation or latero-lateral EJ twisting related to using a linear stapler can increase the levels of tension acting on the upper portion of the anastomosis, complicating the treatment of mediastinal leakage. Bracale et al. [9] conducted a multicenter study in which four patients experienced anastomotic leakage, accounting for approximately 6% of patients. This may be associated with the surgical side-to-side anastomosis approach. The three different centers also presented different surgical results. Puntambekar et al. [11] reported an anastomotic leakage value of 0, consistent with that found in our study. Additionally, Norero et al. [10] reported on a single center study involving 51 patients with gastric cancer who had received TLTG. The surgical process applied involved hand-sewn esophagojejunostomy. Two cases of anastomotic leakage resulted (0.36%). A two-layered anastomosis was introduced using the hand-sewn EJ technique during surgery. Compared with a one-layered suture, a two-layered suture can mitigate anastomosis tension to prevent leakage. A hand-sewn EJ anastomosis also allows the surgeon to abandon the purse-string suture, reducing operation time [10]. The different anastomosis techniques considered may have increased the heterogeneity of the present meta-analysis, limiting its accuracy. Xu et al. [13] performed a single center study of 100 patients with gastric cancer who had received TLTG. Of the 100 patients, one patient experienced anastomotic leakage (1%). They considered the tightly clipped Endo Bulldog clamp, and the resulting potential ischemia of the EJ anastomosis, as a possible cause. Cases of anastomotic leakage were found to be serious. Several studies have shown that cases of anastomotic leakage account for 0 to 7.6%, while cases of anastomotic stricture account for 0 to 4.8%; these results are consistent with those of our study. Morimoto et al. [15] performed a study of 77 patients and found that anastomotic leakage occurs in 2.6% of cases. Several studies show that intracorporeal EJ involves applying circular-stapled methods such as side-to-side, functional end-to-end, and end-to-side methods [14,[16][17][18][19][20]. Several EJ methods have been proven safe and feasible to use [13,[15][16][17][18][19]. Several factors render laparoscopic EJs difficult to perform. An EJ anastomosis is positioned deep in the upper abdomen, restricted by the diaphragmatic crura, which can complicate the anastomosis. Additionally, the anastomotic esophagus naturally retracts to the mediastinum, further complicating EJ anastomosis procedures. SoKo et al. [12] reported that the extensive mobilization of the distal esophagus reduces anastomotic tension. They reported a lower incidence of anastomotic leakage, namely zero cases. This may be related to the low levels of anastomotic tension achieved through mobilization of the distal esophagus. A limited hiatal field does not provide enough space for EJ anastomosis execution. Patients have different BMIs, and in particular, finding an adequate anastomotic location is challenging in obese patients.
Some centers have adopted an extracorporeal EJ anastomosis with mini-laparotomy to ensure better anastomotic outcomes, particularly for overweight patients. This technique involves the extensive dissection of the distal esophagus and crura, which can increase the risks of bleeding and of further hiatal hernia development. Additionally, Norero et al. [10] found that a hand-sewn EJ anastomosis involves less dissection of the crura and distal esophagus and allows for EJ placement at or below the crura. Furthermore, the use of circular staplers presents additional challenges, including technical difficulties associated with the positioning of the stapler anvil. Systems of transoral anvil delivery have been developed, but these can damage the esophageal mucosa and can cause bacterial infections of the abdominal cavity [21]. The anastomotic stricture rate is 1% (95% CI 0 to 3%). Xu et al. [13] conducted a study of 100 patients with a stricture value of 0. Liu et al. [22] also performed intracorporeal circular stapled esophagojejunostomy using a conventional purse-string suture instrument after laparoscopic total gastrectomy. Norero et al. [10] found a higher EJ stenosis value (3.9%) than we did in our study. The use of linear staplers for EJ purposes has been examined by Chen et al. and Inaba et al. During EJ anastomosis procedures, they found that latero-lateral EJ movement made using a linear stapler can increase tension levels, leading to anastomotic leakage complications. Bracale et al. [9] conducted a multicenter study of 56 patients and found an anastomotic stricture value of 3%, echoing the literature showing values of 3 to 10% resulting from circular stapler use [23][24][25]. The same authors also reported on a form of anastomosis similar to that described above but involving an isoperistaltic jejunal loop. When applying this technique, the rate of anastomotic leakage observed (6%) does not differ considerably from that described in the literature. Higher rates of 12.8% have been reported from mixed surveys (stapler and manual approaches), and rates of 3-5% have been reported from surgeries performed with a circular stapler. From eight total laparoscopic gastrectomies, Huscher et al. [4] did not observe leakage or stenosis resulting from the use of side-to-side EJ methods. Regarding intraluminal bleeding, several studies have reported similar, higher or lower values relative to those of our study. This may be related to varying levels of familiarity with suturing tasks among professionals. Our study presents several limitations. First, the considered studies are not RCTs, and only a small sample of studies was considered, which may have reduced the quality of our results. Additionally, the considered studies present signs of selection bias, are not based on quantitative data, and do not focus on survival outcomes. Conclusions TLTG methods involving the use of intracorporeal hand-sewn esophagojejunostomy are safe methods for the treatment of gastric cancer. At present, the reported anastomosis method, and the TLTG approach more generally, should only be applied in high-volume laparoscopic surgical centers.
2019-10-31T09:11:18.055Z
2019-10-23T00:00:00.000
{ "year": 2019, "sha1": "a71f7a83767e0cfa40273b31ab20e0e7bcd08071", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-7026/v1.pdf?c=1585614907000", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "c0adeb1b01e7a86712105980908f676103dfdd58", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
145747048
pes2o/s2orc
v3-fos-license
Restraint and Seclusion: Can They Become Obsolete Practices? Facilities that treat individuals with mental illness strive to offer safe environments which are conducive to treatment and foster human potential. Environments that utilize restraint and seclusion (R/S) as a treatment option may engender fear, and for individuals who have histories of traumatic victimization, can trigger a recapitulation of traumatic experiences, thereby exacerbating symptoms of Post-Traumatic Stress Disorder (PTSD) or other mental illness. 1 These types of flashback episodes can be counterproductive to treatment. Incidents which lead to R/S often involve violent and harmful encounters between patients and staff, eliciting distress in both. Many patients in psychiatric settings report that R/S is among the most harmful and traumatic events that they have experienced. 1 Using R/S is controversial, and has ethical implications. In 1998, the Hartford Courant's series on deaths associated with R/S reported 142 deaths in the United States from these techniques from 1988 to 1998. 2 Psychiatric hospitals have been using R/S for centuries to control people with disturbed or violent behaviors. 3 Many facilities use R/S as a "measure of last resort." The International Society of Psychiatric-Mental Health Care Nurses (ISPN) indicates that R/S is "an emergency clinical intervention employed only as a last effort when less restrictive alternatives have failed to ensure safety for patients, staff and families." 4 Restraint is any method, physical or chemical, that restricts one's freedom of movement or access to one's body. Seclusion is the process of confining an individual to a room and physically preventing them from leaving for any period of time. Proponents of R/S reduction initiatives have stated that the most important aspect of these efforts is culture change. 3,5 SAMHSA Administrator Charles G. Curie suggests that "success begins with a change in culture, from one of power to empowerment, from coercion, to caring and from hopelessness to hope." To initiate a culture change "leadership at the top is essential…" In addition, he suggests that efforts to reduce R/S may benefit from staff training, better data collection and dissemination, and resolving to use R/S only when "the potential exists for imminent physical danger to the patient or others." 5 Some administrators and researchers see R/S as an indication of treatment failure, 5 challenging mental health programs to find better ways to deal with crises. Methods include training staff in more effective de-escalation techniques, introduction of Psychiatric Emergency Response Teams (PERTs), as well as increasing staff-to-patient ratios. 3 Others have suggested adopting a "Best Practice Model" for successfully reducing R/S. 6 "Best Practice" is defined as collecting and managing information and resources in a cost-effective manner. 6
Studies of R/S Reduction
I. One long-term study examined patterns of use of R/S from 1990 to 2000 in Pennsylvania's state hospital system. 3 Patterns over the 11-year period among nine sites included average decreases in:
• Rates of seclusion from 7.2/1000 to 0.3/1000 patient days - a reduction of 96%
• Duration of seclusion from 11.6 to 1.3 hours - a reduction of 89%
• Rates of restraint from 6.4 episodes/1000 days to 1.2 episodes/1000 days - a reduction of 81%
• Duration of restraint from 12.1 to 1.9 hours - a reduction of 84%
The authors attributed these reductions to a number of factors including effective leadership, state policy change, the implementation of PERTs and an increased staff-to-patient ratio on hospital units.
II. A retrospective analysis of a public psychiatric hospital's attempts to reduce R/S evaluated and reviewed a variety of interventions that were successful in reducing R/S. 7 The results of this study showed that the use of R/S decreased 75% over a 5-year period. The only variable that was significantly associated with reduced use of R/S was a changed process for identifying critical cases and initiating a clinical and administrative case review. This change was a reduction in the number of restraint or seclusion applications permitted on the patient before their case was labeled as critical. Critical cases required administrative and clinical review. This study underscores the importance of clinical and administrative priorities in efforts to reduce R/S. 6 The findings support the belief that leadership is the most important priority in any attempts to change the culture and consequently reduce the use of R/S. 3,5,7
III. One Massachusetts study compared the difference between the costs of restraint usage one year prior to, and one year after, a reduction initiative. 8
• Facility wide, restraint use decreased from 3991 to 373 episodes after implementation - a reduction of 91%
• This reduction was associated with a reduction in facility costs associated with the application of restraints from $1,446,740 to $117,036 - a reduction of 92%
In addition, the reduction of restraints was associated with better patient treatment outcomes, more effective usage of staff time, and decreases in the use of sick time and staff turnover. 8
Future Directions Creating treatment environments where R/S are practices of the past will be challenging. When used conscientiously, and only as a last resort, some argue that R/S can keep individuals safe from harm; however, others argue that R/S can create an environment of tension and fear. Can treatment environments be safe, conducive to treatment and foster human potential if they use R/S as a treatment modality? Massachusetts is one of eight states that have recently been awarded funding by SAMHSA in order to carry out evaluations of R/S reduction strategies implemented by facilities. Currently, CMHSR, in collaboration with the Massachusetts Department of Mental Health, is involved in the evaluation of eleven Massachusetts sites. These efforts promise to identify successful strategies for reducing R/S, and may provide examples of psychiatric settings that have successfully eliminated them.
7. Donat, D.C. (2003). An analysis of successful efforts to reduce the use of seclusion and restraint at a public psychiatric hospital. Psychiatric Services, 54, 1119-1123.
8. LeBel, J., & Goldstein, R. (2005). The economic cost of using restraint and the value added by restraint reduction or elimination. Psychiatric Services, 56, 1109-1114.
2016-10-26T03:31:20.546Z
2006-02-01T00:00:00.000
{ "year": 2006, "sha1": "1e67fe4e4d6c928cf0218a177273aa9a473e1975", "oa_license": "CCBYNCSA", "oa_url": "https://escholarship.umassmed.edu/cgi/viewcontent.cgi?article=1023&context=pib", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "175c6ea7424608750ff02a748f8de2be5a691e47", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology" ] }
4435400
pes2o/s2orc
v3-fos-license
Universality in Sandpile Models A new classification of sandpile models into universality classes is presented. On the basis of extensive numerical simulations, in which we measure an extended set of exponents, the Manna two state model [S. S. Manna, J. Phys. A 24, L363 (1991)] is found to belong to a universality class of random neighbor models which is distinct from the universality class of the original model of Bak, Tang and Wiesenfeld [P. Bak, C. Tang and K. Wiensenfeld, Phys. Rev. Lett. 59, 381 (1987)]. Directed models are found to belong to a universality class which includes the directed model introduced and solved by Dhar and Ramaswamy. (i) The driving rule. Energy is added in increments of δE at randomly chosen sites until the dynamical variable at some site exceeds the threshold, whereupon rule (ii) is invoked. (ii) The relaxation rule. If the dynamical variable at site i exceeds the threshold E c , relaxation takes place, whereby energy is distributed in the following way: E(i) → E(i) − Σ e ∆E e and E(i + e) → E(i + e) + ∆E e , where e are a set of (unit) vectors from the site i to some neighbors. As a result of the relaxation the dynamic variable in one or more of the neighbors may exceed the threshold. The relaxation rule is then applied until a stable configuration is reached. The sequence of relaxations is an avalanche which propagates through the lattice. The parameters δE and E c are irrelevant to the scaling behavior [2,11]. Thus the only factor determining the exponents is the vector ∆E, to be termed the relaxation vector. For a square lattice with relaxation to nearest neighbors it is of the form ∆E = (E N , E E , E S , E W ), where E N for example is the amount transferred to the northern nearest neighbor. The original BTW model is given by the vector (1, 1, 1, 1). The relaxation in the directed model of Dhar and Ramaswamy [3] is specified by any vector with ones in two adjacent directions and zeroes in the two other directions, such as (0, 0, 1, 1). In a random relaxation model a set of neighbors is randomly chosen for relaxation. Such a model is specified by a set of relaxation vectors, each vector being assigned a probability for its application. As an example, a possible realization of a two-state model makes use of the six relaxation vectors (1,1,0,0), (1,0,1,0), (1,0,0,1), (0,1,1,0), (0,1,0,1) and (0,0,1,1), each one applied with a probability of 1/6. In Manna's two-state model [8] the variable is decreased to zero on relaxation, with sand distributed randomly among the nearest neighbors. We define a current J[∆E], which is the net flow in a relaxation. We also define the avalanche observables: size s, area a, duration t, radius of gyration r, maximal distance d from the origin, and perimeter p. These variables scale against each other in the form x ∼ y^(γ xy) for x, y ∈ {s, a, t, r, d, p}. The exact definition of the γ's is in terms of conditional expectations [6]. The exponents are not independent; scaling relations are found in [7]. We just note that avalanches are proven to be compact for BTW type models [7] but have a fractal boundary. It is reasonable to assume that the fractal dimension D f of the boundary is given by the scaling of the perimeter (p) against the linear size of the avalanche. It seems that for models which are non-directed the radius of gyration is the proper measure of size [11]. Therefore we identify D f with γ pr . For directed models the maximum distance from the origin to the perimeter is the proper measure of size, and D f is identified with γ pd . It is accepted that the dynamical exponent z of non-directed models should be identified with γ tr [11]. In the case of directed models we identify the dynamical exponent with γ td . Having defined the models, we now describe the simulations.
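As a concrete illustration of the relaxation rules just defined, a minimal simulation sketch follows. This is our own illustration, not the authors' code: it covers the deterministic BTW vector (1,1,1,1) and the random two-neighbor realization of the two-state model quoted above; lattice size, thresholds and the driving loop are illustrative choices.

```python
# Minimal sketch of a square-lattice sandpile with relaxation vectors
# dE = (E_N, E_E, E_S, E_W) and open boundaries (grains leaving are lost).
import random

L = 64
NEIGH = [(-1, 0), (0, 1), (1, 0), (0, -1)]            # N, E, S, W
BTW = [(1, 1, 1, 1)]                                  # deterministic BTW vector
TWO_STATE = [(1,1,0,0), (1,0,1,0), (1,0,0,1),
             (0,1,1,0), (0,1,0,1), (0,0,1,1)]         # six random vectors

def avalanche(grid, i, j, vectors, e_c):
    """Drive site (i, j) by one grain, relax to stability, return size s."""
    grid[i][j] += 1
    size = 0
    stack = [(i, j)]
    while stack:
        x, y = stack.pop()
        if grid[x][y] < e_c:
            continue
        dE = random.choice(vectors)       # relaxation vector for this event
        grid[x][y] -= sum(dE)
        size += 1
        for (dx, dy), amount in zip(NEIGH, dE):
            nx, ny = x + dx, y + dy
            if 0 <= nx < L and 0 <= ny < L:           # open boundary: else lost
                grid[nx][ny] += amount
                if grid[nx][ny] >= e_c:
                    stack.append((nx, ny))
        if grid[x][y] >= e_c:             # multiple relaxations are possible
            stack.append((x, y))
    return size

# BTW: threshold 4, vector (1,1,1,1); two-state: threshold 2, random vector.
grid = [[0] * L for _ in range(L)]
for _ in range(200000):
    avalanche(grid, random.randrange(L), random.randrange(L), BTW, e_c=4)
```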
We used open boundary conditions and system sizes up to 512 2 , with 5 million grains dropped, in two dimensions; in three dimensions system sizes were up to 112 3 , with 20 million grains dropped. An algorithm due to Grassberger [5] was used. We ascertained that the dynamics had reached the critical state by applying Dhar's "burning algorithm" [2], or by starting with a configuration belonging to the critical state. Manna's and our own simulation results for the BTW model indicate that the distribution exponents are system size dependent, with a logarithmic convergence to the infinite system values. The values of the γ's, on the other hand, seem to be almost independent of system size. Moreover, we found that the relations that specify the γ's hold during avalanches as well, and are not just a scaling property of completed avalanches. Thus the γ's provide a robust characterization of the dynamical properties of a sandpile model, and can be used for a reliable classification of sandpile models into universality classes. Previous studies clearly show that directed and non-directed models belong to different universality classes [3,7,8]. On the basis of Manna's simulation results it was concluded that the Manna two-state model and the BTW model are in the same universality class [8]. This conclusion is based on measurements of a limited set of exponents: τ s , τ t and γ ts . We measured the extended set of exponents introduced by Christensen and Olami, and the fractal dimension. The γ's we obtained in two dimensions are listed in Table I. Our results are consistent with known analytical results and simulation data: Dhar and Ramaswamy's analytical solution of a directed model [3]; simulation results and scaling arguments given by Christensen and Olami [7]; simulation results of Manna [4,8]. A momentum-space analysis of a Langevin equation indicates that for the BTW model z = (2 + d)/3 [11]. Our results for γ rt , which is identified with 1/z, confirm this scaling relation. This agreement supports our observation that the γ's are size independent, and indicates that we are in the right avalanche size regime for the observation of γ rt . On the basis of the difference in the γ's for the BTW and two-state models we conclude that the two models are not in the same universality class (Fig. 1). In order to establish that the classification introduced above is a classification into universality classes, we provide evidence that some details of the models are irrelevant (Fig. 2). Simulation results of the BTW model on the triangular lattice and square lattice were compared [4,11]. No significant difference was reported. We define N as the number of states of the E(i) in stable configurations of discrete models. When the components of the relaxation vector are all 1's, N also equals the number of neighbors. In sandpile models the question of the lattice dependence or interaction range dependence of the exponents is actually a question of the dependence on N. We observed a crossover effect when increasing N. The scaling obtained for the BTW model on a square lattice (N=4) is shifted to larger avalanches when N is increased. Similar crossover was observed in the other universality classes. Note that the requirement that J[∆E] = 0 does not imply isotropy. This is the reason the universality class was called non-directed, rather than isotropic. As an example, a model with a toppling vector (1,2,1,2) fulfills this requirement, and simulations show that it belongs to the universality class of non-directed models.
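The γ xy exponents discussed above are extracted from the scaling of one avalanche observable against another. A minimal way to estimate such an exponent from per-avalanche records is a least-squares fit of the log-binned conditional means, sketched below; the function names and binning choice are ours, for illustration only, and the input arrays would come from a simulation run such as the one sketched earlier.

```python
# Sketch: estimate gamma_xy from per-avalanche observables via a log-log
# least-squares fit of binned conditional means E[x | y]. Assumes the data
# populate at least two logarithmic bins.
import math
from collections import defaultdict

def gamma_exponent(xs, ys):
    """Fit <x | y> ~ y**gamma; return the fitted slope gamma_xy."""
    bins = defaultdict(list)
    for x, y in zip(xs, ys):
        if x > 0 and y > 0:
            bins[int(math.log(y) / math.log(2))].append(x)   # log-2 bins of y
    # (log y_bin, log mean x) points for the linear fit
    pts = [(b * math.log(2), math.log(sum(v) / len(v))) for b, v in bins.items()]
    n = len(pts)
    mx = sum(p for p, _ in pts) / n
    my = sum(q for _, q in pts) / n
    num = sum((p - mx) * (q - my) for p, q in pts)
    den = sum((p - mx) ** 2 for p, _ in pts)
    return num / den          # slope of the log-log fit = gamma_xy

# e.g. gamma_st = gamma_exponent(sizes, durations) for recorded avalanches.
```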
Continuous models were simulated as well. There are two types of realizations of continuous models. In one, the variables are turned into continuous variables, and when the amount of sand added is not a multiple of the amount distributed on relaxation (or is a random variable taking such values) then the height profile is turned into a continuous distribution. The other is the Zhang realization, where on relaxation the dynamic variable is decreased to zero and sand distributed equally among the nearest neighbors [10]. Both types seem to be in the same universality class [11]. This is indicated by our simulations as well. There are a number of possible realizations of a two-state model. The neighbors to which sand is distributed can be chosen as distinct (no neighbor chosen twice) or not. In Manna's two-state model [8] the variable is decreased to zero on relaxation, with sand distributed randomly among the nearest neighbors. In this case the relaxation process depends on the variable value. Continuous variants of the model may also be defined. We have simulated realizations of such models and all were in the same universality class (Fig. 2). Simulations of two-state models were performed with annealed randomness only [12]. On the basis of the wave structure of avalanches in the BTW model [13], it can be shown that avalanches have a "shell" structure, i.e. the sites which relaxed at least n+1 times form a connected cluster with no holes which is contained in the cluster of sites which relaxed at least n times (Fig. 3(a)). Avalanches in random relaxation models do not share this property, and their structure is more irregular. A typical avalanche in a two-state model is shown in Fig. 3(b). These geometrical differences reflect in the fractal dimension of the boundary, which is greater for the two-state model. The distinction between the universality classes of non-directed models and models which are non-directed only on average holds in three dimensions as well (Table II). The difference is less marked because the exponents are nearing their mean field values. Directed models also form a universality class. In addition to the models studied by Dhar and Ramaswamy, where the relaxation vector is of the form (1,1,0,0) or (1,1,1,0), we simulated models with the relaxation vectors (1,1,1,2) and (1,1,2,2). In the latter, multiple relaxations are possible, but this does not reflect in the scaling behavior. We found the same exponent values in all these models. The values we obtained in simulations are listed in Table I. Recently Pietronero et al. [14] introduced a novel theoretical framework for calculating the exponents of sandpile models, in a manner which immediately reveals their universality. Within their scheme, which is purely phenomenological, the Manna two-state model and the BTW model are found to be in the same universality class. Its failure to distinguish between the two models indicates that some key ingredient is missing from their scheme. We suspect that multiple relaxation is the missing element. Work is now in progress to extend the procedure to include some form of multiple relaxation. TABLE I. γ exponents for universality classes in two dimensions. The other γ's can be found from the scaling relations, Eq. [5]. The values of these exponents were observed to be independent of system size. The typical spread of data for different runs of different models within the universality class is ±0.01 about the mean.
2014-10-01T00:00:00.000Z
1996-02-01T00:00:00.000
{ "year": 1998, "sha1": "f204f634bdd99eab0e7e054e63390b4256903ece", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/9803236", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f204f634bdd99eab0e7e054e63390b4256903ece", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Mathematics", "Physics", "Medicine" ] }
234419600
pes2o/s2orc
v3-fos-license
Study of the operation of a combine harvester cleaning system with a sieve screw separator under conditions of operation on slopes The article presents the results of studies of a cleaning system with a sieve screw separator under the operating conditions of a combine harvester on slopes. The grain heap in the proposed cleaning system is moved by screws, which ensures uniform feeding of the grain heap to the upper sieve during longitudinal tilts of the combine. Autonomous operation of the screws reduces the uneven distribution of the grain heap across the width at the beginning of the upper sieve of the cleaning system. Preliminary separation of grain reduces the grain flow to the upper sieve by more than 60%. Laboratory studies showed a decrease in grain loss at a transverse tilt of the combine of 8 degrees, from 1.17% in the compared cleaning system to 0.75% in the cleaning system with a sieve screw separator. An additional reduction in grain losses was achieved by installing longitudinal partitions 0.13 m high on the upper sieve, preventing the grain heap from shifting sideways on the upper sieve. The theoretical coefficient of variation of the grain heap layer thickness at the end of the upper sieve is 0.18. Grain losses in the proposed cleaning system, calculated with the mathematical model of separation, amounted to 0.14%, which is 8 times lower than in the compared cleaning system. Introduction. The grain heap in the combine cleaning system is in a fluidized state due to sieve vibrations and exposure to airflow. When the combine moves up or down a slope, the speed of the grain heap increases or decreases, respectively. With a lateral tilt of the combine, the grain heap shifts toward the sidewall. This degrades the quality of cleaning, and the lateral tilt of the combine increases grain losses [1,2]. Stabilization of the upper sieve position in two planes relative to the horizon [3,4] provides a reduction in grain losses, but uneven feeding of the grain heap to the upper sieve still reduces the efficiency of the cleaning system. Therefore, the cleaning system of a hillside combine harvester should provide not only stabilization of the distribution of the grain heap on the upper sieve but also a uniform feed of the grain heap. One of the structural and technological solutions for the combine harvester cleaning system under hillside operating conditions is the combination of preliminary separation, uniform feeding, and distribution of the grain heap on the upper sieve of the cleaning system. The article aims to study the operation of a cleaning system with a sieve screw separator under the conditions of operating a combine harvester on slopes. The object of research is the separation of the grain heap in a cleaning system that includes a sieve screw separator [4] and an air-sieve cleaning system. Object of study. The sieve screw separator (figure 1) is intended for preliminary separation of the grain heap. The sieve screw separator is installed instead of the shaking board and contains augers 1 with blades, a wavelike sieve 2 and a shortened shaking board 3. The centrifugal fan 4 contains an additional nozzle. The separation of the grain heap in the considered cleaning system occurs due to the influence of the airflow and of the blades when the grain heap is moved over the sieve. Research methods. The study of grain separation in the proposed cleaning system was performed using comparative laboratory tests and mathematical modeling.
In theoretical studies, the grain loss after the cleaning system (P, %) was determined by the separation model [5], which for the studied cleaning system has the form of equation (1), where i is the number of the current section of the grain heap along the length of the upper sieve; k is the number of Δx-long sections along the length of the upper sieve; P is the separation coefficient of the grain heap; V_hn is the coefficient of variation of the grain heap layer thickness at the beginning of the sieve; and ΔV_h is the step of change of the layer-thickness variation coefficient along the sieve length. The parameter ΔV_h was determined by the formula ΔV_h = (V_hk − V_hn)/k, where V_hk is the coefficient of variation of the layer thickness at the end of the sieve. The separation coefficient depends on the separation coefficient in the basic cleaning system and on the thickness of the grain heap layer in the cleaning system under examination. In the adopted separation model, the parameter Δx is set equal to the shift of the grain heap on the upper sieve in one oscillation [5]. For one oscillation of the sieve, the grain heap moves over a distance Δx, and the coefficient of variation changes by the value ΔV_h. The coefficient of variation at the beginning of the upper sieve depends on the operation of the devices installed in front of the upper sieve of the cleaning system; in the studied cleaning system it depends on the operation of the sieve screw separator. The heap variation coefficients were determined experimentally using grain heap samplers and theoretically according to the developed method [5]. The comparison was carried out with a basic cleaning system containing a shaking board, an upper sieve, a lower sieve and a centrifugal fan. The experiments were carried out on a wheat heap with a moisture content of 10 to 12% and a content of straw impurities of 30%. Design and technological parameters of the laboratory setup with a sieve screw separator: pitch of the sieve screw separator blades, 100 mm; blade width, 60 mm; screw rotation speed, 330 rpm; sieve hole diameter, 12 mm; fan impeller rotation speed, 690 rpm; the gaps between the fins of the upper and lower sieves are 13.5 mm and 9 mm, respectively. The qualitative indicators of the operation of the cleaning system that were determined are grain loss (P, %) and the content of impurities in the grain (Z, %). Results and Discussion. The sideways shift of the grain heap on the upper sieve leads to an increase in grain losses (figure 3). It was found that with a transverse roll of 8° (α), grain losses of 0.5% were observed when feeding the grain heap at 1.6 kg/s (q), and when feeding at 2.9 kg/s, grain losses amounted to 1.17%. The theoretical value of grain losses calculated by formula (1) equals P = 1.25%. In a cleaning system with a sieve screw separator, the screw conveyors operate autonomously, transporting and separating the grain heap. This leads, first, to a decrease in the load on the upper sieve (up to 40% of the total grain weight comes onto the sieve), and second, to an improved distribution of the grain heap: the coefficient of variation at the beginning of the upper sieve is V_hn = 0.24. However, in the cleaning system of the scheme under consideration, the grain heap shifts sideways on the upper sieve toward the slope, which causes an increase in the coefficient of variation to 0.52 by the end of the sieve and explains the high level of grain loss. When feeding the grain heap at 2.9 kg/s, the grain loss was 0.75% (figure 4).
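Since equation (1) itself is not reproduced in the text above, the following sketch only illustrates the bookkeeping the separation model describes: the variation coefficient of the layer thickness is stepped linearly from V_hn to V_hk over k sections of length Δx, while a simple exponential separation law, with the separation rate damped by the local variation coefficient, stands in for the paper's separation term. This damping law and all parameter names are our assumptions, not the authors' formula.

# Illustrative sketch of the layer-thickness bookkeeping in the separation
# model. Equation (1) is not reproduced in the text, so the separation law
# used here (exponential decay damped by the variation coefficient) is an
# assumption; only the linear stepping of the variation coefficient follows
# the definitions given above.
import math

def residual_grain_fraction(sieve_length=1.3,   # m, length of the upper sieve
                            mu=5.1,             # 1/m, grain separation coefficient
                            v_hn=0.24,          # CoV at the beginning of the sieve
                            v_hk=0.18,          # CoV at the end of the sieve
                            k=100):             # number of dx-long sections
    dx = sieve_length / k
    dv_h = (v_hk - v_hn) / k     # step of the variation coefficient per section
    remaining = 1.0              # fraction of grain still on the sieve
    v_h = v_hn
    for _ in range(k):
        # Assumption: a more uneven layer (larger CoV) separates less
        # effectively, so the separation rate is damped by (1 - v_h).
        remaining *= math.exp(-mu * dx * (1.0 - v_h))
        v_h += dv_h
    return remaining

print(f"residual fraction at sieve end: {residual_grain_fraction():.4%}")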
The shift of the grain heap sideways toward the slope can be prevented by installing longitudinal partitions on the upper sieve. Since the sieve screw separator consists of four screws, it is advisable to install three longitudinal partitions on the upper sieve, dividing the upper sieve into four parts. Figure 5 shows the results of calculating the grain heap distribution at the end of the upper sieve, provided that at the beginning of the upper sieve the grain heap is fed uniformly across the width. The data were obtained according to the developed method [6]. To avoid the redistribution of the grain heap on the upper sieve, the height of the partitions should be at least 0.13 m (Figure 5). Therefore, one of the technical solutions for a combine harvester with a sieve screw separator is the use of three longitudinal partitions 0.13 m high on the upper sieve. When feeding the grain heap to the upper sieve at a rate of 2.9 kg/s with a lateral tilt angle of 8°, the coefficient of variation at the beginning of the sieve is 0.24, and at the end of the sieve 0.18. The grain loss of the proposed cleaning system was determined by formula (1). The calculation was carried out with an upper sieve length of 1.3 m and a grain separation coefficient of 5.1 m⁻¹. The estimated value of grain losses is 0.14%, which is about 5 times lower than in the cleaning system with a sieve screw separator alone and 8 times lower than in the basic cleaning system. Conclusion The cleaning system with a sieve screw separator containing four screws and three longitudinal partitions 0.13 m high mounted on the upper sieve is proposed. Grain loss in the proposed cleaning system is reduced eightfold in comparison with the basic cleaning system.
2020-12-31T09:06:31.237Z
2020-12-25T00:00:00.000
{ "year": 2020, "sha1": "a48fadc6cf8dc029f20cc85060025d3fc8b44ea4", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/941/1/012044", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "2386a208a81c11c088654ccc1fafd3db54bfd649", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Materials Science", "Physics" ] }
239482922
pes2o/s2orc
v3-fos-license
Staem5: A novel computational approach for accurate prediction of m5C sites 5-Methylcytosine (m5C) is an important post-transcriptional modification that has been extensively found in multiple types of RNAs. Many studies have shown that m5C plays vital roles in many biological functions, such as RNA structure stability and metabolism. Computational approaches act as an efficient way to identify m5C sites from high-throughput RNA sequence data and help interpret the functional mechanism of this important modification. This study proposed a novel species-specific computational approach, Staem5, to accurately predict RNA m5C sites in Mus musculus and Arabidopsis thaliana. Staem5 was developed by employing feature fusion tactics to leverage informative sequence profiles, and a stacking ensemble learning framework combining five popular machine learning algorithms. Extensive benchmarking tests demonstrated that Staem5 outperformed state-of-the-art approaches in both cross-validation and independent tests. We provide the source code of Staem5, which is publicly available at https://github.com/Cxd-626/Staem5.git. INTRODUCTION There are more than 170 types of RNA chemical modifications (RCMs) that have been found in transfer RNAs (tRNAs), ribosomal RNAs (rRNAs), mRNAs, and non-coding RNAs. [1][2][3][4][5] RCMs are determined by three coordinating factors: methyltransferases, RNA-binding proteins, and demethylases. 3,6,7 Among all RCMs, 5-cytosine-methylation (m5C) is one of the most important modifications in mRNA. However, it is challenging to identify m5C accurately: because of the instability of mRNA molecules, high-throughput sequencing technologies usually fail to accurately identify m5C sites at single-nucleotide resolution. 6,[8][9][10] Therefore, computational approaches that can accurately identify m5C sites would be highly valuable and may provide insights into the functional roles of this important RNA modification. A number of computational approaches based on sequence-derived information and machine learning algorithms have been developed to predict m5C sites in four species, including Homo sapiens, Mus musculus, Saccharomyces cerevisiae, and Arabidopsis thaliana. These approaches can be classified into two categories according to the machine learning algorithm they applied: (1) support vector machine-based predictors, including m5C-PseDNC, 11 M5C-HPCR, 12 pM5CS-Comp-mRMR, 13 RNAm5CPred, 14 m5CPred-SVM, 15 and iRNA-m5C_SVM 16; (2) random forest (RF)-based approaches, including PEA-m5C, 17 RNAm5Cfinder, 18 and iRNA-m5C. 19 In addition, some studies developed computational methods for predicting multiple types of RNA modifications, including m5C. For example, Liu and Chen developed iMRM 20 based on extreme gradient boosting (XGBoost) to recognize five types of RNA modifications. Song et al. 8 developed an attention-based multi-label neural network, MultiRM, to predict 12 types of RNA modifications simultaneously. Table 1 summarizes these two categories of predictors specifically designed for m5C in several aspects, including feature extraction, performance evaluation strategy, species, webserver or software availability, and benchmark datasets. We found that most of the methods were developed for H. sapiens, and only a few predictors were designed and tested for m5C sites of M. musculus and A. thaliana, such as iRNA-m5C, iRNA-m5C_SVM, RNAm5Cfinder, and m5CPred-SVM. 15,16,19 In addition, the predictive performance for m5C sites in M. musculus and A.
thaliana is unsatisfactory compared with that in H. sapiens. For example, m5CPred-SVM, iRNA-m5C_SVM, and iRNA-m5C were developed on the same benchmark dataset of A. thaliana and achieved 71.8%, 73.06%, and 70.7%, respectively, in terms of average accuracy in cross-validation tests. The reason is probably that these predictors were developed based on a single RF or SVM algorithm. With recent advances in ensemble learning strategies used in bioinformatics to develop robust prediction models, we were motivated to leverage ensemble learning techniques to improve m5C prediction in M. musculus and A. thaliana. In this study, we introduce Staem5, a stacked ensemble model for predicting m5C sites in A. thaliana and M. musculus. Staem5 was developed based on four types of sequence features: position-specific propensity, k-mer, electron-ion interaction pseudo potentials of trinucleotide, and parallel correlation pseudo dinucleotide composition. The base models used to build the optimal stacked model for each species were selected from five popular machine learning algorithms, and feature selection strategies were employed to further optimize the predictive performance. Cross-validation and independent tests demonstrate that Staem5 achieved competitive predictive performance compared with state-of-the-art approaches. RESULTS In this work, we propose a novel computational method, Staem5, to identify m5C sites for both A. thaliana and M. musculus. The model integrates four kinds of encoding schemes, i.e., position-specific propensity (PSP), k-mer (k = 1, 2, and 3), parallel correlation pseudo dinucleotide composition (PCPseDNC), and electron-ion interaction pseudo potentials of trinucleotide (PseEIIP). Bayesian optimization was applied to tune the parameters of each classifier. Then, we evaluated different combinations of base classifiers, including SVM, XGBoost, light gradient boosting machine (LightGBM), extremely randomized trees (ExtraTree), and gradient boosting decision tree (GBDT), combined by stacking tactics, to identify the optimal ensemble model for A. thaliana and M. musculus, respectively. Meanwhile, the F score was used to reduce the dimension of the features and the computing time. On both training and independent datasets, Staem5 exhibits superiority over other existing approaches. The source code of Staem5 can be found at https://github.com/Cxd-626/Staem5.git. Nucleotide preferences of the m5C site This section analyzes the nucleotide preferences of the sequence fragments containing m5C sites using the Two Sample Logo (http://www.twosamplelogo.org/). 22 The sequence logos of A. thaliana and M. musculus generated by Two Sample Logo are presented in Figures 1A and 1B, respectively. As observed, cytidine (C) was enriched upstream of the m5C sites of A. thaliana, especially at positions −18 to −10 and −7 to −1. In contrast, adenine (A) and guanine (G) are abundant upstream of the non-m5C sequence fragments, especially at positions −19, −18, −15, −12, −11, −9, −7, −6, −3, and −1. For M. musculus, C and G have relatively higher frequencies than the other two nucleic acids, especially at positions −20, −10, −9, −5, −3, −2, −1, and 2.
Also, the non-m5C sequences had a frequent A and uridine (U) pattern at positions −9, −6, −5, −3 to −1, and 1 to 3 of the corresponding sequence segments. These results demonstrate that m5C sites in A. thaliana and M. musculus do not have notable sequence motifs compared with non-m5C sites, and that the sequence segments have different nucleotide preferences in these two species. Therefore, it could be difficult to develop a general model for cross-species prediction, and it is necessary to set up species-specific models. The effectiveness of parameter optimization In this section, we evaluate the predictive performance of five popular machine learning algorithms, i.e., SVM, 23 GBDT, 24 XGBoost, 25 LightGBM, 26 and ExtraTree, 27 for m5C site prediction in A. thaliana and M. musculus. For each classification algorithm, the hyperparameters were pre-set according to previous experience [28][29][30] and optimized by Bayesian optimization, 31 which has proven effective in many prediction tasks in bioinformatics. 29,30,[32][33][34][35][36][37] We searched for the optimal combination of hyperparameters according to the accuracy obtained in 10-fold cross-validation tests. The performance comparison results, in terms of accuracy, of the five base classifiers before and after parameter optimization on the 10-fold cross-validation tests are shown in Figure 2 (the detailed values of the other performance metrics are provided in Table S1), and the selected parameters are listed in Table S2. We can observe that the performance of all five base classifiers improved after parameter optimization, and the performance improvement of SVM was the largest among the five base classifiers. The accuracy of the SVM model of A. thaliana increased from 62.40% to 73.62%. In addition, the accuracy of the GBDT model also witnessed an increase, from 65.69% to 71.77%. In comparison, ExtraTree had the smallest performance improvement, with a 0.05% increase in accuracy. There are two levels in the stacking ensemble learning strategy, and the classifiers in these two levels are referred to as base and meta-classifiers, respectively. In the first level, a set of base classifiers generate probability values, which are subsequently used as the input for the meta-classifier. In this study, we used logistic regression as the meta-model to ensemble the base classifiers into a stacked model. The stacking strategy was implemented with the "mlxtend" package 38 in Python. The selection of base classifiers was based on the accuracy of the model. Taking A. thaliana as an example, the process of stacking was as follows: we first ensembled the top two best-performing classifiers, SVM and XGBoost, and evaluated whether the model accuracy increased or not. We found that the stacked model achieved increased accuracy compared with SVM alone, from 73.62% to 73.85%, on 10-fold cross-validation. Therefore, we further integrated the third-ranked classifier, LightGBM, into the stacked model, and the accuracy further improved from 73.85% to 73.89%. However, when the fourth- and fifth-ranked classifiers, GBDT and ExtraTree, were combined into the model, the accuracy decreased to varying degrees. Therefore, we selected SVM, XGBoost, and LightGBM as the base classifiers for the stacked model, which achieved 73.89% accuracy and 0.479 MCC.
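The following is a minimal sketch of the two-level stacking scheme selected above, with SVM, XGBoost, and LightGBM as base classifiers and logistic regression as the meta-classifier. The paper used the mlxtend implementation; for brevity this sketch uses scikit-learn's StackingClassifier instead and assumes the xgboost and lightgbm packages are installed. The feature matrix X and labels y are random placeholders standing in for the encoded sequence features.

# Sketch of the stacking scheme described above; not the Staem5 source code.
import numpy as np
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 185))            # placeholder feature matrix
y = rng.integers(0, 2, size=200)      # placeholder binary labels (m5C / non-m5C)

base_learners = [
    ("svm", SVC(probability=True)),   # probability outputs feed the meta-model
    ("xgb", XGBClassifier(eval_metric="logloss")),
    ("lgbm", LGBMClassifier()),
]
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(),
    stack_method="predict_proba",     # base-level class probabilities as meta-features
    cv=5,
)
acc = cross_val_score(stack, X, y, cv=10, scoring="accuracy").mean()
print(f"10-fold CV accuracy: {acc:.4f}")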
Figure 3 illustrates the performance comparison results of different combinations of base classifiers in terms of accuracy and MCC (the detailed results are provided in Table S3). Subsequently, we also compared the stacking strategy with the voting strategy, another popular ensemble learning strategy. To ensure the fairness of the comparison, the voting models were constructed according to the same principle as the stacked model (with logistic regression). The performance comparison results of different classifier combinations are provided in Table S4, and we summarize the performance comparison between the best stacking and voting models in Table 2. The results demonstrate that the stacking model achieved better predictive performance and is therefore more suitable for m5C site prediction in A. thaliana and M. musculus. Feature selection analysis To remove redundant information caused by high-dimensional input features and further optimize the meta-models, we evaluated three popular feature selection algorithms, including maximum-relevance-maximum-distance (MRMD), 39 Pearson correlation coefficient (PCC) feature selection, 40 and F score, to find the optimal feature subset. We first ranked all features by each feature selection algorithm and then reduced the dimension of the feature set in steps of 50. The performance comparison results of the three feature selection algorithms are provided in Table S5. The results suggest that these three feature selection approaches did not further improve the predictive performance for m5C sites in A. thaliana; however, the selected features enhanced the performance of the M. musculus model in 10-fold cross-validation tests. During feature selection by F score, the average accuracy first increased and then decreased as the number of features was reduced, reaching the best average accuracy of 77.26% at a feature dimension of 180. In contrast, the best average accuracies for MRMD and PCC were the same, 77.21%, achieved at dimensions of 230 for MRMD and 280 for PCC, respectively. These results demonstrate that the F score achieved slightly better performance than MRMD and PCC. Therefore, we used the F score to select the optimal features, reducing the feature dimension with a smaller step of 5, and provide the feature selection results for feature dimensions of 165-195 in Table S6. From Table S6, we can see that the best performance in terms of accuracy (77.42%) and AUC (0.855) was achieved with 185 features. Finally, we further refined the optimal feature subset in steps of 1 over the feature dimensions 175-190 and report the results in Table S7. The results confirm that the feature subset with 185 features secures the best performance in accuracy and AUC. Therefore, these 185 features were used as the input features for the stacked model to predict m5C sites in M. musculus. In addition, the performance comparison of the models before and after selection on the independent test dataset is provided in Table S8. Performance comparison with state-of-the-art methods In this section, we compare the predictive performance of Staem5 with several state-of-the-art predictors on the same training and independent test datasets of A. thaliana and M. musculus. For A. thaliana, we compared Staem5 with iRNA-m5C, 19 m5CPred-SVM, 15 and iRNA-m5C_SVM 16; while for M. musculus, we compared Staem5 with m5CPred-SVM.
The performance comparison results on the training and independent test datasets are provided in Tables 3 and 4, respectively. From Table 3, we can see that Staem5 achieved the best performance on the training dataset of both A. thaliana and M. musculus for almost all the evaluation metrics, with the only exception that iRNA-m5C_SVM achieved the best Sp for A. thaliana. The independent test results in Table 4 show that Staem5 was inferior to iRNA-m5C_SVM and m5CPred-SVM on the independent test set of A. thaliana. However, Staem5 achieved better predictive performance than m5CPred-SVM on the independent test set of M. musculus. Although Staem5's performance on the independent test set of A. thaliana was slightly lower than that of iRNA-m5C_SVM and m5CPred-SVM, the difference between its training and testing performance was smaller than for these two approaches. The independent test results of iRNA-m5C_SVM and m5CPred-SVM were much higher than their performance on the training dataset. Instead, Staem5 showed similar performance on the independent dataset and the training dataset, e.g., 73.70% versus 73.89% in terms of accuracy, which indicates that Staem5 is more robust and stable than the others. Therefore, we can conclude that Staem5 can accurately predict M. musculus and A. thaliana m5C sites. Benchmark datasets The schematic flowchart of Staem5 is shown in Figure 4. There are four major steps, including data collection, feature extraction, feature selection, and model construction. In the first step, the training and independent test datasets of A. thaliana were collected from the datasets constructed by Chen et al. 15 The m5C site data of A. thaliana were derived from the NCBI Gene Expression Omnibus (GEO) database (http://www.ncbi.nlm.nih.gov/geo/) using accession number GEO: GSE94065, while the M. musculus dataset was collected from Yang et al. 6 A statistical summary of the training and independent test datasets of A. thaliana and M. musculus is provided in Table S9. The A. thaliana dataset contains 5,298 positive and 5,298 negative training samples, and 1,000 positive and 1,000 negative testing samples. In comparison, the M. musculus dataset has 4,563 positive and 4,563 negative training samples, and 1,000 positive and 1,000 negative testing samples. Sequence encoding schemes In this study, we employed four types of sequence encoding schemes, including parallel correlation pseudo dinucleotide composition (PCPseDNC), position-specific propensity (PSP), k-mer, and electron-ion interaction pseudo potentials of trinucleotide (PseEIIP). PCPseDNC was calculated by iLearn, 41 and there are 38 physicochemical properties in PCPseDNC. PSP, k-mer and PseEIIP have been extensively applied in predicting RNA N6-methyladenosine (m6A) sites, protein S-sulfenylation sites, and N4-acetylcytidine (ac4C) sites in mRNA. [42][43][44][45] We provide the detailed definitions and formulas in the supplemental information. Stacked ensemble learning framework There are two levels in the stacking ensemble learning strategy, and the classifiers in these two levels are referred to as base classifiers and meta-classifier, respectively. In this work, we explored five popular machine learning algorithms, including SVM, 23 GBDT, 24 XGBoost, 25 LightGBM, 26 and ExtraTree, 27 as the base classifiers, and applied the logistic regression 46 algorithm as the meta-classifier to build the stacked ensemble model.
The base classifiers were built using the scikit-learn package, 47,48 and the model stacking was implemented using the "mlxtend" package. 38 In this study, we employed the radial basis kernel function in SVM and optimized the regularization parameter C and the kernel parameter g to find the most suitable hyperparameters. 14,23,49 GBDT is a tree-based boosting algorithm that learns directly from residual errors rather than updating the weights of the data; it uses the gradient descent algorithm to minimize the training error. 24,50 XGBoost improves GBDT by employing parallel learning techniques and regularization terms, which makes the model more efficient and robust. XGBoost has achieved great success in many bioinformatics tasks, such as protein/DNA/RNA functional site prediction. 25,[51][52][53][54] LightGBM is a further extension of XGBoost, which improves training speed and reduces memory consumption by applying a histogram algorithm. 26 In addition, LightGBM proposes gradient-based one-side sampling, exclusive feature bundling, and a leaf-wise growth strategy to obtain better accuracy and efficient computation. Meanwhile, it also adopts a limit on the maximum depth parameter to mitigate over-fitting, 55,56 and LightGBM has been widely used in bioinformatics. 57,58 ExtraTree is also a tree-based algorithm, proposed by Pierre Geurts et al. 27 in 2006. Although ExtraTree is very similar to RF, there are two major differences between them. First, RF is a bagging method, while ExtraTree uses all the training samples to train each decision tree. Second, RF finds the best bifurcation feature within a random subset, while ExtraTree performs a completely random bifurcation. 59 Model evaluation To evaluate and compare Staem5 with existing approaches, 10-fold cross-validation and independent tests were conducted based on the training and testing datasets, respectively. We applied five commonly used evaluation metrics, including Sn, Sp, accuracy (Acc), MCC, and the area under the receiver operating characteristic curve (AUC). The first four are defined as Sn = TP/(TP + FN), Sp = TN/(TN + FP), Acc = (TP + TN)/(TP + TN + FP + FN), and MCC = (TP × TN − FP × FN)/√[(TP + FP)(TP + FN)(TN + FP)(TN + FN)], where TP, TN, FP, and FN indicate the numbers of true-positive, true-negative, false-positive, and false-negative sequences, respectively. Experimental environment The experiments were conducted on a PC with a 64-bit Windows 10 operating system. The PC is equipped with an Intel(R) Core(TM) i7-7700 CPU and 16 GB physical memory; the CPU's main frequency is 3.60 GHz. Staem5 was developed based on Python 3.7. ACKNOWLEDGMENTS This work was supported by the National Natural Science Foundation of China, no. 62071079. AUTHOR CONTRIBUTIONS C.J. and F.L. conceived the initial idea and designed the methodology. J.Z. and D.C. implemented the algorithm, conducted the experiments, and processed the results. All authors drafted, revised, and approved the final manuscript. DECLARATION OF INTERESTS The authors declare no competing interests.
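As a small worked complement to the Model evaluation subsection above, the helper below computes Sn, Sp, Acc, and MCC from confusion-matrix counts. It is an illustrative sketch, not code from the Staem5 repository, and the example counts are hypothetical.

# Evaluation metrics from confusion-matrix counts (illustrative sketch).
import math

def evaluation_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    sn = tp / (tp + fn)                        # sensitivity (recall)
    sp = tn / (tn + fp)                        # specificity
    acc = (tp + tn) / (tp + tn + fp + fn)      # accuracy
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"Sn": sn, "Sp": sp, "Acc": acc, "MCC": mcc}

# Hypothetical counts roughly matching ~74% accuracy on a balanced test set.
print(evaluation_metrics(tp=740, tn=734, fp=266, fn=260))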
2021-10-24T15:15:17.130Z
2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "cbab4ae4df807737c8f32d7d27d8bff93bf6895b", "oa_license": "CCBYNCND", "oa_url": "http://www.cell.com/article/S2162253121002559/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5d1deec8f5a0c6fc7a97db210bfd17d45e5023a0", "s2fieldsofstudy": [ "Computer Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
53331599
pes2o/s2orc
v3-fos-license
Morphological changes in the fast vs slow fiber profiles of the urethras of diabetic pregnant rats Background. This study was undertaken to test the hypothesis that diabetes and pregnancy detrimentally affect the normal function of urethral striated muscles in rats, providing a model for additional studies related to urinary incontinence. The aim of this study was to evaluate morphological alterations in the urethral striated muscles of diabetic pregnant rats. Introduction Recent studies have established that urinary incontinence (UI) is more prevalent among women with diabetes. 1,2 In a previous paper, we found a statistically significant association between diabetes mellitus (DM) during pregnancy and UI and pelvic floor muscle dysfunction two years after a Caesarean section (CS). The overall prevalence of gestational UI two years post partum was significantly higher among women with previous GDM (50.8% vs 44.4%, respectively) than among normoglycemic pregnant women (31.6% vs 18.4%, respectively). 1 The risk factors for pelvic floor muscle dysfunction among these women were related to a high newborn weight and high maternal weight gain during pregnancy because of gestational diabetes mellitus (GDM). Furthermore, the risk factors for UI were indirectly influenced by GDM and pelvic floor muscle dysfunction. This framework confirms an association between GDM and subsequent pelvic floor muscle dysfunction two years after a CS. 1 The roles of pregnancy and childbirth in determining UI are still controversial. Many hypotheses have attempted to explain the origin of UI during pregnancy, its association with vaginal delivery, and the protective effect of a CS. The observed increase in the concentration of collagen and the decrease in muscle fibers in the urethras of female rats after vaginal delivery may provide insight into the mechanisms involved in the development of UI in women. 2 However, Barbosa et al. showed that an elective CS was insufficient for preventing UI two years post partum. 3 UI is a debilitating disorder caused by a malfunctioning urethral sphincter. 4,5 Stronger clinical support for a causal relationship between a decreased urethral sphincter thickness and UI has been reported. 6 Striated fibers are the dominant muscle components of the mid-urethra 7 and have been classified into two major groups, type I (slow-twitch) and type II (fast-twitch) fibers, based on the presence of myosin heavy chain (MHC) isoforms. Slow-twitch type I muscle fibers are rich in mitochondria, exhibit a high oxidative capacity, and are resistant to fatigue. Conversely, fast-twitch type II muscle fibers have robust glycolytic metabolism and fatigue easily. 8 The roles of each fiber type in striated sphincter contraction are controversial and depend on the species studied and the method used to determine the fast and slow fiber types. 9
Given the high prevalence of UI among women with previous GDM, and given that striated muscle is one of the two most important tissue types affected by insulin resistance and type 2 diabetes, the purpose of this study was to evaluate the urethral striated muscle fiber composition of the urethras of diabetic pregnant rats, to understand the influences of diabetes and pregnancy on urethral muscle fibers. Alterations in the two basic types of urethral striated fibers, type I (slow) and type II (fast) fibers, were analyzed in the urethral muscles of pregnant diabetic rats. We hypothesized that diabetes and pregnancy would detrimentally affect the normal function of urethral striated muscle in rats, providing a model for additional studies related to UI. g and 220 g, respectively, were allowed to adapt to the laboratory for seven days. The rats were housed in collective cages under controlled temperature (22 ± 3 °C), light (12 h light/dark cycles) and relative humidity (60 ± 5%) conditions. The animals were fed laboratory chow (Purina®) and tap water ad libitum and cared for in accordance with the principles in the Guide for the Care and Use of Experimental Animals. The adult female rats were distributed among four groups: Control group 1: five virgin rats euthanized at the same time point as the pregnant group; Control group 2: five pregnant rats that underwent a CS at term and were immediately euthanized; Control group 3: five diabetic virgin rats euthanized at the same time point as the pregnant group; Study group 4: five diabetic pregnant rats that underwent a CS at term and were immediately euthanized. Induction of diabetes Diabetes was induced using streptozotocin (STZ; SIGMA Chemical Company, St. Louis, MO, USA) seven days prior to mating. A dose of 40 mg/kg body weight was intravenously administered to produce a permanent and severe diabetic state. The rats' blood glucose levels were measured at the beginning and end of the experimental period using glucose oxidase reagent strips (One-Touch Ultra, Johnson & Johnson, Milpitas, CA, USA). Only the rats with glucose levels greater than 200 mg/dL were assigned to the diabetic groups. 10 The female rats (the pregnant and diabetic pregnant groups) were mated overnight with non-diabetic male rats. The morning when sperm were found in the vaginal smear was designated as gestational day 0. On day 21 of pregnancy, the fed rats were weighed to determine the maternal weight gain (final weight − initial weight) and lethally anesthetized with sodium thiopental (3% Thiopentax®). The urethra and vagina were extracted as a unit to facilitate their handling. Each unit was immediately placed in a position suitable for transverse sectioning and frozen with liquid nitrogen. The samples were stored at −80 °C until sectioning and staining. Cryostat sections (6 µm thick) were cut and stained with hematoxylin-eosin (H&E) to visualize the nuclei, membranes, cytoplasm and connective tissue. Immunohistochemical procedures were performed on the 6-µm-thick serial cross-sections to visualize the fast and slow myosin heavy chains. A myosin heavy chain (slow) NCL-MHCs mouse monoclonal antibody (Novocastra; 1:120 dilution) and a myosin heavy chain (fast) NCL-MHCf mouse monoclonal antibody (Novocastra; 1:160 dilution) were used. Data analysis The rat urethra was analyzed using morphological analysis, and a semiquantitative method was used to analyze the immunohistochemical staining of the fast and slow skeletal muscle fibers.
For this analysis, the fast and slow type fibers were considered separately. The immunolocalization intensity was evaluated by averaging the results from two blind and independent readings. The urethral striated muscle was analyzed based on the following parameters: the presence of each type of fiber throughout the circumference of the layer (from ++++ if the layer was complete throughout the circumference to + if the layer was incomplete); the thickness of the muscle fiber layers (from ++++ for a thickness of more than five layers to + for a thickness of one muscle fiber layer); and the degree to which the layers maintained a normal anatomic localization (from ++++ for a normal anatomic localization to + for a loss of normal anatomic localization). The scores for the circumferential presence, thickness, and anatomic localization (based on the above criteria) were multiplied for each fiber type. The obtained values for the fast fibers were then divided by those for the slow fibers to establish the fast/slow index. Convenient transformations (Napierian log) were performed to adjust the offspring weight, maternal weight gain and glycemic data to a symmetrical distribution with a homogeneous variance. Analysis of variance (ANOVA) followed by Tukey's multiple comparison test was used. Statistical significance was considered to be P<0.05. The data were expressed as mean ± standard error of the mean (SEM). Urethral histology The transverse sections of the center of the urethra in the virgin group revealed the following layers from the lumen to the periphery: a stratified squamous epithelium (arrow), a lamina propria, a spongy vascular plexus (P), smooth muscle including both longitudinal (1) and circular (2) fibers, and striated muscle (3) (Figure 1). Morphological and semiquantitative analyses of the striated muscle fiber composition of the rat urethra The virgin control group The H&E-stained transverse cross-sections of the striated muscle fiber revealed many layers and compact outer circular layers. The fibers were long, with a similar thickness throughout the circumference (Figure 2A). Immunohistochemical staining revealed that the striated myofibers predominantly expressed the fast myosin heavy chain isoform. The layer containing the fast fibers was thick, and the fibers were present throughout the outer circular layer (++++) (Figure 2B). The proportion of fast vs slow fibers was 4:1 (Table 1). A thin, inner circular layer of slow, striated muscle fibers was observed (+), with small and thin individual fibers (Figure 2C). The images suggested different localization patterns for each type of fiber, with fast fibers occurring in the outermost layer and slow fibers occurring in the innermost layer. The pregnant control group The H&E-stained transverse cross-sections revealed a striated muscle layer similar to that of the control group. An increase in the amount of connective tissue separated the fibers from one another. The most important findings in this group were the large interstitial spaces found between the fibers (Figure 3A). Immunohistochemical staining revealed that the distribution of fast vs slow fibers and the proportions of each were similar to those of the virgin group (4:1) (Figure 2 E,F) (Table 1). The diabetic virgin control group H&E-stained transverse cross-sections revealed that the circular annulus was lost. Additionally, there was fiber thinning and atrophy, and the striated muscle was disrupted. Complete striated muscle layers were scarce (Figure 3B).
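To make the semiquantitative scoring concrete, the sketch below implements the fast/slow index described in the Data analysis subsection: each fiber type receives scores from 1 (+) to 4 (++++) for circumferential presence, layer thickness, and anatomic localization, the three scores are multiplied, and the fast-fiber product is divided by the slow-fiber product. The function names and example scores are hypothetical illustrations, not data from the study.

# Semiquantitative fast/slow index (illustrative sketch).
def fiber_score(presence: int, thickness: int, localization: int) -> int:
    for s in (presence, thickness, localization):
        assert 1 <= s <= 4, "each score ranges from + (1) to ++++ (4)"
    return presence * thickness * localization

def fast_slow_index(fast_scores, slow_scores) -> float:
    return fiber_score(*fast_scores) / fiber_score(*slow_scores)

# Hypothetical virgin-control-like profile: thick, complete, well-localized
# fast fibers (4,4,4) and a thin inner slow layer (1,4,4).
print(fast_slow_index((4, 4, 4), (1, 4, 4)))  # -> 4.0, i.e., fast:slow = 4:1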
Immunohistochemical staining revealed that the specific localization of each type of fiber was lost, with fast and slow fiber colocalization and a decrease in the proportion of fast vs slow fibers to 1.5:1 (Figure 2 H-I) (Table 1). The diabetic pregnant study group H&E-stained transverse cross-sections revealed that the circular annulus was lost. The fiber layers were thin, atrophic, and disorganized, and the striated muscle was disrupted. The findings were similar to those of the pregnant group with respect to the increase in the amount of connective tissue separating the fibers from one another and the large interstitial spaces (Figure 3C). Immunohistochemical staining revealed the loss of the specific localization of each type of fiber, with fast and slow fiber colocalization and a decrease in the proportion of fast vs slow fibers to 1.5:1 (Figure 4 B,C) (Table 1). Maternal and perinatal results The mean maternal weight gain and offspring weights in the pregnant group showed no statistically significant differences compared to the diabetic pregnant rats (Table 2). The diabetic virgin and diabetic pregnant groups presented increased glycemia during pregnancy compared to the virgin and pregnant groups (P<0.05) (Table 2). Discussion The goal of this study was to gain a more comprehensive understanding of the striated muscle fiber composition of the urethra of the pregnant diabetic rat and the proportions of the two basic types of urethral striated muscle fibers (type I (slow-twitch) and type II (fast-twitch) fibers). It is important to understand the effects of DM and pregnancy on striated muscle and to develop new therapeutic strategies. Human studies are often limited due to ethical concerns, the challenges of obtaining large tissue samples, and the use of strictly managed control groups. To understand how the various risk factors for UI affect the morphological properties of striated muscle, animal models are useful because the experiments are conducted under controlled conditions. 11 The striated muscle fiber composition of the diabetic pregnant rat urethra is presented along with the importance of considering the experimental conditions and the inclusion of three control groups (virgin, pregnant and diabetic virgin). With this methodology, it was possible to separately analyze the influences of diabetes vs pregnancy. Of particular note, we found that, relative to these three groups, the urethral striated muscles of the diabetic pregnant rats presented the following: thinning, atrophy, disorganization, and disruption in the circular annulus, associated with the co-localization of fast and slow fibers and a steady decrease in the proportion of fast vs slow fibers (fast:slow, 1.5:1). The amount of connective tissue separating the fibers from one another (i.e., in the interstitial spaces between the fibers) increased as an effect of pregnancy on the urethral muscles. Our results confirm that diabetes per se was implicated in the pathological findings, and pregnancy was related only to the thickness of the muscle fiber layer. Thinning, atrophy, disorganization, and disruption in the circular annulus of striated muscle were extensive damage caused by diabetes. 12,13 Previously, the relationship between oxidative stress and diabetes in pregnant rats was confirmed by our group in a report by Damasceno et al.
17 Increasingly, reports have suggested a potential role for inflammation in the pathogenesis of type 2 DM. This has been supported by the results of both pre-clinical studies and new clinical trials using anti-inflammatory approaches to treat the disease. 18 By analyzing these data, we were able to show that extensive damage to striated muscle fibers, characterized by reduced skeletal muscle mass and an altered myofiber composition, links diabetes and pregnancy to UI in diabetic pregnant rats. This specific loss of skeletal muscle mass is called diabetic myopathy. 19 This study confirms previous findings showing that both diabetic myopathy and pregnancy are involved in the pathogenesis of UI. Differences in fiber type composition were detected in the urethral striated muscles of the diabetic pregnant rats compared to the control groups. In the studied animals, the expression profiles of the fast vs slow fibers revealed two main differences. First, fast fibers lost their predominance with respect to slow fibers. Second, the fast fibers lost their typical architecture, and the tissue was transformed into a mixture of slow and fast fibers. To the best of our knowledge, these findings are described here for the first time and may be labeled as a diabetic pregnant myopathy. Studies in animal models have shown a strong relationship between muscle fiber types and the development of diabetes. 20 Skeletal muscle is responsible for movement and is the largest organ for glucose utilization. Our finding of an increased number of type I (slow-type) fibers could be related to the abundant availability of lipids. 21 Changes in a muscle's fiber composition are often associated with glucose metabolism, diabetes and obesity. 22 Since muscle is an important site for glucose uptake, reduced muscle mass and changes in a muscle's fiber type composition may directly impair acute glucose utilization. Skeletal muscle can adapt to functional and metabolic demands by remodeling (via fiber-type switching) to maintain normal energy balance and nutrient utilization. Chen et al. 8 confirmed a higher proportion of type I fibers and the presence of fast-to-slow fiber-type switching, which appears to be dissociated from the expected change in oxidative capacity. Our findings suggest that DM alters the profile of fast vs slow fibers in the urethral striated muscles of diabetic pregnant rats, and eventual fiber-type switching could be present. The nature of the mechanism underlying the altered fiber types in our model requires further investigation. Because the primary function of the lower urinary tract is the storage and expulsion of urine at the appropriate times, changes in the striated muscle composition may be related to the loss of type II fibers 23 or the transformation of most type II fibers into type I fibers. 24 Given the limitations of this study, its results could represent muscle changes that depend on glucose levels or provide early evidence for tissue inflammation during the pathogenesis of insulin resistance and type 2 DM.
25 The damage revealed by the morphological studies demonstrates the associated impacts of diabetes and pregnancy on urethral striated muscle fibers, as three of the factors related to altered urethral striated muscle during diabetes and pregnancy (the maternal weight gain, the weight of the offspring and any trauma related to a vaginal delivery) were controlled. However, the results of our study should be interpreted with awareness of the following limitations: rats are quadrupeds; they have tails with associated musculature; and their bladders are abdominal, not pelvic, organs. 12 It is well established that the functional capacity of a muscle is impaired when its fibers are injured. 26 As the function of a skeletal muscle is determined by its mass and fiber composition, 8 our results provide evidence that, in a translational study, diabetes and pregnancy injure striated muscles and alter their fast and slow fiber compositions in rats. These data suggest that diabetic pregnant rats may present altered urethral striated muscle contractility, supporting the high prevalence of UI in women with previous GDM two years after a CS. 1 The importance of this study is its support of the previous clinical hypothesis that diabetes and pregnancy detrimentally affect the normal function of urethral striated muscles in rats, which provides a model for additional studies. Conclusions This study allowed us to describe the morphological changes in muscle mass and the fast vs slow fiber profiles of the urethral striated muscle fibers of diabetic pregnant rats. The urethral striated muscles were found to be thin, atrophic, disorganized, and disrupted. These changes were associated with the loss of the normal anatomic localization of each fiber type (i.e., the colocalization of fast and slow fibers and the loss of the predominance of fast-fiber expression with respect to slow fibers). The results of this translational study suggest that UI may be attributed, in part, to changes in the urethral striated muscles of diabetic pregnant women. Figure 2. Microphotographs of transverse sections of the urethra in the virgin group (A, B, C), pregnant group (D, E, F), and diabetic virgin group (G, H, I). H&E staining (H&E); immunohistochemical staining to visualize the fast (FAST) and slow (SLOW) myosin heavy chains (MHCf, MHCs) in the striated muscle fibers. Scale bar, 100 µm. Table 2. Maternal weight gain and offspring weight in the pregnant and diabetic pregnant groups. Maternal glycemia (mg/dL) in the virgin, pregnant, diabetic virgin and diabetic pregnant groups at the beginning and end of the experimental period. Values are reported as mean ± SEM; *P<0.05, statistically significant difference compared to the virgin and pregnant groups (Tukey's multiple comparison test).
2018-10-28T11:47:34.318Z
2011-11-04T00:00:00.000
{ "year": 2011, "sha1": "1acc6c55784afa27cfc79203f754297cbcafbaf1", "oa_license": "CCBYNC", "oa_url": "http://www.urogynaecologia.org/index.php/uij/article/download/uij.2011.e9/71", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1acc6c55784afa27cfc79203f754297cbcafbaf1", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
235301676
pes2o/s2orc
v3-fos-license
Experimental Methods to Study the Pathogenesis of Human Enteric RNA Viruses Every year, millions of children are infected with viruses that target the gastrointestinal tract, causing acute gastroenteritis and diarrheal illness. Indeed, approximately 700 million episodes of diarrhea occur in children under five annually, with the RNA viruses norovirus, rotavirus, and astrovirus serving as major causative pathogens. Numerous methodological advancements in recent years, including the establishment of novel cultivation systems using enteroids as well as the development of murine and other animal models of infection, have helped provide insight into many features of viral pathogenesis. However, many aspects of enteric viral infections remain elusive, demanding further study. Here, we describe the different in vitro and in vivo tools available to explore different pathophysiological attributes of human enteric RNA viruses, highlighting their advantages and limitations depending upon the question being explored. In addition, we discuss key areas and opportunities that would benefit from further methodological progress. Introduction Acute gastroenteritis, characterized by symptoms including nausea, vomiting, malaise, abdominal pain, fever, and diarrhea, is one of the most common health problems worldwide. More than 700 million cases occur annually in children under five years of age, resulting in few deaths in developed countries, but more than 2 million deaths in developing countries [1]. A diverse group of viral, bacterial, and parasitic pathogens is responsible for acute gastroenteritis, but among these, enteric viruses cause almost half of the cases affecting patients of all ages worldwide, and in the United States, viruses are the leading cause [2]. Viral gastroenteritis is usually self-limiting, with symptom resolution occurring within a few days, but illness can be prolonged in immunocompromised individuals [3]. Unlike bacterial or parasitic pathogens, enteric viruses cannot be treated with antibiotics, and vaccines are not currently available for many of the key drivers of gastroenteritis. Examination of intestinal contents during diarrheal illness by electron microscopy resulted in the discovery of numerous viral enteropathogens, now classified as caliciviruses, rotaviruses, astroviruses, or 'enteric' adenoviruses [1]. Among these, rotaviruses were the single most important cause of life-threatening diarrhea in children less than five years of age, with an estimated 453,000 pediatric deaths annually in developing countries [4]. However, with the global introduction of the rotavirus vaccine, noroviruses are now recognized as the most important cause worldwide of outbreaks of viral gastroenteritis in humans of all age groups [5,6]. Following rotavirus and norovirus, astroviruses and sapoviruses are the leading viral causes of sporadic gastroenteritis in children [7,8]. Other viruses associated with gastroenteritis in humans include coronaviruses, toroviruses, and picornaviruses, among others. Table 1. Summary of commonly employed in vitro and in vivo models to study human rotavirus, norovirus, and astrovirus, with the most broadly used approaches shown in red. Columns: Method, Origin, Experimental Model, Viral Strain(s), References. Rotavirus Human rotavirus (HRV) is a non-enveloped, segmented, double-stranded RNA virus from the family Reoviridae, first identified from duodenal biopsies and fecal samples of children suffering from diarrhea [48,49].
A leading cause of acute gastroenteritis in children under five, HRV infection can range from asymptomatic to causing severe non-bloody diarrhea with vomiting and fever lasting from 3 to 8 days [50][51][52][53]. In some cases, rapid dehydration and electrolyte imbalance can lead to death [54,55], and indeed prior to the development of effective vaccines, HRV claimed more than 500,000 lives each year and accounted for an estimated $1 billion in health care costs in the US annually [4,56]. RV can be classified into seven major serogroups (A-G). Groups A, B, and C infect both humans and animals, while the rest have only been found in animals to date, and Group A has been established as the most common RV responsible for causing human illness [57]. These viruses possess a distinct cell tropism, predominantly infecting the mature enterocytes and enteroendocrine cells of the small intestine [19,58]. It has been recently observed that transmission of HRV can occur via vesicle-cloaked virus clusters, indicating that both free and clustered virions may contribute to disease pathogenesis [59]. HRV antigen can be detected in stool samples using ELISA or immunochromatography, but qPCR-based assays provide greater sensitivity and allow genotyping of virus isolates, and thus are routinely used in vaccine and epidemiological studies [60,61]. Currently, two live, attenuated RV vaccines are widely used worldwide: the RV5 or RotaTeq vaccine is a pentavalent vaccine composed of five bovine-human RV strains [62], while the RV1 or Rotarix vaccine is monovalent containing one HRV strain [63]. Since the introduction of these vaccines in 2006 in national vaccination programs of America and Europe [64], a massive reduction in the incidence of HRV infection has occurred [63,65,66]. Based on the performance of these vaccines, in 2009, the WHO extended this recommendation worldwide. Although the effectiveness of RV vaccines remains higher in developed countries than in developing countries, potentially due to a variety of factors including microbiota variation, implementation of these vaccines has universally translated into a substantial reduction in the incidence of severe RV infection [67][68][69]. Norovirus Human noroviruses (HuNoVs) are non-enveloped, positive-sense RNA viruses in the family Caliciviridae [70], and are the leading cause of acute gastroenteritis worldwide. Colloquially referred to as the 'winter vomiting bug', HuNoV was first identified in stool samples collected from a diarrheal outbreak in Norwalk, Ohio and thus the original strain was called 'Norwalk virus' [71]. In developed countries with RV vaccine programs, HuNoV surpasses HRV as the most common cause of gastroenteritis in children. Common symptoms of infection include nausea, vomiting, and diarrhea, usually resolving in 1-2 days, but HuNoV infection can also be asymptomatic. Previous human challenge studies indicate that approximately 30% of HuNoV-infected individuals exhibit no symptoms despite high levels of viral shedding [72,73]. Based on amino acid homology of the major viral capsid protein VP1, the NoV genus is divided into ten genogroups (GI-GX) which are further sub-divided into different genotypes containing individual virus strains [74]. Among these, GI, GII and GIV strains infect humans and genotype 4 of GII (GII.4) is responsible for the majority of human outbreaks [75]. NoVs are generally species-specific, with each genogroup of NoV infecting distinct groups of hosts [76]. 
While the cellular tropism for HuNoV has been a subject of some controversy, the development of in vitro culture systems, discussed in detail below, has suggested mature enterocytes and B cells as targets for HuNoV [29,31,77]. A dual tropism for immune cells and intestinal epithelial cells, specifically enteroendocrine cells, is supported by histologic analyses of intestinal tissues from HuNoV-infected immunocompromised patients [78,79]. HuNoV clusters cloaked within vesicles have, as for HRV, been implicated in viral transmission [59]. Diagnosis of HuNoV can be performed using commercially available enzyme immunoassays or immunochromatographic assays on samples including diarrheal stool or vomitus. Multiple real-time qRT-PCR (qPCR) assays have also been developed for standard diagnostic use [80]. While there are no HuNoV vaccines currently available, multiple vaccine candidates designed to target the viral capsid have been evaluated in clinical trials [81].

Astrovirus

Human astroviruses (HAstVs) were first reported in 1975, after electron microscopic analysis of stool following an outbreak of pediatric diarrhea revealed virions with a characteristic star-like appearance [82,83]. HAstVs are non-enveloped, positive-sense RNA viruses from the family Astroviridae which have been classified into three divergent groups: the classic HAstVs and the nonclassic HAstV-MLB and HAstV-VA/HMO groups [7]. Among these, classic HAstVs include eight serotypes and are responsible for 2-9% of all acute viral gastroenteritis in children worldwide [7]. The more recently discovered nonclassic AstVs are less well-studied, and their pathological effects and prevalence worldwide remain unclear. In general, HAstV induces diarrheal disease that is milder than HRV or HuNoV, and is associated with abdominal pain, vomiting, and fever lasting 2-3 days. Although infections are generally self-limiting, immunocompromised individuals may succumb to disseminated infection [84,85]. Asymptomatic infections have been reported in children as well as adults [86,87]. HAstV antigen has been found in the mature enterocytes of the small intestine [88][89][90], and an in vitro cultivation system in human intestinal enteroids supports this tropism, but also identifies goblet cells and intestinal progenitor cells as potentially permissive to infection [43]. No vaccines are currently available against HAstVs, and dehydration secondary to gastroenteritis is treated with oral or intravenous fluids. Currently, real-time RT-quantitative PCR assays are sensitive, fast, and reproducible for HAstV diagnosis [91], and next-generation sequencing continues to facilitate the identification of new emerging strains [92,93].

In Vitro Tools to Study Human Enteric RNA Viruses

Although composed of only a single cell layer, the human intestinal epithelium comprises a variety of cell types including enterocytes, goblet cells, Paneth cells, enteroendocrine cells, and stem cells [94]. Enterocytes are critical for their absorptive capacity, while secretory cells such as goblet and Paneth cells play an important role in maintaining the epithelial barrier through secretion of mucus and antimicrobial peptides, preventing microbial encroachment from the lumen [95]. Culturing of human enteric viruses in vitro has often proven to be a challenge due to the complexity of the human intestinal epithelium [96], but multiple methods have been developed to study viral interactions with host epithelial cells.
Immortalized cell lines have often been a first choice, with many advantages including that they are accessible, inexpensive, scalable, stable, and easy to maintain; these have proven tractable for HRV and HAstV, though a reliable and reproducible in vitro system for HuNoV has been a greater challenge. With the development of human enteroid systems, however, cultivation of nearly any virus with an epithelial cell tropism appears practical. An important consideration for in vitro viral experimentation, applicable to enteric RNA viruses [97][98][99][100], is the potential for passaged viruses to mutate substantially as they adapt to in vitro conditions. While fundamental aspects of viral pathogenesis may remain intact even with viral genetic changes, interpretation of results should always be considered with this caveat in mind. Here, we will discuss the in vitro systems available for studying human RNA enteric viruses (Figure 1).

Immortalized Cell Lines

Immortalized cell lines provide a pure population of cells that are easy to maintain and have the potential to divide indefinitely, and thus many host-enteric virus interaction studies have been performed using cancer-derived or immortalized cell lines. Different human intestinal cell lines, manifesting specific functions and characteristics of the gut epithelium, are widely used, and African green monkey kidney cell lines, which are more susceptible to a variety of viruses, have also served as useful lines for enteric viral research [101].
Adenocarcinoma Cell Lines

Cancer-derived cell lines from different parts of the human intestine are commercially available and have been widely used to explore gut-pathogen interactions, with Caco-2 and HT-29 serving as the two predominant colorectal adenocarcinoma cell lines employed. Caco-2 cells form a polarized monolayer that can differentiate into cells with a remarkable resemblance to enterocytes in the intestinal epithelium [102,103], and they have been successfully employed as an in vitro amplification system for a variety of human enteric viruses [104]. They can be grown as 3-dimensional Transwell membrane cultures that eventually differentiate to have an apical brush border, a characteristic phenotype of the normal small and large intestine [105]. Moreover, they also exhibit characteristic cell-cell adhesion properties similar to the intestine, including development of tight and adherens junctions [106,107]. HRV can infect both differentiated and undifferentiated Caco-2 cells, but requires trypsin during the entire course of infection for efficient replication [11]. Caco-2 cells also support HAstV replication, for which the usage of serum-free media during infection is recommended [41,108]. While Caco-2 cells express histo-blood group antigens, which are key attachment factors for HuNoV [109][110][111][112][113], they do not consistently support HuNoV replication in monolayer conditions [96,114]. HT-29 cells are a pluripotent, heterogeneous cell line that grows as a multilayer of unpolarized, undifferentiated cells with less than 5% differentiated mucus-secreting cells and columnar absorptive cells [115]. These cells have the potential to further differentiate: in the absence of glucose, HT-29 cells can undergo a typical enterocytic differentiation [116], whereas in the absence of serum, ~50% of cells differentiate into goblet-like cells expressing mucins [117]. HT-29 cells support in vitro growth of HRV [118], and a variant, the mucus-secreting HT-29-MTX line generated by differentiating HT-29 into mature goblet cells using methotrexate [115], has been successfully employed to investigate the role of glycans in promoting HRV virulence [14]. HT-29 cells can also support efficient replication of some HAstV serotypes when grown without glucose [41]. Unlike HRV and HAstV, HuNoV does not replicate in HT-29 cells [96,114,119]. HuNoV GI and GII strains have been previously cultivated in 3-dimensional models wherein the intestinal epithelial cell line INT-407 was grown on collagen-coated porous microcarrier beads in rotating-wall vessel bioreactors [119], an approach also used for HuNoV replication with Caco-2 cells [120]. However, challenges in replicating this model have prevented its widespread implementation [121][122][123].

B Cell Lines

After many unsuccessful attempts to grow HuNoV in immortalized cell lines, the identification of an immune cell tropism for a strain of murine norovirus (MNoV), a prominent small animal model for HuNoV studies, suggested potential utility in exploring immune cells for HuNoV cultivation [124]. GII.4 strains of HuNoV have since been cultivated in the human-derived transformed B cell line BJAB [29,77], an infection which is enhanced in the presence of histo-blood group antigen carbohydrates that facilitate viral attachment to the B cells [77]. However, low virus yield and inconsistent results among different laboratories have been reported [29].
Non-Human Primate Cell Lines

African green monkey (Cercopithecus aethiops) kidney cell lines are the most common non-human primate cell lines used for enteric viral research. These cell lines are susceptible to many viruses due to the absence of type I interferon (IFN) and cyclin-dependent kinase inhibitor genes [125,126]. Indeed, African green monkey kidney cells were the first used for the growth of HRV [127]. Vero cells are the most widely accepted immortalized cell line for the development of human viral vaccines, and have been used for poliovirus, rabies virus, influenza virus, and HRV vaccine propagation [128], though yield of high-titer virus has been a frequent challenge. Recently, selective disruption of ten antiviral genes in Vero cells, including neuraminidase-2 (NEU2) and RAD51 recombinase associated protein 1 (RAD51AP1), was shown to result in higher yields for HRV replication [129]. Although many unsuccessful attempts have been made to use Vero cells to propagate HuNoV [31,96,114], a recent study suggested that replication of HuNoV may be possible in the presence of trypsin and by disrupting six host genes in Vero cells, again including NEU2 and RAD51AP1 [30]. Validation by other groups will be important to demonstrate the widespread utility of Vero cells for HuNoV studies. Vero cells have also been found to support the growth of HAstV [41], though Caco-2 cells are generally more easily infected [130]. MA-104 cells were first reported as rhesus monkey kidney cells [131], but later karyological analysis determined that these originate from African green monkeys [132]. MA-104 cells have been the cell type of choice to cultivate HRV for many years [133,134], with well-documented protocols available [15]. Similar to Vero cells, MA-104s can support HAstV replication as well [41]. These cell lines are also used for the study of enteric RNA viruses causing extraintestinal disease, including HAV and HEV [135][136][137].

Advantages and Disadvantages of Using Immortalized Cell Lines

Because they are cost-effective and have the potential to grow indefinitely, immortalized cell lines offer multiple advantages over alternate approaches (Table 2). They are easy to maintain and devoid of any ethical concerns associated with the usage of animals or human tissues. Although cell lines serve as a powerful tool for viral studies, due to the homogeneity of the cell population and the lack of host factors such as immune cells and the enteric nervous system, they do not necessarily represent the in vivo tissue environment. These lines have been genetically manipulated, either via transformation in cancer cell lines or via artificial expression of cancer genes to drive indefinite proliferation. This manipulation can alter key phenotypes including immune responses and cellular functions. Further, serial passaging can lead to genetic drift, driving heterogeneity in cell populations that can confer genotypic and phenotypic variation, both over time within a laboratory and between research groups [138].

Primary Cells

Primary cells are isolated directly from tissue and are grown in vitro, retaining morphological and functional properties of their tissue of origin. For viral studies, primary cells may facilitate adaptation of virus to in vitro conditions before transitioning to immortalized cell lines.
For example, HRV is more efficiently grown in primary African green monkey kidney cells than immortalized lines [131,139], and multiple rounds of passaging in primary cells were critical to adapt virus for growth in MA-104 cells [15,139]. A potential link has been reported between RV infection and pancreatitis and subsequent autoimmune type-1 diabetes [140,141]; replication of HRV has also been demonstrated in primary monkey islet cells [142]. Human intestinal primary cells have also been recently utilized for replication of other enteric viruses such as HEV [143]. Moreover, HEV has also been cultivated successfully in extrahepatic primary cells such as hematopoietic cells, endometrial stromal cells and renal epithelium [144][145][146]. However, HuNoV has not been shown to replicate in primary cells [96].

Advantages and Disadvantages of Primary Cells

Primary cells offer advantages including maintenance of the physiological features and genetic makeup of the tissue of origin; they are also cost-effective in comparison to animal models. However, they also exhibit longer doubling times and limited growth potential compared to immortalized cells. They are difficult to maintain and may change with each passage, and cells taken from different sources can exhibit high levels of variation in responses to external stimuli [138] (Table 2).

Intestinal Enteroids

Because human enteric viruses can be challenging to cultivate in 2-dimensional cultures, especially in the case of HuNoV, a need for models that more accurately recapitulate human intestinal physiology arose. The differentiation of tissue-derived intestinal stem cells into 3-dimensional human intestinal enteroids (HIEs) using specific growth factors has provided a much-needed breakthrough for the study of human enteric viruses [147]. These structures resemble the in vivo human intestinal tissue architecture (hence they are also called mini-guts) in terms of having a columnar epithelium consisting of absorptive functional enterocytes and secretory lineages [147,148]. Clinical isolates of HRV from patient stool samples have been successfully cultivated in HIE cultures [18,20], with species specificity demonstrated by HRVs replicating much more efficiently than rhesus RVs [19]. Interestingly, HRV infection induces water influx into the HIE lumen, recapitulating HRV-induced diarrhea in vitro [19]. HIE cultures have been used to demonstrate the antiviral efficacy of type I, but not type III, IFN and the nucleoside analog ribavirin against HRV, which varies by viral strain, supporting the use of these cultures towards personalized medicine approaches [149,150]. A variety of HuNoV strains have also been successfully cultivated in HIEs [20,31,151]. Studies exploring HuNoV pathogenesis in HIEs have revealed enterocytes to be a primary target for HuNoV replication, and indicated that cells derived from duodenum, jejunum, and ileum can be permissive to infection [31]. They have also identified viral strain-specific requirements for bile acids and histo-blood group antigens, regulated by FUT2 expression, for susceptibility of HIEs to HuNoV [16,31,152]. Importantly, genetic modifications of HIEs have been successfully performed using CRISPR-Cas9 approaches to explore FUT2 requirements and IFN regulation of HuNoV [32,33]; this capacity to genetically modify HIEs is likely to be increasingly applied to enteric viral cultivation studies (Figure 1).
HIEs also permit efficient replication of HAstV, with HIEs derived from all intestinal segments supporting the growth of representative strains from all three clades [43,45,46]. These studies have shown that HAstV can infect multiple cell types such as goblet cells, mature enterocytes, and intestinal progenitor cells [43,45,46], and have revealed the importance of IFN-mediated antiviral responses against HAstV [45]. Because 3-dimensional enteroids have a closed structure, depending upon the pathogen being studied it may be critical to provide luminal/apical access by deriving a 2-dimensional monolayer, wherein the organoids are dissociated by enzymatic treatment and cells are seeded onto plates coated with matrigel [31,152,153], collagen mimicking the extracellular matrix [154], or polyethylene glycol [155]. Two-dimensional monolayers have the advantage of being scalable and can be used for high-throughput screening studies [156]. Another option to provide simultaneous apical and basolateral access is to grow the 2-dimensional monolayer on a Transwell insert containing a porous membrane and coated with extracellular matrix-like proteins [156,157]. These Transwell inserts can also be used to co-culture immune and epithelial cells in the different chamber compartments, with the porous membrane permitting transport of secreted factors [158,159]. An alternate approach to provide apical access in a 3-dimensional enteroid system is to leverage a new technique of reverse enteroid development wherein manipulation of extracellular matrix proteins permits access to the apical surface [156]. These 'apical-out' enteroids can differentiate into various intestinal epithelial cell lineages, act as diffusion barriers and perform intestinal functions including nutrient absorption and mucus secretion [160], and have been recently applied to study the pathogenesis of the pandemic virus SARS-CoV-2 [161,162], highlighting the potential for their application to other viral systems. Another consideration for these cultures is the absence of physical forces, such as fluid shear or peristaltic movement, characteristic of intestinal physiology. These forces regulate the behavior of intestinal cells [163] and their interaction with luminal contents [164]. Microfluidic devices that introduce fluid shear forces to generate a gut-like microenvironment, a model called "intestine-on-a-chip" employing monolayers derived from 3-dimensional organoids, have thus been developed [165][166][167]. However, they require specialized technical expertise, microfabrication, and fluidic systems that are not universally accessible. These specialized systems, described in additional detail elsewhere [168], have been recently adapted for successful cultivation of HuNoV [169].

Advantages and Disadvantages of HIEs

Advantages of HIEs include improved recapitulation of intestinal physiology and the potential for modification using genetic engineering tools, though this is technically demanding. Moreover, once established, HIEs can be maintained on a long-term basis and potentially scaled up for genomic and drug screening. Most importantly, HIEs can be utilized to develop personalized medicine approaches since they are derived from genetically and phenotypically distinct individuals [170].
While this variation in HIEs depending on donor can be an advantage, it can also lead to challenges, including high levels of variability in any given phenotype between cultures developed from different donors, as genetic background and factors such as age of the individuals can lead to distinct phenotypic responses [170]. HIE models generally lack components of the host microenvironment, as discussed above [171]. Finally, maintenance of HIEs is very expensive and time-consuming, requiring a substantial degree of expertise, compared to immortalized cell lines (Table 2).

In Vivo Tools to Study Human Enteric RNA Viruses

While the study of viruses in vitro provides many key insights into virus-cell interactions, it is critical to complement these analyses with the study of infection in the context of an organism. Assessment of factors such as adaptive immune responses to, and the influence of the microbiota on, viral infection in the intestine can often be most readily achieved with the use of animal models [172,173] (Figure 2).

Figure 2. Animal models available for the study of HRV and HuNoV. Numerous animal models described to date support replication of HRV and HuNoV, including a variety of non-human primates as well as gnotobiotic pigs, zebrafish, and humanized mice. Created with BioRender.com (accessed on 21 January 2021).

Non-Human Primates

Due to the genetic proximity of non-human primates (NHPs), including vervet monkeys, cynomolgus monkeys, rhesus macaques, pig-tailed macaques, chimpanzees, and baboons, to humans, they expectedly share many anatomical, immunological, and physiological similarities. NHPs have served as important experimental models for enteric viral research, as they recapitulate the pathogenesis of infections in humans to a greater degree than other animal models [26,[174][175][176].
Baboons and vervet monkeys infected with HRV exhibit viral shedding and elevated levels of virus-neutralizing antibodies [175], and cynomolgus monkeys similarly exhibit self-limiting diarrhea and shedding of infectious virus [26,27]. Chimpanzees have been used for HuNoV infections, wherein the duration of viral shedding and serum antibody responses in this model are similar to those in humans [40,177]. Pig-tailed macaques are also susceptible to HuNoV, exhibiting diarrheal illness, while rhesus macaques exhibit prolonged shedding and antibody responses in the absence of diarrheal symptoms [39,176]. Enteric viruses isolated directly from NHPs have also been used extensively for the study of viral pathogenesis. For example, SA11 is a simian rotavirus initially isolated from a vervet monkey [178] which has subsequently been used as a model for the study of RV both in vivo and in vitro [179,180]. Study of another simian rotavirus strain, rhesus RV, contributed to the formulation of RV vaccines [181,182]. Recovirus is an enteric calicivirus isolated from rhesus macaques [183] which causes diarrhea in infected animals and can be used to study pathogenesis of and immunity to caliciviruses such as HuNoV in vivo [184]. In the last several years, AstVs have been increasingly identified in NHP samples, though these have not yet been extensively characterized [185,186].

Gnotobiotic Pigs

Gnotobiotic pigs, i.e. pigs whose microbial status is well-defined (including germ-free status), have been a long-standing resource for the study of both the microbiota as well as enteric viruses due to their strong similarities to humans in pathophysiological responses [187]. Human microbiota samples can be efficiently transplanted into gnotobiotic piglets, resulting in microbial profiles similar to the donor samples [188]. Infection of gnotobiotic piglets with either HRV or HuNoV recapitulates human symptoms of acute viral gastroenteritis, including diarrhea and fecal viral shedding, thus making these models extremely useful for immunological studies [25,189]. Gnotobiotic piglets have been extensively used for RV vaccine evaluation [190], and studies of HuNoV infection in gnotobiotic piglets demonstrate an association of HuNoV infection with intestinal epithelial cell apoptosis and barrier disruption, indicating a mechanism for HuNoV-induced diarrhea in humans [191]. Porcine enteric viruses sharing similarity with HuNoV GII and HAstV strains have also been identified; these cause pathological symptoms of gastroenteritis such as diarrhea, vomiting, dehydration, and even mortality in neonatal piglets [192][193][194].

Mouse Models

One of the most widely used animal models in biomedical sciences is the laboratory mouse (Mus musculus). Mice have many features that make them ideal for research studies, including a relatively fast reproduction rate with large litter sizes, the existence of large and well-controlled housing facilities in the majority of research institutions, and an enormous breadth of genetically modified models permitting interrogation of the sufficiency or necessity of factors for any given phenotype [195]. In the field of enteric virus research, mice can potentially be infected directly with human viruses. BALB/c and outbred Swiss SW55 mouse pups can be orally inoculated with HRV strains, with infection causing development of symptoms and histopathological changes associated with gastroenteritis [196,197].
Similarly, BALB/c Rag2−/− Il2rg−/− mice engrafted with human CD34+ hematopoietic stem cells have been used for intraperitoneal infection with HuNoV GII.4, wherein viral structural and nonstructural proteins were found to be expressed in the spleen and liver [35]. Interestingly, in this infection model the genetic background of the mice may play a more important role than the engraftment of human cells [35]. These models have not yet been widely used and may benefit from further development. Genetically related mouse viruses are another option as models to further investigate the pathogenesis of enteric viruses in their natural hosts, which can then shed light on the characteristics of the related human viruses as well. Murine rotavirus (mRV), first discovered fifty years ago, has been widely studied and causes diarrheal disease with particular severity in BALB/c mouse pups [198][199][200][201]. More recently, studies using mRV have revealed critical roles for IFNs in regulation of enteric viral infection [202,203], and identified roles for segmented filamentous bacteria as well as bacterial flagellin in mediating antiviral effects through accelerated epithelial cell turnover and IL-22/IL-18, respectively [204,205]. Similarly, murine norovirus (MNoV) [206], for which numerous phenotypically distinct viral strains have been described, has served as a powerful model for HuNoV since its discovery two decades ago. Numerous MNoV studies have contributed substantially to our understanding of the cellular and molecular mechanisms of NoV pathogenesis, revealing important regulatory aspects of the commensal microbiota and IFN signaling, and roles for nonstructural proteins in antagonism of host responses which are conserved with HuNoV [124,[207][208][209]. A recent set of studies identified CD300LF as the host protein which serves as the MNoV receptor, and though human CD300LF does not mediate the same role for HuNoV, this finding raises intriguing questions about the potential existence of a proteinaceous receptor for HuNoV [124,[210][211][212]. More recently, murine astrovirus (muAstV) was identified as a small animal model for HAstVs [213,214]. MuAstV has been shown to mediate viral interference against other enteric viruses in immunodeficient hosts [215], and its tropism for mucus-secreting goblet cells may also permit muAstV to regulate enteric bacterial pathogens [46,216]. Although mouse models of viral infection may not reflect all clinical outcomes of human enteric viruses, their importance in interrogating and defining key aspects of viral regulation in vivo cannot be denied.

Zebrafish

Recently, zebrafish (Danio rerio) have gained prominence as in vitro and in vivo models for the study of viral pathogenesis. Zebrafish offer numerous advantages including being inexpensive, breeding rapidly, and being genetically tractable [217,218]. In addition, zebrafish are transparent during development, facilitating visualization of internal structures and viral infection. This model shares genetic and physiologic similarities with humans, including innate and adaptive immunity, making it a widely used vertebrate model of human diseases. Recently, zebrafish larvae have been used as a robust replication model for HuNoV. GI and GII viruses replicate to high titers, with virus detectable in both intestinal and hematopoietic tissues, consistent with the possible dual tropism of HuNoV [34]. In addition, an increase in expression of several immune genes was also observed [34].
The zebrafish model has also been utilized for high-throughput anti-HuNoV drug screening [219,220]. Of interest, a transgenic zebrafish expressing green fluorescent protein under the control of an IFN-stimulated gene promoter has been developed, providing an in vivo tracking system for viral infections [221].

Turkeys

Turkeys have proven to be useful small animal models for the study of AstV pathogenesis and immunity [222,223]. Turkey poults infected with turkey AstV (TAstV-2) show clinical symptoms including age-dependent diarrhea, and have been used to identify the viral capsid as a critical mediator of diarrhea and to test therapeutic antiviral strategies against AstVs [224,225].

Conclusions and Future Directions

Viral gastroenteritis remains a serious global health concern, as we currently lack vaccines for HuNoV and HAstV, and the efficacy of the RV vaccine may be limited in some countries due to unclear environmental factors. While the lack of suitable in vitro and in vivo model systems that could reliably recapitulate the primary aspects of viral pathogenesis in humans has been a limitation, recent advances in cultivation systems in human enteroids as well as the expansion of available animal models offer promise for future breakthroughs. A challenge with most current in vitro systems is the absence of host factors such as immune cells and the microbiota; these are areas of ongoing development [158,226,227]. In addition, improved immortalized cell line options for some of the human viruses would facilitate application of drug or genetic screening approaches. Immortalization of those cells that are routinely infected in humans in vivo, or overexpression of the viral receptor in already established and easy-to-use lines, could yield these resources, which would be a great boon to the field. In vivo models naturally provide the most clinically relevant information about mechanisms of pathogenesis or host-virus interactions. The models discussed here have provided extremely useful insights, but improvements are always possible. Robust and reproducible systems to reliably infect inexpensive and manipulatable small animal models such as mice and zebrafish with human enteric viruses would be tremendously useful and may rely on generation of transgenic animals expressing human viral receptors, similar to the introduction of the poliovirus receptor into mice [228]. Additional future advances in these model systems will almost certainly continue to yield an improved understanding of the pathogenesis of these enteric viruses, which will be key to improved vaccine and therapeutic approaches to combat the global burden of acute viral gastroenteritis.

Conflicts of Interest: The authors declare no conflict of interest.
Acute lower respiratory tract infections: Symptoms, findings and management in Danish general practice

Abstract

Background: Acute lower respiratory tract infections (LRTIs) are among the most common infections managed in general practice. Objectives: To describe differences in reported symptoms, findings and management of patients diagnosed with acute LRTIs, and to explore possible associations between these findings and being diagnosed with pneumonia. Methods: During one winter season (2017 or 2018), a prospective registration of patients diagnosed with either acute bronchitis (ICPC-2: R78) or pneumonia (ICPC-2: R81) was conducted in Danish general practice for 20 days. A 42-item registration chart was filled in for each patient. Descriptive statistics, Pearson's chi-square test and multiple logistic regressions were used for data analysis. Results: In total, 70 general practices participated, with 1384 patients registered. Patients diagnosed with pneumonia were more often reported as having a fever, dyspnoea, increased purulent sputum, abnormal pulmonary auscultation/chest retractions, and were more often assessed as unwell by the healthcare professional, than those diagnosed with acute bronchitis. Very few patients had a chest X-ray. In contrast, most patients had a C-reactive protein (CRP) test performed (pneumonia: 83%; acute bronchitis: 71%). Respectively, 93% and 20% of patients were treated with antibiotics. Having a fever, an abnormal pulmonary auscultation/chest retractions or being assessed as unwell increased the likelihood of a pneumonia diagnosis at least fivefold. Even a slightly elevated CRP (≥11 mg/L) was positively associated with being diagnosed with pneumonia. Conclusion: Danish healthcare professionals are highly influenced by symptoms, signs and CRP tests when diagnosing patients with acute LRTIs in general practice.

Introduction

Antimicrobial resistance is one of the greatest threats to global public health, and the World Health Organisation warns against a return to a pre-antibiotic era [1]. Higher prevalence of resistance among human pathogens increases the risk of uncontainable infections, prolonged illness and hospital stay, increased mortality, and consequently increased health care costs [2]. Antibiotic use is the main driver of antibiotic resistance, which is why addressing the excessive and inappropriate use of antibiotics is essential [3]. In Denmark, general practice accounts for about 75% of the total human antibiotic consumption [4]. Acute lower respiratory tract infections (LRTIs) are among the most common infections managed in Danish general practice [5], with pneumonia being a common indication for antibiotic prescriptions [6]. According to Danish and international recommendations, patients with suspected pneumonia should, in general, be treated with antibiotics [5]. In contrast, acute bronchitis is most often considered a viral infection, and thus most patients will not benefit from antibiotic treatment [5]. However, it can be difficult to differentiate pneumonia from other LRTIs by means of symptoms and signs [7], and the C-reactive protein (CRP) point-of-care test (POCT) has been used since 1999 in Danish general practice [8]. Evidence exists that CRP testing can reduce antibiotic prescribing for acute respiratory tract infections [9], and many guidelines recommend CRP testing in patients presenting with symptoms of an acute LRTI [5,10].
However, as CRP is a non-specific marker of inflammation, it is challenging to set a specific cut-off value for treatment with antibiotics. Also, imaging can be used as a supportive diagnostic tool for diagnosing pneumonia, with chest X-ray being the most commonly used. However, diagnostic imaging is far from always used in patients suspected of pneumonia due to low availability, high radiation dose, and high costs. In summary, a great deal of diagnostic uncertainty exists when dealing with patients with acute LRTIs in general practice, and this may lead to too many people being diagnosed with pneumonia, resulting in inappropriate use of antibiotics [11]. This study aimed to describe differences in reported symptoms, findings and management of patients diagnosed with either acute bronchitis or pneumonia in general practice, and to explore possible associations between the symptoms, findings, and CRP level and being diagnosed with pneumonia.

Setting

This prospective, cross-sectional study is part of a larger quality improvement project with the overall aim of improving diagnosis and treatment of acute respiratory tract infections in Danish general practice. Both GPs and practice nurses were asked to participate in the project, as many patients with acute minor illnesses, such as acute respiratory tract infections, are taken care of by a practice nurse in Danish general practice. The participating general practices originated from three Danish Regions. During winter 2017, general practices in the North Denmark Region and the Region of Southern Denmark registered all patients presenting with symptoms of an acute respiratory tract infection for 20 days. In winter 2018, general practices in the Central Denmark Region performed the registrations. Only patients who consulted the practice for the first time for the current infection were included. Home visits and telephone consultations were not included. Registration was performed according to the Audit Project Odense (APO) method, using a registration chart with 42 items [12] (Supplementary Material). All symptoms and findings were simply listed in the registration chart, with no specific definitions provided. However, all participating healthcare professionals, i.e. general practitioners and practice nurses, were provided with a guide instructing them on how to fill in the registration chart and specifying that the diagnoses given should be based on the International Classification of Primary Care (ICPC-2). It was recommended to perform the registration during or immediately after each consultation.

Ethics

All general practitioners and practice nurses consented to the study. Only anonymised patient data were obtained, and ethics approval was not indicated according to Danish law. The project is registered at the University of Southern Denmark, Denmark (ID SDU 10.169).

Subjects

In total, 8232 patients with acute respiratory tract infection were registered. However, only patients diagnosed with either acute bronchitis (ICPC-2 code R78) or pneumonia (ICPC-2 code R81) comprise the study population for the present study. No formal diagnostic criteria had to be met, and the diagnosis given was solely based on the clinical judgement of the participating healthcare professional. Patients diagnosed with exacerbation of chronic obstructive pulmonary disease (ICPC-2 codes R95, R79) were not included in the analysis (n = 197).
Data

The general practitioners/practice nurses were asked to tick off if any of the following symptoms and findings were registered: fever (>38.5 °C), cough, dyspnoea, increased purulent sputum, abnormal pulmonary auscultation/chest retractions, and if the healthcare professional deemed the patient unwell or a weakened/multimorbid patient. Also, it was registered if a CRP test (including the value in mg/L) and/or a chest X-ray was performed, and if any antibiotic treatment was provided (Supplementary Material). In addition, a short questionnaire focussing on practice characteristics and personal information was completed by each of the participating general practitioners and practice nurses.

Statistical analysis

Categorical variables were presented as numbers and percentages, and metric variables were presented as medians and percentiles. Pearson's chi-square test was applied with a 5% significance level to test for independence. Multiple logistic regressions were performed to analyse the association between the symptoms, findings, and CRP values and being diagnosed with pneumonia. The odds ratios (OR) of being diagnosed with pneumonia were adjusted for possible confounders (gender, age and weakened/multimorbid patient). As effect modification was suspected, interactions between the various symptoms were tested. In the descriptive statistics, missing values are reported in the respective tables, and pairwise deletion was used in the logistic regressions. All statistical analyses were conducted using SPSS Statistics 25 [13].

Baseline characteristics of subjects

In total, 70 general practices agreed to participate. Table 1 demonstrates the characteristics of the 158 general practitioners and 56 practice nurses managing patients diagnosed with either acute bronchitis or pneumonia. Compared to the total population of Danish GPs, the participating GPs were more likely to be female, to be younger, and to work in partnership practices [14]. Participating nurses were older than the national average age for nurses [15]. A total of 1384 patients were diagnosed with an acute LRTI, of which 50.5% were diagnosed with acute bronchitis and 49.5% with pneumonia (Table 2). Most patients were adults, and slightly more female patients were registered. More children (≤5 years) were diagnosed with acute bronchitis than pneumonia. In contrast, elderly patients (>65 years) were more commonly diagnosed with pneumonia than acute bronchitis.

Management of patients with acute bronchitis or pneumonia

The most frequently reported symptoms among patients diagnosed with acute bronchitis or pneumonia were cough, fever, dyspnoea, and increased purulent sputum (Table 3). Patients diagnosed with pneumonia were more often reported with fever, dyspnoea, and increased purulent sputum than those diagnosed with acute bronchitis. Also, patients diagnosed with pneumonia were more often reported with abnormal pulmonary auscultation/chest retractions and were more often assessed as unwell by the healthcare professional than those diagnosed with acute bronchitis.

Patients diagnosed with pneumonia

The symptoms fever, dyspnoea, and increased purulent sputum, and the findings of abnormal pulmonary auscultation/chest retractions and being assessed as unwell, were all positively associated with being diagnosed with pneumonia compared to acute bronchitis (Table 4).
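As a point of reference, the analysis described above (chi-square tests of independence and confounder-adjusted logistic regression) rests on the standard relation OR = exp(beta) for a fitted coefficient beta, with 95% CI = exp(beta ± 1.96·SE(beta)). The sketch below is a minimal illustration in Python rather than the SPSS workflow actually used in the study; the file name and column names (pneumonia, fever, gender, age_group, multimorbid) are hypothetical placeholders, not variables from the audit.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

# Hypothetical data set: one row per registered patient.
# Columns (illustrative names): pneumonia (0/1), fever (0/1), gender, age_group, multimorbid (0/1).
df = pd.read_csv("lrti_registrations.csv")

# Pearson's chi-square test of independence (5% significance level), e.g. fever vs. diagnosis.
chi2, p, dof, expected = chi2_contingency(pd.crosstab(df["fever"], df["pneumonia"]))
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")

# Multiple logistic regression: odds of a pneumonia diagnosis given fever,
# adjusted for the confounders gender, age group, and weakened/multimorbid status.
fit = smf.logit("pneumonia ~ fever + C(gender) + C(age_group) + multimorbid", data=df).fit()
print(np.exp(fit.params))      # odds ratios, OR = exp(beta)
print(np.exp(fit.conf_int()))  # 95% confidence intervals for the ORs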
When patients reported/presented with fever, they were almost five times more likely to be diagnosed with pneumonia (odds ratio (OR) = 4.6; 95% confidence interval (CI) 3.6-5.9). Fever was found to cause effect modification, increasing the association between reporting the symptoms dyspnoea and increased purulent sputum and being diagnosed with pneumonia (data not shown). Also, the higher the number of symptoms, the more likely patients were to be diagnosed with pneumonia (two symptoms: OR = 2.5; 95% CI 1.9-3.3; three symptoms: OR = 4.7; 95% CI 3.4-6.6; four symptoms: OR = 13.6; 95% CI 6.6-27.7) (data not shown). The likelihood of being diagnosed with pneumonia increased with increasing CRP level (Table 4).

Main findings

Patients diagnosed with pneumonia were more often reported as having a fever, dyspnoea, increased purulent sputum, abnormal pulmonary auscultation/chest retractions, and were more often assessed as generally unwell by the healthcare professional, than those diagnosed with acute bronchitis. Having a fever, abnormal pulmonary auscultation/chest retractions or being assessed as unwell increased the likelihood of being diagnosed with pneumonia at least fivefold. Very few patients presenting with symptoms of an acute LRTI had a chest X-ray performed. In contrast, most patients had a CRP test performed, and even a slightly elevated CRP test (≥11 mg/L) was positively associated with being diagnosed with pneumonia.

Strengths and limitations

Projects based on the APO method have been carried out in Danish general practices since 1989 and on very diverse issues such as treatment of hypertension, preventive home visits and acute respiratory tract infections [16][17][18]. However, some limitations have to be kept in mind when interpreting the results of this study. First, it is voluntary to participate in APO audits, and one can argue that the results do not necessarily reflect the average management of patients with LRTIs in Danish general practices. The participants may have been more interested in quality development and in the topic being investigated than health care professionals in general [19], which could have prompted increased awareness of evidence-based management of patients with LRTIs. Second, a registration chart with predefined variables was used in this study, and it is not possible to explore the accuracy of the reported symptoms or the diagnosis given. For example, the variables 'abnormal pulmonary auscultation/chest retractions' and 'weakened/multimorbid patient' each combined two findings, which makes it impossible to know which one of the findings patients actually presented with, or even if they presented with both. Also, fever (temperature >38.5 °C) was registered for 63.5% and 34.2% of patients diagnosed with pneumonia and acute bronchitis, respectively. However, it is not possible to identify when an exact temperature was measured and when the presence of fever was solely based on a subjective assessment of either the patient or the health care professional. Also, the risk of missing valuable information (symptoms and findings not included in the chart) needs to be mentioned. However, a major strength of using these simple registration charts is the opportunity to easily perform the registration during the consultation, which enables GPs and practice nurses to work according to their usual routine [20].
Finally, it is well known that health care professionals often first decide if antibiotic treatment is indicated or not, and then subsequently label the patient with the most suitable diagnosis. As Howie [21] stated back in 1972: 'There are occasions when the diagnostic label attached to consultation is a rationalisation of the management decision made, rather than the determinant of it.' Consequently, when interpreting the results from this study, one has to keep in mind that we can only report on 'a picture' of the management of patients diagnosed with either acute bronchitis or pneumonia in Danish general practices, and not necessarily report on the correctness of the diagnoses given.

Interpretation of the study results in relation to existing literature

In accordance with other studies, we found a large overlap of symptoms in patients diagnosed with either acute bronchitis or pneumonia [22]. Also, previously conducted research has demonstrated that both fever and dyspnoea are associated with being diagnosed with pneumonia [11,22]. Importantly, reported fever was found to cause effect modification of the symptoms dyspnoea and increased purulent sputum. Thus, the best way to describe which symptoms precede the diagnosis of pneumonia is probably not to report a single symptom but to report on a combination of these symptoms. Being assessed as unwell by the attending healthcare professional was positively associated with being diagnosed with pneumonia. The assessment of patients' general condition is difficult to define, and this subjective assessment has not previously been described in the international literature in relation to the management of patients with acute LRTI. Importantly, we report on the healthcare professional's clinical impression of the patient's condition, as it was left entirely to the participating GPs and nurses to deem if the patient was unwell. There is good evidence that most patients with acute bronchitis do not benefit from antibiotic treatment [23]. Still, in this Danish study, about one-fifth of patients diagnosed with acute bronchitis were treated with antibiotics. Several other studies have demonstrated even higher prescribing rates, with 85% in a recent Australian study and 71% in a study from the United States [24,25]. As many as 83.1% of patients diagnosed with pneumonia and 71.4% of the patients diagnosed with acute bronchitis had a CRP test performed. In previous studies, the use of CRP tests has been shown to improve the diagnosis of pneumonia [22,26]. However, CRP testing has a low validity for diagnosing pneumonia compared to a chest X-ray, and there is no agreement about where to set the cut-off point [27]. Previous studies have demonstrated that a combination of symptoms, signs, and CRP has high diagnostic value in detecting and mainly ruling out pneumonia [7]. In contrast, two reviews conclude that CRP testing has no clear diagnostic value in primary care [27,28]. Nevertheless, Falk et al. [28] state that when a doctor is in doubt about the presence of pneumonia, a CRP test can be helpful in ruling out disease. However, one can speculate whether the CRP test is used too extensively in Danish general practice, as even a very low cut-off (≥11 mg/L) was associated with being diagnosed with pneumonia. The large number of CRP tests performed in the present study perhaps represents a strategy to cope with the diagnostic uncertainty [29].

Implications for clinical practice and future research

The emerging threat of antimicrobial resistance is real.
Consequently, it is crucial to reduce the diagnostic misclassification of patients with LRTIs to minimise the use of antibiotics as much as possible. This study demonstrated a high use of CRP tests, and moreover, an elevated CRP level was strongly associated with being diagnosed with pneumonia. However, it can be questioned whether the use of CRP tests eliminates the diagnostic uncertainty. Future research should focus on testing other diagnostic tools, or optimising already existing ones, for improving the diagnosis and treatment of patients with LRTIs in general practice.

Conclusion

Danish healthcare professionals are highly influenced by symptoms, signs and CRP tests when diagnosing patients with acute LRTIs in general practice.
Escherichia coli RNase E has a role in the decay of bacteriophage T4 mRNA.

Bacteriophage T4 mRNAs are markedly stabilized, both chemically and functionally, in an Escherichia coli strain deficient in the RNA-processing endonuclease RNase E. The functional stability of total T4 messages increased 6-fold; we were unable to detect a T4 message whose functional stability was not increased. There was a 4-fold increase in the chemical stability of total T4 RNA. The degree of chemical stabilization of six specific T4 mRNAs examined varied from a maximum of 28-fold to a minimum of 1.5-fold. In the RNase E-deficient strain, several minutes delay and a slower rate of progeny production led to a reduction in final phage yield of approximately 50%. Although the effect of the rne temperature-sensitive mutation could be indirect, the simplest interpretation of our results is that RNase E acts directly in the degradation of many T4 mRNAs.

Escherichia coli RNase E is an endoribonuclease that has been shown to process 9S RNA to a precursor of 5S rRNA (Ghora and Apirion 1978) and also to process RNA1, the inhibitor of ColE1 plasmid replication (Tomcsányi and Apirion 1985), both in vitro and in vivo. RNase E did not appear to have a general role in mRNA decay in E. coli, although the synthesis of some E. coli proteins was affected by the mutation. The enzyme is involved in the processing of mRNA from bacteriophage T4 genes 32 and 59 in vivo. Processing of these messages resulted in destabilization of the portion of the mRNA upstream of the cleavage site (Mudd et al. 1988; Carpousis et al. 1989) and thus may have a role in retroregulation (Schmeissner et al. 1984) of upstream gene expression. A comparison of the cleavage sites found in the noncoding RNAs and the T4 mRNAs revealed similarities in sequence at the cleavage sites and in the potential to form RNA secondary structure just downstream of the sites (Tomcsányi and Apirion 1985; Mudd et al. 1988; Carpousis et al. 1989). As yet, the nucleases that determine the rate of functional decay of total mRNA in E. coli or its phages have not been identified. A mutation in the E. coli ams gene was found to have a five- to sixfold effect on the chemical stability of total E. coli RNAs, but it did not affect functional stability significantly (Kuwano et al. 1977; Ono and Kuwano 1979). E. coli RNases III (see Portier et al. 1987), E (Mudd et al. 1988; Carpousis et al. 1989), and other as yet unidentified endonucleases (Cannistraro et al. 1986; Baga et al. 1988; Uzan et al. 1988) have been implicated in the decay of a few specific bacterial or phage mRNAs. In this paper we present evidence suggesting that E. coli RNase E has a major role in the functional and chemical decay of many bacteriophage T4 mRNAs.

E. coli RNase E affects the functional stability of T4 mRNAs

The functional stability of mRNA can be estimated from the ability of the mRNA to direct the synthesis of proteins after transcription initiation has been blocked with rifampicin. Figure 1B shows the effect of the temperature-sensitive rne mutation on the functional stability of T4 mRNAs at the nonpermissive temperature. At different times following the addition of rifampicin, infected rne+ and rne- cells were pulse-labeled with 14C-labeled amino acids and the labeled T4 proteins analyzed by SDS-PAGE. As assayed by the protein synthetic rates, the functional stabilities of all the T4 mRNAs whose gene products are detected in Figure 1B appear to increase in the rne- strain.
A message whose stability is not affected by this mutation cannot be readily detected. If such mRNAs exist, their protein products must be minor species. The total proteins synthesized at each time point were determined by integration of densitometric scans of each lane in Figure 1B. From this, the estimated functional half-life of the messages in the rne+ strain is 7 min, whereas it is 42 min in the rne- strain. The overall effect of the rne mutation is therefore a sixfold functional stabilization of the messages. It should be noted that this measurement of functional half-life of total mRNA is biased in that mRNAs with higher translational yields contribute more to the half-life estimate than those with lower yields. Many of the mRNAs are so highly stabilized in the rne- strain that it is difficult to estimate their individual half-lives. For a few messages, protein synthesis even increases slightly at late times after the rifampicin treatment. This could reflect an increase in the number of ribosomes available for translation of these more stable messages as other messages decay.

Figure 1. Protein synthesis at (A) 30°C or (B) 43°C in phage-infected rne+ and rne- E. coli cells before and after rifampicin treatment. In B, the cells were infected with T4, and after 6 min, rifampicin was added to a final concentration of 150 μg/ml. Before the addition of rifampicin (time 0), or at the times shown (minutes) after the rifampicin treatment, samples of cells were removed, pulse-labeled for 3 min with 14C-labeled amino acids, and chased with excess cold amino acids for 3 min. The times represent the midpoint of each pulse; for example, 8 min represents a pulse begun at 6.5 min after the addition of rifampicin and terminated by the chase at 9.5 min. In A, rifampicin was added 10 min after infection (final concentration 175 μg/ml), and samples of cells were pulsed for 5 min and chased for 5 min. Rifampicin was added later at 30°C to compensate for differences in the rate of development between 30°C and 43°C. On the basis of the patterns of protein synthesis at the time of rifampicin addition (0 min), the infection appears to be at nearly the same stage of development. Labeled proteins were subsequently analyzed by SDS-PAGE with a 10% polyacrylamide gel. In B, the observed differences in amounts of labeled protein are not due to unequal loadings, because Coomassie Blue staining (not shown) indicated that the total amount of protein was essentially the same in each track. Some T4 gene products are indicated; the 23* represents the processed gene 23 protein (Vanderslice and Yegian 1974). In the rne- strain at the 0 time point, additional protein bands are evident that are not visible in the rne+ strain; these are host proteins whose synthesis has not been completely shut off by the phage infection, presumably because of the slight delay in the infection of this mutant strain, which has been observed previously (Mudd et al. 1988). Because it takes several minutes for rifampicin to enter the cells and block transcription fully, this delay could partly explain the small increase in the levels of some of the T4 late gene products (e.g., 34, 7, and 37) observed between the 0- and 8-min time points after the addition of rifampicin. However, the delay in infection of the rne- strain cannot explain the differences in functional stability in the two strains because these differences were also observed when rifampicin was added at a later stage of infection (not shown).
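The half-life figures above presuppose simple first-order decay of the translatable message pool; under that standard assumption (a textbook relation, not a derivation given in the paper itself), the quoted sixfold stabilization follows directly from the two fitted half-lives:

\[
M(t) = M_0\, e^{-kt}, \qquad t_{1/2} = \frac{\ln 2}{k},
\]
\[
\frac{k_{rne^{+}}}{k_{rne^{-}}} = \frac{t_{1/2}^{\,rne^{-}}}{t_{1/2}^{\,rne^{+}}} = \frac{42\ \text{min}}{7\ \text{min}} = 6 .
\]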
It is also apparent in Figure 1B that the various messages are not all stabilized to the same extent. In a comparable experiment at the permissive temperature of 30°C (Fig. 1A), the estimated functional half-life for total T4 mRNA was 10-12 min in both the rne+ and rne- strains, confirming that the stabilization observed at 43°C in the rne- strain correlates with the inactivation of RNase E. In addition, the functional half-life of total T4 mRNA in a wild-type host strain at 37°C was 5 min (not shown), which is similar to that in the rne+ strain at 43°C, suggesting that other enzymes involved in T4 mRNA decay are functioning normally at 43°C.

To exclude the possibility that transcription initiation had not been blocked to equivalent extents by the rifampicin treatment in the mutant and wild-type strains, we measured the inhibition by rifampicin of the incorporation of [3H]uracil into trichloroacetic acid (TCA)-precipitable RNA in the two strains. In the phage-infected rne+ and rne- cells (Fig. 2), in the absence of rifampicin, the rate of incorporation of [3H]uracil into RNA falls initially, as observed previously (Young et al. 1980), and remains constant for the rest of the infection. After the addition of rifampicin, there is a rapid reduction in the incorporation of label, and the extent of this reduction is the same in both strains. The equivalent degree of inhibition of RNA synthesis in the two strains suggests that differences in rifampicin sensitivity are not the cause of the differences in protein synthesis patterns observed in Figure 1B.

The effect of RNase E on the chemical stability of T4 mRNAs

To quantitate the effect of the rne mutation on the chemical stability of newly synthesized T4 mRNA, host cells at 43°C were infected with T4 and pulse-labeled with [3H]uridine. The pulse was terminated by the addition of rifampicin and excess cold uridine, and at subsequent times, RNA was isolated from the cells and hybridized to filter-bound T4 DNA (Fig. 3). As estimated from the slope of the initial, linear portion of the curve for the rne+ strain, the half-life of the T4 mRNAs was 4 min. This is similar to the chemical half-life of 3.5 min estimated by Greene and Korn (1967), and 3.5-4 min estimated by Friesen (1969) for RNAs that are more stable.

Figure 2. In the rne+ and rne- (O) cells, the cultures were split into two aliquots: rifampicin (100 µg/ml) was added to one (dotted line, solid symbols), and the other was untreated (solid line, open symbols). The [3H]uracil incorporated is plotted at the midpoint of each 2-min pulse-labeling (e.g., 5 min represents labeling begun at 4 min and terminated at 6 min by the addition of ice-cold TCA). The mean counts per minute from duplicate TCA precipitations are plotted on a logarithmic scale against the time at which the cells were pulse-labeled.

In the rne- strain, the estimated half-life of T4 mRNA was 15 min. This represents an approximately fourfold increase in the chemical stability of a large fraction of the newly synthesized T4 mRNAs, which is comparable to the sixfold increase in functional stability observed above. However, the 15-min chemical half-life is significantly less than the 42-min functional half-life estimated above for T4 transcripts in the rne- strain.
The estimates of functional and chemical message stability need not agree precisely because the measurement of average functional stability depends on the efficiency of translation of each message in the population as well as the degree to which the rne- mutation affects their stability. For instance, mRNAs that are highly stabilized in the rne- strain and that also have high translational yields would contribute disproportionately to the estimate of functional half-life. In addition, the two measurements are not strictly comparable. Although the chemical measurement determines the rate of decay of messages synthesized in the pulse prior to the addition of rifampicin and excess cold uridine, the functional measurement follows the decay of all the messages in the infected cell. Because the inhibition of transcription upon rifampicin addition is not immediate (see Fig. 2), the functional measurement will be affected by the residual RNA synthesis following rifampicin addition, whereas the chemical measurement is less sensitive to this effect because of the cold uridine chase.

To determine whether this general effect of stabilization of T4 mRNAs was uniform for all mRNAs, we examined the effects of the rne mutation on the chemical stability of specific T4 messages. Figure 4 shows Northern blots of total RNA isolated from phage-infected rne+ and rne- strains at various times after rifampicin treatment. A gene 43-specific probe hybridizes to a transcript of 2.9 kb in size. This is the expected size for a transcript encoding only the gene 43 product and could be either the monocistronic or the processed polycistronic gene 43 transcript (Guild et al. 1988). The approximate half-life of this transcript in the wild-type host is 4 min, whereas it is 11 min in the mutant strain. Once decay is initiated, the message presumably decays rapidly because discrete intermediates are not detected. The level of the 2.9-kb transcript at the 8-min time point after rifampicin addition is higher in the mutant than in the wild-type strain (see below). Figure 4B shows an example of a more complex but more typical T4 transcript pattern. The gene 32 RNA probe hybridizes to polycistronic and monocistronic transcripts, as well as their processed products and decay intermediates (Carpousis et al. 1989). Although it appears that most of the species are significantly more stable in the rne- strain, the complexity of the pattern makes individual half-lives difficult to estimate (Fig. 4B; see also below).

Figure 3. The chemical decay of pulse-labeled phage transcripts from T4-infected rne+ and rne- E. coli at 43°C. The rne+ and rne- cells were infected for 8.5 min, pulsed with [3H]uridine for 3 min, and rifampicin and excess cold uridine were added for 3 min. RNA was isolated at the end of the chase (time 0) and at the subsequent times indicated. RNA (5 µg) from each time point was hybridized in duplicate to filter-bound T4 DNA. The counts per minute plotted on the logarithmic scale are the mean values from the duplicate filters after subtraction of the background counts per minute for calf-thymus DNA filters. This background was ~20 cpm. The lines were drawn from a linear regression analysis of the data.

Figure 4. Northern blot analysis of RNA isolated from phage-infected rne+ and rne- host cells. Cells were infected with T4 at 43°C for 6 min before the addition of rifampicin (time 0). Total RNA was isolated at the times shown, and the same RNA preparations were used in A and B.
Equal amounts of RNA (6 µg) were loaded in each lane of the 1% agarose/6% formaldehyde gels and, after electrophoresis, were transferred to nylon membranes by electroblotting (A) or capillary transfer (B). The probes used in A and B, respectively, were 32P-labeled gene 43 plasmid DNA (Table 1) and gene 32 RNA from plasmid pTAK64 (see Materials and methods).

Multiple transcripts were also observed in Northern blots when the gene 39 and rIIA/rIIB probes were used, and, again, the levels and stability of these various transcripts were higher in the rne- strain (not shown). The observed increases in transcript levels in the rne- strain could reflect either increased message stability or a delay in the infection of this strain, as discussed below. Because of the difficulties in estimating half-lives of messages that are represented by more than a single species by Northern blot analysis, the effect of the rne mutation on the chemical stability of specific, newly synthesized transcripts was examined by hybridization to filter-bound plasmid DNA. In this method, only the stabilities of transcripts that specifically hybridize to the filter-bound DNA are analyzed because pancreatic RNase treatment removes the portions of transcripts that extend beyond the filter-bound probes. Figure 5A shows the results of hybridizing the same RNA preparations used for the experiment shown in Figure 3 to filter-bound gene-specific probes (Table 1). The approximate half-lives of the transcripts estimated from the slopes of the curves are shown. There is a 24-fold stabilization of gene 37 transcripts and a similar 19-fold stabilization of gene 23 transcripts in the rne- strain. The gene 39 transcripts are stabilized almost 6-fold, and the smallest effect was observed with the gene 43 transcript, which was stabilized 2.5-fold in the mutant strain. This small but significant effect on the gene 43 transcript agrees with the estimated effect of the mutation on the stability of this transcript from the Northern blot analysis (Fig. 4A). The degree of chemical stabilization of newly synthesized transcripts in the RNase E-deficient strain therefore varies with different mRNAs.

To determine whether the effect of the rne mutation depends on the stage of phage development (i.e., the length of time after infection at which the pulse chase was carried out), RNA samples labeled at an earlier time in infection than those in Figure 5A were used for the hybridizations shown in Figure 5B. With a gene 32 probe, there is a 28-fold stabilization of the mRNAs in the rne- strain compared with the rne+ strain. The rIIA/rIIB transcripts are stabilized almost 4-fold in the rne- strain, and a similar effect is observed for gene 39 transcripts. The gene 43 transcript is only stabilized by 1.5-fold in the mutant strain. For the gene 39 and gene 43 probes, the extent of the stabilization is similar to that observed when the RNA was labeled at a later time in infection (cf. Fig. 5A). At least for these species, the stage of the phage infection does not appear to be a variable in the chemical stabilization of T4 mRNAs in the rne mutant strain. The estimated half-lives of between 3 and 4 min for specific T4 messages in the rne+ strain (Fig. 5) are very similar to the estimated chemical half-life of 4 min for total T4 mRNA (Fig. 3).
The exception was the gene 43 mRNA, which had a half-life of 6-6.5 min in the rne+ strain, and it is interesting to note that the stability of this mRNA was also the least affected by the rne mutation. The chemical half-life of 4.5 min for the gene 32 mRNA in the rne+ strain is surprisingly short. The functional half-life of gene 32 mRNA had been estimated previously as 15-30 min at 30°C (Russel et al. 1976). However, we have found that the chemical stability of gene 32 mRNA is highly temperature-dependent (A.J. Carpousis, unpubl.): the message is considerably less stable at 42°C than at 30°C.

Figure 5. The chemical decay of specific pulse-labeled transcripts in T4-infected rne+ and rne- E. coli at 43°C. The experimental conditions are the same as in the legend to Fig. 3, except that in B, the labeling was begun at 3.5 min after infection and the chase was added at 6.5 min. The specific filter-bound T4 plasmid DNAs (Table 1) were hybridized with [3H]uridine-labeled RNA from the infected rne+ (O) and rne- (●) cells. pBR322 DNA filters were included in each hybridization, and this background (~20-70 cpm) was subtracted. Samples with <20 cpm after correction for background were not included.

It is evident from Figure 5 that there are differences in the initial levels of newly synthesized transcripts in the two host strains, depending on the transcripts examined. The observed differences in levels are consistent with a delay in the infection of the rne- cells or a reduced rate of phage development in this host. The stage of phage development at which transcript levels are measured is important because transcription of "early" genes is turned off as "late" gene transcription becomes predominant. In Figure 5A, a possible explanation for the higher starting levels of the late transcripts (genes 37 and 23) in the rne+ strain is that the infection is more advanced than in the rne- strain, and for this reason as well, the early transcripts (genes 39 and 43) are already decreasing in the rne+ strain compared with the rne- strain. Even at the earlier labeling time used in Figure 5B, the early transcription (genes rIIA/rIIB, 39 and 43) is already being reduced in the rne+ strain. Gene 32 transcripts are synthesized throughout infection (Belin et al. 1987), but the exact contribution of the various early and late transcripts to the overall transcription rates here is not known. As described above, many transcripts showed increased starting levels in the rne- strain by Northern blot analysis. For the early transcripts, this effect could again be explained by the delay in the infection of the rne- strain, but it could also be due, in part, to the increased stabilities of the messages. However, there may not be a simple relationship between mRNA stability and mRNA levels in a T4 infection, because it is unlikely that a steady-state balance between synthesis and decay is achieved during the infectious cycle. Regardless of differences in the stage of infection, the results shown in Figure 5 clearly demonstrate that there are significant differences in the chemical stabilities of specific T4 mRNAs in the rne+ and rne- cells.

The number of phage progeny produced per infected bacterium is significantly reduced in the rne- strain

The synthesis of T4 proteins is slightly delayed in the rne- host (Mudd et al. 1988),
and densitometric scans of the gene 34, 7, and 37 tail fiber structural proteins indicate that the rate of synthesis of these proteins is reduced 1.3-fold compared to that in the rne+ strain (not shown). We were therefore interested in determining the effect of the rne mutation on the production of viable progeny phage. In a typical wild-type phage infection at 37°C, ~200 phage progeny per bacterium can be released upon spontaneous lysis of the infected cells (Guttman and Kutter 1983). Figure 6 shows that in a wild-type infection of the rne+ strain at 43°C, the number of viable progeny phage produced per bacterium is ~250. However, in the rne- strain, there is a delay of several minutes, a slower rate of progeny production, and a reduction in final phage yield of ~50%. In a comparable experiment at 30°C, there were no significant differences in the number or timing of phage progeny produced (not shown). The observed differences at 43°C therefore correlate with the inactivation of RNase E.

Table 1 note: The vector used for all of the plasmid constructs was pBR322. The plasmid pSP64TAK, used as the template for the preparation of the cRNA probe, contains the gene 32 leader region (from -343 to +3) cloned in the SP6 vector (Belin et al. 1987).

Discussion

Inactivation of host RNase E in T4-infected cells results in marked stabilization of many T4 mRNAs, both chemically and functionally. This suggests that the decay of these messages is mediated by RNase E. The RNase E deficiency also resulted in a significant reduction in the number of phage progeny produced in each infected bacterium. This result was expected because we observed reduced levels of synthesis of some late proteins, which are required for assembling new phage, in the rne mutant strain. The deleterious effect of the rne mutation on the production of progeny phage may be linked to the observed effects on mRNA stability. For example, reduced degradation of early T4 messages could interfere with late gene expression. Even small alterations in the level of expression of certain T4 genes could have a significant effect on the production of viable phage.

The simplest interpretation of our results is that RNase E acts directly in the degradation of T4 mRNAs. Nevertheless, it is possible that the effect is indirect. Some RNA species that accumulate in vivo in the RNase E-deficient strain are apparently not processed in vitro by partially purified RNase E (Pragai and Apirion 1982; Gurevitz et al. 1983). Although there are several possible explanations of these results, Apirion and collaborators prefer models in which RNase E is part of a processing enzyme complex and the RNase E mutation leads to disruption of the efficient function of other nucleases in the complex. Regardless of whether RNase E acts directly or indirectly, this is the first demonstration of a role for RNase E in mRNA degradation, and RNase E is the first endonuclease identified as being involved in mRNA functional decay.

In our experiments, the RNase E temperature-sensitive enzyme was inactivated by incubating the bacteria at 43°C for 10 min prior to infection. It is unlikely, however, that the increased stability of the T4 phage mRNA in these cells was due to an indirect effect of RNase E inactivation on E. coli gene expression. Infection of the rne temperature-sensitive host at 30°C results in the shutoff of host gene expression, yet subsequent inactivation of RNase E by a shift to 43°C still leads to stabilization of T4 mRNAs (not shown).
Because the effect of RNase E inactivation on T4 message stability was apparent within a few minutes of the shift to the nonpermissive temperature, the activity of any host or phage mediator would have to be tightly coupled to that of RNase E.

Mechanisms of prokaryotic mRNA decay have been debated extensively (for reviews, see Kennell 1986; King et al. 1986; Brawerman 1987, 1989; Belasco and Higgins 1988). One model for mRNA decay proposes that endonucleases initiate decay by cleaving mRNAs, thereby creating 3' ends that are then processively degraded by 3'→5' exonucleases. Several such events lead to the decay of entire messages (for a recent discussion of this model, see Belasco and Higgins 1988). The endonucleases that initiate this decay have not yet been identified. RNase E could directly affect T4 mRNA decay by cleaving the messages endonucleolytically, thereby creating sites for the entry of 3'→5' exonucleases, which then processively degrade them. This interpretation is supported by our previous studies of RNase E-dependent mRNA processing at the -71 and -1340 sites in the gene 32 transcription unit (Mudd et al. 1988; Carpousis et al. 1989), in which the mRNA upstream of the cleavage site was rapidly degraded. In addition, we have recently identified an RNase E-dependent cleavage site at position +831 within the gene 32 message (A.J. Carpousis, unpubl.), which is one of the messages that is highly stabilized in the RNase E-deficient strain. All three of the RNase E-dependent cleavage sites within the gene 32 transcription unit show some similarity in sequence at the cleavage site (5'-Pu↓A↓U-U-3') and in the potential to form RNA secondary structure just downstream of the site.

Figure 6. Time course of the number of progeny phage produced per infected rne+ and rne- host cell at 43°C. The rne+ and rne- cells were infected with T4 for 3 min, at which time T4 antibody was added to neutralize unadsorbed phage. At 8 min after infection, the cells were diluted 1/4000 in M9S medium to prevent readsorption of any remaining free phage, and the numbers of infected bacteria were titered. At subsequent times after infection, the numbers of progeny phage produced were titered after lysing the cells prematurely with chloroform. Titers were average values from duplicate platings. The number of progeny phage per infected bacterium was calculated by dividing the phage titers at the subsequent time points by the number of infected bacteria at 8 min; this number was plotted against the time after infection.

The RNase E deficiency does not affect T4 message degradation uniformly. The principal decay pathway for messages such as the gene 37, 23, and 32 RNAs appears to be almost completely blocked in the RNase E-deficient strain. However, additional degradation pathways apparently exist because some T4 mRNAs, such as the gene 43 and rIIA/rIIB RNAs, are still rapidly degraded in the RNase E-deficient strain. We have found that E. coli RNase III does not have a significant role in the decay of total T4 mRNAs because the functional stabilities of these mRNAs were very similar in isogenic RNase III+ and RNase III- strains (not shown). Factors that could affect the degree to which mRNA decay is RNase E-dependent include the number of RNase E-sensitive cleavage sites, the degree of ribosomal loading, the susceptibility of the messages to decay mediated by other endonucleases, and the vulnerability of their 3' ends to exonuclease digestion (see Higgins et al. 1988).
We have presented evidence that an E. coli RNA processing enzyme is involved in the degradation of T4 mRNAs, and for many of the mRNAs examined, the activity of this enzyme appears to be the major determinant of their decay. These messages are apparently degraded by use of a pathway that differs from the pathway for degradation of bulk E. coli mRNA, although it is possible that the decay of some E. coli messages is RNase E-mediated. Future studies will be aimed at identifying additional RNase E-mediated cleavage sites and determining whether they have sequence and/or structure in common with the sites already mapped. It will be interesting to discover whether these cleavages are the limiting step in the degradation of the messages that are highly stabilized in RNase E-deficient E. coli.

Materials and methods

Growth of bacterial strains and infection conditions

The conditions used for growth and phage infections of the isogenic rne+ (N3433) and temperature-sensitive rne- (N3431) strains (Goldblum and Apirion 1981) were as described (Mudd et al. 1988). Wild-type phage (T4D+ from the Geneva collection) was used for all infections. Bacteria were grown in M9S medium containing 0.2% casamino acids (Champe and Benzer 1962) at 30°C to 5 × 10^7 cells/ml, centrifuged, and resuspended at 4 × 10^8 cells/ml. After a 10-min preincubation at 43°C, the cells were infected at a multiplicity of 20 phage per bacterium. The percentage of surviving bacteria 2 min after infection was generally ~20% for the rne+ strain and 30% for the rne- strain.

Pulse-labeling of proteins and RNA isolation

The methods for labeling proteins with 14C-labeled amino acids and for RNA isolation have been described (Mudd et al. 1988).

[3H]uracil labeling of RNA and TCA precipitation

One-milliliter samples of cultures were removed and pulse-labeled with 20 µCi [5-3H]uracil (NEN; 26 Ci/mmole) for 2 min at 43°C. Duplicate 0.25-ml samples of the labeled culture were then added to 2.5 ml of ice-cold 5% TCA, which lyses the bacteria. Carrier bacteria were mixed with the TCA samples, which were then filtered onto GF/C glass fiber filters (Whatman) and washed twice with 5% TCA and once with absolute ethanol. The filters were dried and counted in scintillant (toluene/PPO/POPOP).

[3H]uridine pulse-chase labeling of RNA and hybridization to DNA filters

Cells were labeled with 40 µCi/ml [5-3H]uridine (Amersham; 27 Ci/mmole) for 3 min, followed by a 3-min chase with a mix of excess cold uridine (100 µg/ml) and rifampicin (200 µg/ml). RNA was isolated from 2-ml samples at subsequent time points. The specific activity before rifampicin treatment was ~40,000-60,000 cpm/µg RNA. DNA filters were prepared and hybridized, without formamide, by using 5 µg of labeled RNA, as described (Young et al. 1980). The amounts of DNA bound per whole filter were 16 µg of calf thymus, 32 µg of whole T4, and 3.2 µg of plasmid. One-quarter of each filter was used per hybridization. After digestion with pancreatic RNase and washes to remove nonhybridized RNA, the filter quarters were dried and counted in scintillant. The number of counts per minute that hybridized depended on the concentration of labeled RNA used, showing that the T4 DNA was in excess to the mRNAs (not shown).
Northern blot analysis of RNA

RNA samples were separated by 1% agarose gel electrophoresis in the presence of formamide/formaldehyde, transferred to GeneScreen nylon membranes (NEN), stained with methylene blue to verify the quality of the transfer, and hybridized with DNA or RNA probes following the procedure of Khandjian (1986). The DNA probe was prepared by nick-translation of plasmid DNA (Rigby et al. 1977), and the RNA probe was prepared in vitro from the plasmid template pSP64TAK, which contains the gene 32 leader region (from -343 to +3) cloned in the SP6 vector (Belin et al. 1987). The endogenous E. coli 23S and 16S rRNA, as well as labeled RNAs of known size prepared in vitro using the T7 system, were used as size standards.

Half-life estimations from autoradiographs

Autoradiographs were scanned with a GS300 scanning densitometer (Hoefer Scientific Instruments [HSI]), and the appropriate peak areas, or total peak areas from scans of whole lanes, were determined by integration with the HSI program GS-370. Half-lives were estimated from semilog plots of peak areas versus the time after rifampicin addition.

Estimation of the number of phage progeny released per infected bacterium

For experimental details on the estimation of the number of phage progeny released, see the legend to Figure 6. Equal volumes of bacteria and phage, both at 2 × 10^8/ml, were mixed at 43°C. Three minutes after the T4 antibody treatment, the titer of unadsorbed phage was reduced 100-fold. The number of infected bacteria immediately after dilution was ~2 × 10^4/ml. Phage and infected bacteria were titered on the E. coli strain S/6 by using standard techniques (Steinberg and Edgar 1962).
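The progeny-per-bacterium numbers plotted in Figure 6 are simple ratios of the titers described above. A minimal Python sketch of that bookkeeping, with placeholder titers rather than the measured values:

    # Hypothetical titers (per ml) after the dilution described above.
    infected_bacteria_at_8min = 2.0e4   # infected-cell titer at 8 min

    # Progeny phage titers (chloroform-lysed samples), duplicate-plating means.
    progeny_titers = {25: 1.0e5, 40: 2.5e6, 60: 5.0e6}  # time (min) -> phage/ml

    for t, titer in sorted(progeny_titers.items()):
        per_cell = titer / infected_bacteria_at_8min
        print(f"{t:3d} min: {per_cell:8.1f} progeny per infected bacterium")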
2018-04-03T05:29:18.375Z
1990-05-01T00:00:00.000
{ "year": 1990, "sha1": "e035f28bdb402e5e35cc44761ad455c68a6a36b8", "oa_license": null, "oa_url": "http://genesdev.cshlp.org/content/4/5/873.full.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "76fc71557502a2ca25bc99fd9c045d4dd050af9b", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
238369446
pes2o/s2orc
v3-fos-license
Validity and Reliability of the Malaysian Perceived Stress Scale (PSS) using the Rasch Measurement Model

This study was conducted to produce empirical evidence of the validity and reliability of the items of the Perceived Stress Scale survey questionnaire. The 14-item Perceived Stress Scale (PSS-14) is one of the most widely used psychological instruments for measuring stress perception in practice and research, but it has sparked some controversy regarding its factor structure. Furthermore, no study has been conducted to date using a sample of 'houseman' medical officers to test the reliability and validity of this instrument. Rasch model analysis, aided by Winsteps software Version 3.69.1.11, was used to examine item functioning in terms of item and respondent reliability and separation, item polarity, item fit to the measured constructs, and standardized residual correlation values. The questionnaire was distributed to 42 'houseman' officers who work in a hospital in Selangor, Malaysia. The findings of this study support the use of the PSS-14 as a reliable and valid instrument to assess perceived stress in a sample of 'houseman' medical officers in Malaysia.

Introduction

The Rasch measurement model has previously been used to show that a learning transfer questionnaire has an adequate level of validity and reliability before being used to develop a model of learning transfer. This is because the Rasch measurement model is a solution to the issue of validity: it provides useful statistics and offers a tremendous opportunity to probe validity (Bond & Fox, 2015). In addition, the application of the Rasch measurement model in a study can facilitate and produce more efficient, reliable and valid measurement while increasing convenience to users (Abdul Aziz et al., 2007). A study of the validity and reliability of an instrument is very important for maintaining the accuracy of the questionnaire (Ariffin et al., 2010). This is necessary to ensure that the questionnaire measures what it is intended to measure consistently and accurately. According to Howard and Braun (1988), consistency means that when the same item is tested several times on the same subject at different time intervals, the resulting scores are approximately the same. In conclusion, reliability is likely to support consistent validity. This study was performed to produce empirical evidence of the validity and reliability of the Perceived Stress Scale questionnaire using the Rasch measurement model, because the Rasch measurement model can test the consistency of interpretation of constructs, the reliability of the items and the respondents, and the accuracy of the test.

The Perceived Stress Scale has been used by many researchers around the world with groups such as the general public, school students, patients, seniors, athletes and teenagers (refer to Table 1). Out of the 19 research works listed in Table 1, 15 used SPSS to analyze the data obtained, while four studies used the Rasch model. This indicates that SPSS is still more widely used in research than the Rasch model for determining an instrument's validity and reliability. The table also shows that the Perceived Stress Scale has been administered to adults, adolescents, seniors, medical students, nurses, teachers, health 'frontliners', university students, and pregnant women. Hence, in this study, the researchers use 'houseman' medical doctors as the study respondents.
In fact, the researchers also use the Rasch model as the statistical mechanism to determine the validity and reliability of the PSS-14 items. Therefore, the objectives of this analysis are to: (a) test the reliability and separation indices of the items and the respondents; (b) detect the polarity of items measuring the constructs; and (c) test the fit of the instrument items. In other words, this paper discusses the findings of a Perceived Stress Scale (PSS-14) validation study in the Malaysian context. It is not only meant to establish the validity and reliability of the instrument in a Malaysian population of 'houseman' medical doctors, but, most importantly, to add to the body of research that uses the Rasch measurement model as a statistical analysis mechanism.

Methodology

The Perceived Stress Scale (PSS) was developed by Cohen et al. (1983) in order to measure the extent to which situations in one's life are appraised as stressful. Several alternate versions of the PSS exist, which vary in the number of items used to describe perceived stress.

Results

Through the Rasch measurement model approach, the researchers examined item functioning in terms of: (a) item and respondent reliability and separation; (b) detection of the polarity of items measuring the constructs based on PTMEA CORR; and (c) item fit to the measured constructs.

Reliability and Separation of Items and Respondents

Based on the Rasch measurement model approach, acceptable Cronbach's Alpha (α) reliability lies between 0.71 and 0.99, the best level (71%-99%). The pilot study found a reliability, based on Cronbach's Alpha (KR-20), of 0.88. This indicates that the instrument is reliable and suitable for the specified sample; the instrument is in very good condition, works effectively with a high level of consistency, and can thus be used in the actual research. The analysis was also performed on the instrument as a whole, namely the reliability and separation of the items and the respondents. Table 2 shows the item reliability and separation: the item reliability was 0.95, while the item separation was 4.50. The item reliability value of 0.95 indicates that the items are in good condition and acceptable (Bond & Fox, 2007), and, according to Linacre (2004), a separation index above 2.0 is good. Meanwhile, the respondent reliability is 0.88 and the respondent separation is 2.46, which shows that respondent reliability is very high and very good: Bond and Fox (2007) describe a reliability of more than 0.8 as good and acceptable, and the respondent separation exceeds the value of 2.0 that Linacre (2004) describes as indicating good separation across the range of item difficulty.

Item Polarity by PTMEA CORR Value

The Point Measure Correlation (PTMEA CORR) is used to detect item polarity, testing the extent to which the construct achieves its measurement goal. If the PTMEA CORR value is positive (+), the item measures the construct it is intended to measure (Bond & Fox, 2007). On the other hand, if the value is negative (-), the item does not measure the intended construct; it then needs to be improved or dropped, because the item does not address the intended question (lacks focus) or is difficult for respondents to answer.
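Winsteps reports PTMEA CORR directly; outside Winsteps, a rough stand-in is to correlate each item's scores with the rest-score (the total of the remaining items) across respondents. The Python sketch below uses raw rest-scores rather than true Rasch person measures, so it only approximates the Winsteps statistic, and the response matrix is invented for illustration:

    import numpy as np

    def point_measure_corr(responses):
        """Approximate PTMEA CORR: Pearson correlation of each item's scores
        with the rest-score (sum of all other items) across respondents.
        responses: (n_persons, n_items) array of Likert scores."""
        totals = responses.sum(axis=1)
        corrs = []
        for i in range(responses.shape[1]):
            rest = totals - responses[:, i]
            corrs.append(np.corrcoef(responses[:, i], rest)[0, 1])
        return np.array(corrs)

    rng = np.random.default_rng(0)
    data = rng.integers(0, 5, size=(42, 14))   # 42 respondents x 14 PSS items
    ptmea = point_measure_corr(data)
    flagged = [f"S{i + 1}" for i, r in enumerate(ptmea) if r <= 0]
    print("items with non-positive PTMEA CORR:", flagged or "none")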
Based on Table 3, all items obtained positive PTMEA CORR values, which indicates that the items measure the constructs they are intended to measure (Bond & Fox, 2007), whereas a negative PTMEA CORR would indicate an item requiring attention, repair or removal. Although all PTMEA CORR values are positive, items S4 (0.43), S10 (0.46) and S12 (0.41) have the lowest positive values, so item purification should be considered. Nevertheless, these findings show that the positive items move in one direction with the construct, are able to measure it, and do not conflict with the construct being measured. If the PTMEA CORR is high, the item is able to distinguish between respondents' abilities.

Item Fit Measuring the Constructs

Item fit to the constructs can be assessed through the infit and outfit Mean Square (MNSQ). According to Bond and Fox (2007), the outfit and infit MNSQ should be in the range of 0.6 to 1.4 to ensure the items are suitable for measuring the constructs. If the infit or outfit MNSQ value is more than 1.4 logits, the item is confusing; on the other hand, if the MNSQ value is less than 0.6 logits, the item is too easily anticipated by the respondents (Linacre, 2021). Besides that, the outfit and infit ZSTD values should be within -2 to +2 (Bond & Fox, 2007), but if the outfit and infit MNSQ are acceptable, the ZSTD index can be ignored (Linacre, 2007). If these conditions are not met, the item can be considered for removal or purification. Table 4 shows the misfit order statistics: all 14 items fall within the 0.6 to 1.4 range. From this diagnosis, all 14 items could be retained, with purification guided by the needs of researchers and experts.
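The 0.6-1.4 screening rule is easy to apply once a Rasch model has been fitted. The Python sketch below assumes that the model's expected scores and response variances for each person-item cell are already available (e.g., exported from Winsteps or another Rasch package); it demonstrates only the MNSQ bookkeeping and the acceptance rule, not Rasch estimation itself, and the numbers are fabricated:

    import numpy as np

    def mnsq_fit(observed, expected, variance):
        """Infit/outfit mean-square statistics per item.
        observed, expected, variance are (n_persons, n_items) arrays;
        expected and variance come from an already-fitted Rasch model."""
        sq_resid = (observed - expected) ** 2
        outfit = (sq_resid / variance).mean(axis=0)          # unweighted
        infit = sq_resid.sum(axis=0) / variance.sum(axis=0)  # info-weighted
        return infit, outfit

    def acceptable(infit, outfit, lo=0.6, hi=1.4):
        return (infit >= lo) & (infit <= hi) & (outfit >= lo) & (outfit <= hi)

    # Toy example: 3 persons x 2 items with fabricated model expectations.
    obs = np.array([[3.0, 1.0], [4.0, 2.0], [2.0, 2.0]])
    exp = np.array([[2.0, 1.0], [3.0, 2.5], [2.5, 1.0]])
    var = np.full((3, 2), 0.8)
    infit, outfit = mnsq_fit(obs, exp, var)
    print("infit:", infit.round(2), "outfit:", outfit.round(2))
    print("item within 0.6-1.4 on both:", acceptable(infit, outfit))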
2021-10-07T00:14:59.523Z
2021-08-12T00:00:00.000
{ "year": 2021, "sha1": "2d16e420949bd324d04ab8f44ec3fcc82b187bf4", "oa_license": "CCBY", "oa_url": "https://hrmars.com/papers_submitted/10770/validity-and-reliability-of-the-malaysian-perceived-stress-scale-pss-using-rasch-measurement-model.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "0f02f2669b7d628cf23264cde1ca1dfd382f6a13", "s2fieldsofstudy": [ "Psychology", "Business" ], "extfieldsofstudy": [ "Psychology" ] }
51942279
pes2o/s2orc
v3-fos-license
Analysis of disease comorbidity patterns in a large-scale China population

Background: Disease comorbidity is common and has significant implications for disease progression and management. We aim to detect the general disease comorbidity patterns in Chinese populations using a large-scale clinical data set.

Methods: We extracted the diseases from a large-scale anonymized data set derived from 8,572,137 inpatients in 453 hospitals across China. We built a Disease Comorbidity Network (DCN) using correlation analysis and detected the topological patterns of disease comorbidity using both complex network and data mining methods. The comorbidity patterns were further validated by shared molecular mechanisms using disease-gene associations and pathways. To predict disease occurrence over whole disease progressions, we applied four machine learning methods to model the disease trajectories of patients.

Results: We obtained the DCN with 5702 nodes and 258,535 edges, which shows a power law distribution of degree and weight. This further indicates high heterogeneity of comorbidities for different diseases, and we found that the DCN is a hierarchical modular network with community structures whose communities contain both homogeneous and heterogeneous disease categories. Furthermore, in line with previous work on US and European populations, we found that disease comorbidities have shared underlying molecular mechanisms. Taking hypertension and psychiatric disease as instances, we used four classification methods to predict disease occurrence from the comorbid disease trajectories and obtained acceptable performance; in particular, random forest achieved the best overall performance (F1-score 0.6689 for hypertension and 0.6802 for psychiatric disease).

Conclusions: Our study indicates that disease comorbidity is significant and valuable for understanding disease incidences and their interactions in real-world populations, which will provide important insights for detecting patterns in disease classification, diagnosis and prognosis.

Introduction

Disease comorbidity reflects shared molecular mechanisms or environmental factors between diseases, which is important for improving the knowledge and management of diseases in real-world clinical settings [1-3]. It has become a major problem in treatment [4,5], because patients with comorbid diseases have a higher probability of hospitalization and mortality [6,7]. Furthermore, treating patients with multiple diseases is complicated and time-consuming, as it requires consideration of longer hospital stays and more expert consultations [8,9]. For example, when a patient suffers from multiple diseases, treatment is particularly complicated [10] because it involves uncertainty in diagnosis and treatment. If the patient takes multiple drugs at the same time, the popular multi-drug therapies may cause serious side effects due to drug interactions [11,12]. Unfortunately, the patterns and the underlying mechanisms of disease comorbidity are far from fully elucidated [13]. Therefore, disease comorbidity has recently become a hot research topic, both from clinical observations and from molecular network mechanisms. Related studies have explained the mechanisms of the comorbidities of specific diseases. For example, studies have been conducted on the comorbidities of diabetes in adults [14].
Also, some related studies focus on the relationships between diseases and genes, using Relative Risk and Φ-correlation to measure the correlation between two diseases [15,16]. There also exists a complex-network study covering several diseases, with 613 nodes and 3277 edges in its network, built from 3,354,043 patients [17]. However, in most cases, these studies are derived from data in Europe and the United States. In addition, it is interesting that machine learning methods are useful for predicting the patterns of biomedical entities, such as genes and proteins [18-20], when utilizing the meaningful features involved in biomedical data. Here, we utilized a large-scale clinical data set and conducted our research across the full range of diseases in the China population. We built a large-scale disease comorbidity network (DCN) and obtained the topological properties and their relationships using complex network measurements. In addition, we validated the shared molecular mechanisms of the clinical disease comorbidities and investigated the possibility of predicting disease occurrence from disease trajectories using machine learning methods. The results have implications for the patterns of disease comorbidity and would be helpful for managing chronic disease conditions in clinical settings.

Data sources

Our main data were derived from the hospital discharge data held in the Data Center of the China Academy of Chinese Medical Sciences, which includes only two attributes, namely diagnostic codes and the encounter sequential identifiers of patients. Our study therefore strictly preserved the privacy of patients. After removing the records with missing diagnosis codes, we obtained 8,572,137 high-quality clinical records from 453 different hospitals in China. The diagnoses were recorded as ICD-10 codes (the 10th revision of the International Statistical Classification of Diseases [21]), and we processed them in the form of four-digit ICD-10 codes for further analysis.

Disease-gene associations were derived from the MalaCards database [22], which resulted in 64,245 disease-gene associations with 3193 diseases and 8616 genes. Meanwhile, we collected the pathway information (including 325 pathways and 7253 genes) from the KEGG database [23]. We further obtained 175,167 disease-pathway association records linking 3118 diseases and 324 pathways by combining the above two data sets.

Correlation analysis

We used Relative Risk (RR) and Φ-correlation [15,16] to measure the correlations between disease pairs. When two diseases d_i and d_j co-occur more frequently than expected by chance, we have RR_ij > 1 and Φ_ij > 0. The RR of observing a pair of diseases d_i and d_j affecting the same patient is given by

RR_ij = C_ij N / (P_i P_j),

where C_ij is the number of patients affected by both diseases, N is the total number of patients in the population, and P_i and P_j are the prevalences (patient counts) of diseases i and j. The Φ-correlation can be expressed as

Φ_ij = (C_ij N - P_i P_j) / √(P_i P_j (N - P_i)(N - P_j)).

We constructed the DCN from the disease pairs with RR > 1.0 and Φ > 0.0, and the weights of the disease pairs (links) were set to the co-occurrence counts of the corresponding diseases.
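Both screening statistics defined above are simple functions of the co-occurrence count and the two prevalences. A small Python sketch, with illustrative counts rather than the study's actual tallies:

    from math import sqrt

    def rr_phi(c_ij, p_i, p_j, n):
        """Relative risk and phi-correlation for one disease pair.
        c_ij: patients with both diseases; p_i, p_j: patients with each
        disease (prevalence counts); n: total patients."""
        rr = c_ij * n / (p_i * p_j)
        phi = (c_ij * n - p_i * p_j) / sqrt(p_i * p_j * (n - p_i) * (n - p_j))
        return rr, phi

    # Illustrative (not the study's) prevalence counts for one pair:
    rr, phi = rr_phi(c_ij=475_649, p_i=1_500_000, p_j=1_100_000, n=8_572_137)
    print(f"RR = {rr:.2f}, phi = {phi:.4f}")
    # A pair enters the DCN only if co-occurrence > 5, RR > 1.0 and phi > 0.0.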
Network analysis

We constructed the DCN with nodes for the diseases in the comorbidity patterns extracted above. When two diseases co-occur in a patient, there is an edge between them. The weight of an edge is the co-occurrence count, which represents the strength of the relationship between the two diseases; the weights of disease pairs whose diseases co-occur frequently are therefore large. We used four topological measurements, namely degree, betweenness centrality (BC), clustering coefficient (CC1) and closeness centrality (CC2), to evaluate the centrality of nodes in the network. Diseases with larger degree have more relationships with other diseases in the network [23]. BC reflects the diversity of disease connections and the complexity of the disease. CC1 is used to measure the closeness of a node's neighbors to each other [24]: if disease d1 interacts with disease d2 and disease d2 interacts with disease d3, the possibility of d1 interacting with d3 is also high. CC2 is an index based on the distribution of single-source shortest distances from a node, which describes the importance of the node's position in the network. However, basic topological properties cannot fully capture the characteristics of the DCN. For example, the degree of a node focuses only on first-order connected nodes and ignores relationships beyond the neighboring nodes, and CC1 considers the closeness of adjacent nodes but ignores their number. Therefore, we calculated the correlations between pairs of topological measurements to identify the coupling and hierarchical patterns underlying the DCN.

Classification methods

It is well recognized that the dynamic networks of disease comorbidities contribute to patient outcomes [15,16]. Here, we investigated the feasibility of predicting disease occurrence (e.g. hypertension and psychiatric diseases) based on the comorbid trajectories of patients, using four machine learning algorithms, namely Logistic Regression (LR), SVM, Random Forest (RF) and Neural Network (NN). The main framework, including the preprocessing of the data set, is depicted in Fig. 1. We curated patient cases that have at least two inpatient encounters. For a particular disease diagnosed at a specific encounter of a given patient, we then considered the past disease history as the predictor variables for that disease. In addition, we randomly selected a set of negative samples into the benchmark for the classification methods. The main steps of the disease prediction task are as follows. (a) We extracted a total of 427,939 visits from the database based on patient identifiers, covering the whole comorbid trajectory of each patient. (b) We transformed the data records into data sets with features and classification labels: diseases that the patient had in previous visits were taken as the features (excluding the target disease), and diseases that the patient had in the current visit determined the classification label; to predict the occurrence of a specific target disease, the label was set to 1 if the target disease appears and 0 otherwise. (c) We trained the classification models with the preprocessed data. (d) We validated the classification models (using 10-fold cross-validation) and obtained the significantly associated disease risk factors for a given disease. (e) We used the classification models to predict the disease risks.

Basic properties of the disease comorbidity network

We constructed the DCN with diseases whose co-occurrence > 5, RR > 1.0 and Φ-correlation > 0.0. The comorbid disease pairs retained by these two correlation filters show clinically meaningful relationships. For example, we found that the RR and Φ for hypertension and atherosclerotic heart disease were 2.53 and 0.2760, respectively.
In contrast, the RR and Φ for hybrid asthma and atherosclerotic heart disease were only 1.3368 and 0.0002, respectively. The DCN has 5702 nodes and 258,535 edges, with average degree 90.717 (see Fig. 2a for the degree distribution) and average edge weight 12,904.494 (see Fig. 2b for the weight distribution).

Fig. 1 The framework to predict disease occurrence using the comorbid trajectories of patients

In addition, the average path length is 2.528 and the average CC1 is 0.629 (see Fig. 2c for the CC1 distribution), which indicates that the DCN is a highly clustered network, with the neighbors of a disease closely connected. The power law distributions of degree and weight (Fig. 2a and Fig. 2b) show that the DCN is a scale-free network [25], which means that some diseases (e.g. hypertension, atherosclerotic heart disease) have very high comorbidity in the China population. We obtained three lists of the top 10 diseases ranked by degree, betweenness centrality and CC1 (Fig. 2f). They show that hypertension, anaemia, other disorders of lung and other disorders of glycoprotein metabolism are the top 4 diseases included in all these rank lists.

Hierarchical modular structures of the disease comorbidity network

To identify more elucidated patterns in the DCN, we calculated the correlations between several pairs of network topological measurements (Fig. 3a-f). We found a negative correlation between degree and CC1 (Pearson correlation coefficient (PCC) = -0.398, see Fig. 3a) in the DCN, which indicates that the DCN is a hierarchical modular network [26]. Consistently, we found a negative correlation between CC1 and CC2 (PCC = -0.155, see Fig. 3b). These two results show that in the DCN, the neighbors of diseases located in the center of the network (from which other nodes are easier to reach) have large diversity, and that diseases with lower CC2 tend to occur simultaneously with diseases in the same module. Furthermore, the positive correlation between CC2 and degree (PCC = 0.596, see Fig. 3c) indicates that the data are reliable, because both degree and closeness centrality reflect the centrality of a node. BC can reflect the diversity of a disease's connections. The negative correlation between BC and CC1 (PCC = -0.181, see Fig. 3f) shows that the neighbors of a disease with large CC1 are not connected as closely as those of a hub node. For example, as a hub node in the DCN, hypertension has high BC and degree (BC = 0.093, degree = 1926), which reflects its diverse mechanisms and comorbid phenotypes; moreover, the relationships between its neighbors are sparse (CC1 = 0.051), which indicates that there exist potential subtypes of hypertension. For disorders of choroid (H31.8), the BC is 0; it has many fewer neighbors (degree = 12) but is more closely related to them than hypertension is to its neighbors (CC1 = 1). That is to say, this disease has few comorbid diseases, but the relationships among its comorbid diseases are strong.

Disease comorbidity communities

To identify the disease comorbidity groups in the DCN, we applied the BGLL community detection method [27], which resulted in 10 communities with denser comorbidity links between diseases than expected at random (see Fig. 3g-h). There are both homogeneous and heterogeneous comorbidity diseases in the same communities. Meanwhile, there exist branching relationships between categories.
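Measurements of this kind are routine with a general-purpose graph library. The Python sketch below builds a toy weighted comorbidity graph and computes the four centralities plus communities; networkx's louvain_communities implements the Louvain algorithm, which is the BGLL method cited above. The ICD-10 code assignments and the two smaller edge weights are our own illustrative choices, not the study's data:

    import networkx as nx

    # Toy comorbidity graph: nodes are ICD-10 codes, weights co-occurrences.
    G = nx.Graph()
    G.add_weighted_edges_from([
        ("I10", "I25.1", 475_649),  # hypertension - atherosclerotic heart dis.
        ("I10", "E11", 383_436),    # hypertension - diabetes
        ("I10", "I63", 367_144),    # hypertension - cerebral infarction
        ("E11", "H25", 12_000),     # diabetes - senile cataract (invented)
        ("H25", "H26", 30_000),     # cataract subtypes (invented)
    ])

    degree = dict(G.degree())              # number of comorbid partners
    bc = nx.betweenness_centrality(G)      # diversity of connections
    cc1 = nx.clustering(G)                 # cohesion of a node's neighborhood
    cc2 = nx.closeness_centrality(G)       # positional centrality

    # Louvain (BGLL) community detection on the weighted graph.
    communities = nx.community.louvain_communities(G, weight="weight", seed=1)
    print("hub disease:", max(degree, key=degree.get))
    print("communities:", [sorted(c) for c in communities])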
For example, one specific disease comorbidity community (see Fig. 3h) includes 157 (74.8%) eye-related diseases, which are caused by cataracts (H25-H26), and also contains 53 (25.2%) diseases from other categories. Ocular comorbid diseases are common in people with cataracts in real-world clinical settings [28]. This would be insightful for the refinement of disease classification.

Fig. 3 The relationship between topological properties and the network structure. a Degree and CC1; b CC2 and CC1; c Degree and CC2; d BC and CC2; e Degree and BC; f CC1 and BC; g Modules in the network; h One specific disease comorbidity module in the network

We found several common disease comorbidity patterns among the 5702 diseases, such as diabetes and obesity [29]. Hypertension occurs most frequently in the DCN. It has significant disease comorbidity patterns with arteriosclerotic heart disease (RR = 2.53, co-occurrence = 475,649), diabetes (RR = 2.56, co-occurrence = 383,436), cerebral infarction (RR = 2.70, co-occurrence = 367,144), hyperlipidemia (RR = 2.24, co-occurrence = 205,967) and heart failure (RR = 5.97, co-occurrence = 201,495). This is consistent with the high prevalence of hypertension, which can lead to a variety of complications (e.g. cardiovascular disease [30,31], diabetes [32,33], renal failure [34] and obesity [35,36]) and cause damage to organs such as the heart, brain and kidneys. It is well known that hypertension is a serious threat to human health, and the treatment of hypertension can reduce the occurrence of cardiovascular disease and alleviate its symptoms. We also found other disease comorbidity patterns, such as Alzheimer's disease and atherosclerotic heart disease, which is supported by the evidence that cardiovascular and arterial disease is considered an important risk factor for Alzheimer's disease [37]. The findings for the relationship between diabetes and senile cataracts are similar. Discovering these disease relationships is beneficial for preventing a concurrent disease upon discovering the primary disease.

Shared molecular mechanisms of disease comorbidities

To validate the correlation between disease comorbidity and underlying shared molecular mechanisms [16] in our data, we calculated the PCC between the number of shared genes and pathways and the strength of disease comorbidity (RR and Φ-correlation) in 258,543 disease pairs. We found that, although the correlation is weak, there does exist a significant positive correlation between comorbid diseases and their underlying molecular mechanisms (Table 1), which indicates that if two diseases share genes or pathways, they will tend to show disease comorbidity. In addition, we observed that the degree of disease comorbidity is higher as the molecular correlation (shared genes and pathways) increases (see Fig. 4a and b).

Fig. 4 The shared molecular mechanisms of disease comorbidity. a The relationship between shared genes and intensity of disease comorbidity; b The relationship between shared pathways and intensity of disease comorbidity; c Disease comorbidity of Alzheimer's disease and arteriosclerotic heart disease

With the increase of molecular correlation, the degree of disease comorbidity gradually increases. Compared with two diseases that do not share genes, the degree of disease comorbidity of diseases sharing more than 20 genes increases nearly fivefold. That is to say, the more genes two diseases share, the more likely a disease comorbidity relationship exists.
As the number of shared pathways increases, the comorbidity relationship becomes stronger. However, this impact is relatively weak, and there is a downward trend in the first two intervals. Therefore, when two diseases have shared genes or pathways, we need to guard against the onset of one disease while treating the other. We further applied two commonly used similarity measures, namely the Jaccard and cosine measures, to identify the relationship between shared genes and pathways: we calculated the similarities and the PCC between them. Their positive correlation (see Table 2) indicates that as the similarity of two diseases increases, the number of shared genes and pathways increases as well. Furthermore, we found several pairs of diseases that not only are correlated at the gene level but also show important disease comorbidity relationships, such as Alzheimer's disease and atherosclerotic heart disease (see Fig. 4c). There is a significant disease comorbidity relationship between them (RR = 2.585, Φ-correlation = 0.017), and they have shared genes (ACE, APOE and NOS3). This shows that the existence of shared genes may lead to the co-occurrence of two diseases and may be a direct reason for their disease comorbidity.

Disease prediction using the comorbid trajectories of patients

To investigate the possibility of using disease comorbid trajectories to predict disease occurrence, we extracted 27,000 cases from our database and generated two benchmark data sets for two disease cases, namely hypertension and psychiatric diseases, to demonstrate the feasibility (see Table 3). Note that the coupled negative records were randomly selected from our database. We applied four machine learning methods (see Table 4 for detailed parameters) to predict disease occurrence according to the previous diseases of a given patient. We found that the prediction results of the four classification models on the two disease data sets (see Table 5) are acceptable. Across the two data sets, LR had the highest accuracy (0.6193 for hypertension and 0.6478 for psychiatric diseases), NN had the lowest accuracy (0.5919 for hypertension and 0.6306 for psychiatric diseases), and RF had the highest recall (0.7534 for hypertension and 0.7358 for psychiatric diseases). Altogether, RF had the best F1-score among the four methods (0.6689 for hypertension and 0.6802 for psychiatric diseases). RF reaches the best result because it classifies samples in a more interpretable way than NN and a more sophisticated way than LR. Also, given the limitations of a simple network architecture and poor interpretability, NN may not be suitable for this task. In addition, we identified the risk diseases that lead to hypertension and psychiatric diseases according to the model coefficients (LR, SVM) and feature importances (RF) (see Table 6). For example, in the RF method, hypertensive heart disease with (congestive) heart failure (I11.0) is one of the risk factors for hypertension: if it appears in a patient, hypertension is likely to appear as well. Previous studies held that hypertension is a common cause of heart failure, and 50% of patients with hypertension may have heart failure as a comorbidity [38]. Also, hypertension may affect the eyes and lead to a series of eye diseases (such as H35.0 and H52.3) [39]. Similarly, as one of the risk factors for psychiatric diseases, palpitations (R00.2) appear frequently under the influence of the side effects of anti-psychotic drugs and the effects of the patients' own heart conditions and disease [40].
For SVM, aortic (valve) stenosis with insufficiency (I35.2) is a risk factor. It frequently appears with hypertension, and several studies have quantified this comorbidity pattern (morbidity = 20%-68% [41,42]). Pulmonary embolism with mention of acute cor pulmonale (I26.0), other specified inflammatory liver diseases (K75.8) and alcoholic liver disease, unspecified (K70.9) are risk factors for psychiatric diseases. Under the influence of anti-psychotic drugs, the burden on the liver increases and liver function deteriorates; however, even without the use of psychotropic drugs, the mood of patients can also lead to liver failure. Therefore, patients with psychiatric diseases are more likely than ordinary patients to suffer from lung disease, liver disease and heart disease complications [43]. Similarly, atherosclerotic heart disease (I25.1), as a common cardiovascular disease [31,32], shows disease comorbidity relationships, as does diabetes [33,34]. In summary, although some evident confounders, such as missed recording of target diseases in clinical settings, could conversely introduce comorbidities induced by the target disease as apparent risk diseases, we obtained acceptable prediction results for the two demonstration diseases. In addition, we found that several common diseases, such as heart failure, cerebral infarction and lung disease, were selected by the three classification methods as the main risk factors for the target disorders (see Table 6). However, the top-ranked predicted risk diseases differed among the three methods, partially due to the mutual dependences between the risk diseases. For example, although the two risk diseases E53.9 (vitamin B deficiency) and H35.0 (a type of retinopathy and retinal disorder), predicted by SVM and LR respectively, are different, they are two well-recognized disorders with physio-pathological associations. Meanwhile, the fact that the methods predict different features also means that they could be combined within more systematic frameworks to obtain improved results in future work.
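To make steps (a)-(e) and the risk-factor extraction concrete, here is a minimal scikit-learn sketch: each visit becomes a binary bag-of-diagnoses vector built from prior visits, the label marks whether the target disease appears at the current visit, and the fitted model's importances rank candidate risk diseases. All feature codes and data below are fabricated for illustration and do not reproduce the study's benchmark:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Fabricated design matrix: rows = visits, columns = diagnoses seen in a
    # patient's previous visits (1 = present), excluding the target disease.
    codes = ["I11.0", "I25.1", "I63", "E11", "H35.0", "R00.2"]
    rng = np.random.default_rng(7)
    X = rng.integers(0, 2, size=(1000, len(codes)))
    # Fabricated labels loosely driven by the first two history columns.
    y = ((X[:, 0] | X[:, 1]) & (rng.random(1000) < 0.8)).astype(int)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("10-fold F1:", cross_val_score(clf, X, y, cv=10, scoring="f1").mean())

    clf.fit(X, y)
    ranked = sorted(zip(codes, clf.feature_importances_), key=lambda kv: -kv[1])
    print("top candidate risk diseases:", ranked[:3])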
The major limitation of our research is that the recording of diseases in clinical data is prone to incomplete diagnoses, because clinical practitioners tend to record the diseases they primarily treat rather than all the diseases of a patient. This in particular introduces confounders into our prediction results and makes them vulnerable. Many factors (such as age, physical condition and treatment methods) affect the occurrence and development of a disease but have not been incorporated into our data set. Moreover, our prediction experiments are limited to classical supervised learning methods, which mainly provide a feasibility demonstration of predicting disease occurrence from comorbid trajectories. In the future, we will develop more dedicated machine learning models with more systematic clinical features, such as deep learning, to obtain more powerful predictors, which might result in practical prediction applications using disease comorbidities.

Conclusion

We constructed a disease comorbidity network derived from millions of electronic medical records with diagnostic codes in China and found interesting topological patterns (e.g., high clustering and hierarchical modularity) in this network. Furthermore, we identified clinically meaningful disease comorbidity communities and revalidated the assumption of shared underlying molecular mechanisms of disease comorbidity. Finally, by formulating the disease comorbid trajectories as a binary classification problem, we investigated the feasibility of predicting disease occurrence using only the temporal relationships between disease phenotypes.
Can patient decision aids reduce decisional conflict in a de-escalation of breast radiotherapy clinical trial? The PRIMETIME Study Within a Trial implemented using a cluster stepped-wedge trial design

Background

For patients with early breast cancer considered at very low risk of local relapse, the risks of radiotherapy may outweigh the benefits. Decisions regarding treatment omission can lead to patient uncertainty (decisional conflict), which may be lessened with patient decision aids (PDA). PRIMETIME (ISRCTN 41579286) is a UK-led biomarker-directed study evaluating omission of adjuvant radiotherapy in breast cancer; an embedded Study Within A Trial (SWAT) investigated whether a PDA reduces decisional conflict, using a cluster stepped-wedge trial design.

Methods

PDA diagrams and a video explaining the risks and benefits of radiotherapy were developed in close collaboration between patient advocates and PRIMETIME trialists. The SWAT used a cluster stepped-wedge trial design, where each cluster represented a radiotherapy centre and its referring peripheral centres. All clusters began in the standard information group (patient information and diagrams) and were randomised to cross over to the enhanced information group (standard information plus video) at 2, 4 or 6 months. The primary endpoint was the decisional conflict scale (0–100, with higher scores indicating greater conflict), which was assessed at the individual participant level. Multilevel mixed effects models used a random effect for cluster and a fixed effect for each step to adjust for calendar time and clustering. Robust standard errors were also adjusted for the clustering effect.

Results

Five hundred and twenty-one evaluable questionnaires were returned from 809 eligible patients (64%) in 24 clusters between April 2018 and October 2019. Mean decisional conflict scores were 10.88 (SD 11.82) in the standard group (N = 184) and 8.99 (SD 11.82) in the enhanced group (N = 337), with no statistically significant difference [mean difference −1.78, 95% CI −3.82 to 0.25, p = 0.09]. Compliance with the patient information and diagrams was high in both groups, although in the enhanced group only 121/337 (36%) reported watching the video.

Conclusion

The low levels of decisional conflict in PRIMETIME are reassuring and may reflect the high-quality information provision, such that not everyone required the video. This reinforces the importance of working with patients as partners in clinical trials, especially in the development of patient-centred information and decision aids.

Introduction

Adjuvant radiotherapy following breast conserving surgery (BCS) plays an important role in the treatment of early breast cancer. The absolute benefit of radiotherapy depends on the individual patient's prognosis, and radiotherapy carries risks. For some patients at very low risk of local relapse, the risks of radiotherapy may outweigh the benefits; for these patients, risk adaptation of treatment to omit radiotherapy may be preferable, a hypothesis under evaluation in several studies [1][2][3][4][5]. Treatment de-escalation may increase patient uncertainty (decisional conflict) in relation to the care pathway, and uncertainty may be increased further if insufficient information is provided. Supplementing standard patient information material with patient decision aids (PDA) has been hypothesised to reduce decisional conflict.
PDA are tools that help patients understand the risks and benefits of treatment, consider the values they place on the risk–benefit ratio and participate with clinicians in deciding between treatment options. Testing the hypothesis that PDA reduce decisional conflict requires evaluation in the context of a clinical trial, for example via a 'Study Within A Trial' (SWAT). A SWAT is a research study embedded within a clinical trial, enabling assessment of different ways of designing, conducting, analysing and evaluating components of the research conduct [6]. PRIMETIME is a UK-led biomarker-directed interventional cohort study aiming to identify a group of breast cancer patients who can safely avoid adjuvant radiotherapy following BCS [1]. A SWAT was conducted within PRIMETIME to identify whether PDA reduced decisional conflict in patients considering treatment de-escalation. In this paper we report the development of the PRIMETIME PDA and the execution of the SWAT.

Context for SWAT

Details of PRIMETIME have been published previously [1]. The biomarker IHC4+C (incorporating Ki-67) is used to determine the patient's recurrence risk [7]. Patients predicted to be at very low risk are directed to avoid radiotherapy, and patients at low, intermediate or high risk are directed to receive radiotherapy as per standard of care [1]. Patients both accepting and declining the recommendation are followed up. Patients were able to consent to the SWAT even if they subsequently declined PRIMETIME.

PDA development

PDA were developed in close collaboration with PRIMETIME patient advocates and designed to be used in conjunction with the patient information sheets. Diagrams were designed to explain the risks and benefits of radiotherapy using natural frequency formats (numerical values expressed as event rates in groups with and without the intervention). They also explained the risks of recurrence in the different risk groups and compared recurrence risk in patients receiving and not receiving radiotherapy in the low-risk group (Fig. 1a, b). The diagrams were designed to be used in the clinic consultation with the healthcare professional and patient present. Building on the written information, it was considered that the risks and benefits of radiotherapy could also be presented in a different format, such as a video to be watched by patients independently. A working group consisting of patient advocates and PRIMETIME Trial Management Group members, together with a series of patient focus groups, established the content to be included in the PDA video. The PDA were designed according to the criteria outlined by the International Patient Decision Aid Standards [8].

SWAT development and execution

All patients being approached for PRIMETIME were eligible for the SWAT; all sites participated. The SWAT was implemented using a cluster stepped-wedge trial design. The stepped-wedge design consists of the sequential implementation of an intervention to participants grouped within clusters over a number of time periods [9]. Cluster randomisation ensured that all patients within a single site received uniform information for specified time periods. The stepped-wedge design enabled all site clusters to receive the intervention sooner than in a parallel-group cluster randomised trial, where all clusters could only be given access to the intervention at the end of the study. Each site cluster was defined as the radiotherapy centre and its non-treating referral sites.
All sites began in the standard group, which included the patient information sheet and diagrams; at pre-specified timepoints, site clusters switched to the enhanced group, which included the patient information sheet, diagrams and video. The intervention (video) pertained to both the cluster and the individual participant level. After patients decided whether or not they wished to participate in PRIMETIME, they were asked to complete a questionnaire (Appendix figures 1 and 2). The questionnaires assessed decisional conflict using a validated tool, and patients were asked to indicate their highest level of education. Of note, the outcome was assessed at the individual patient level. Questionnaires (paper-based) were distributed to patients in the clinic; return of the questionnaire indicated patient consent to the sub-study. Site clusters were allocated, via minimisation, to switch from standard to enhanced information at 2, 4 or 6 months. Minimisation was performed manually using a single balancing factor of prior recruitment to the IMPORT HIGH [10] and/or FAST FORWARD [11] breast radiotherapy trials, as an indication of trial research experience. Minimisation was performed at the Institute of Cancer Research Clinical Trials and Statistics Unit. Each site cluster was informed of its cross-over date via email after its first patient had consented to the SWAT. Access to the video was restricted until 1 week before cross-over, at which point an email containing a web link to the video was sent to the centres, together with DVDs for patients without internet access. The SWAT primary endpoint was the decisional conflict score (0–100, with greater scores indicating more decisional conflict) [12], which was assessed at the individual participant level. Secondary endpoints were acceptance of entry into PRIMETIME and acceptance of the recommended treatment within PRIMETIME. The decisional conflict subscale scores of uncertainty, informed, values clarity, support and effective decision were also assessed.

Statistical methods

The SWAT target sample size was 264 patients, based on three steps in the cluster stepped-wedge trial design (at 2, 4 and 6 months) of 33 site clusters (11 per step), with 2 patients per site cluster per 2-month period. Of note, the number of participants within each cluster and the number of clusters were unknown, as the SWAT was planned within a newly recruiting trial; equal cluster sizes were assumed. There is no published definition of a clinically significant reduction in decisional conflict; two studies conducted in populations similar to PRIMETIME found effect sizes of around 0.40, with standard deviations for the total decisional conflict scale score ranging from 11 to 25 [13,14]. There are no published data on the intraclass correlation (ICC) for the decisional conflict scale. Assuming α = 0.05, 264 patients from 33 site clusters would have ≥ 80% power across the full range of ICC values (0–1) to detect a 10-point difference in total score for the decisional conflict scale (effect size = 0.55, standard deviation = 18). Recruitment was extended beyond the original accrual target until all site clusters had switched, as per protocol for cluster randomised stepped-wedge trials. Analyses were conducted on an intention-to-treat basis, with questionnaires analysed according to the cross-over date regardless of whether patients reported having watched the video. Of note, each patient completed a single questionnaire.
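The allocation just described can be pictured as a cluster-by-period exposure matrix. The following minimal sketch (illustrative only; the cluster count and balanced allocation pattern are assumptions, not the trial's actual randomisation list) builds such a matrix in Python:

```python
# Illustrative stepped-wedge exposure matrix: 24 clusters allocated in equal
# numbers to cross over at 2, 4 or 6 months; 1 = enhanced information (video)
# is available to that cluster in that calendar period.
import numpy as np

cross_over = np.repeat([2, 4, 6], 8)   # assumed: 8 clusters per step
periods = np.array([0, 2, 4, 6])       # calendar time in months

exposure = (periods[None, :] >= cross_over[:, None]).astype(int)
print(exposure)  # rows = clusters, columns = periods; a wedge of 0s and 1s
```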
Multilevel mixed effects models used a random effect for cluster and a fixed effect for each step to adjust for calendar time and clustering. Robust standard errors were also adjusted for the clustering effect [15,16]. A linear regression model was used for the decisional conflict scale and subscales; an estimate of the difference in mean scores pre- and post-video implementation was obtained (with 95% confidence interval, CI), and the groups were compared using the z-test. Secondary endpoints (acceptance of entry into PRIMETIME and of the recommended treatment) used logistic regression and were reported as odds ratios (OR) with 95% CI. Additionally, total decisional conflict scores were dichotomised using a cut-off of ≥ 25 to define 'clinically significant' decisional conflict [17,18], and the groups were compared using logistic regression as for the secondary endpoints. Exploratory analyses including age and education in the models assessed associations with decisional conflict. The ICC value for the overall decisional conflict score was estimated from the primary endpoint model. There is no published guidance for dealing with missing data in the decisional conflict scale, so EORTC guidance for quality of life measures was used [19], whereby missing items are imputed from the mean of the completed items provided that ≥ 50% of the questions are completed.

Results

PDA video development

The focus groups determined that the video should build on the existing patient information sheets and diagrams, providing the same information but in a different format. Patient advocates felt that providing additional information would not only be overwhelming, but that it would be unethical to have differing content available to participants. Specific themes from the existing materials that the advocates advised highlighting in the video included the risks of recurrence, the benefits and side effects of radiotherapy, and the lack of a clear survival benefit from radiotherapy for low-risk breast cancer. The possibility of treating any subsequent local recurrence radically with surgery +/− radiotherapy was also highlighted, as was the fact that patients not receiving radiotherapy would undergo extra mammograms from years 6 to 10 and would therefore be monitored more intensively compared with standard of care. A script was developed using a question-based format, including an explanation of why the PRIMETIME study was being run, what was needed to calculate the patient's risk and how the risks and benefits of radiotherapy are weighed up (Appendix table 1). The video was developed in collaboration with Eyewitness productions, who also produced the interactive graphics explaining recurrence risk based on the diagrams (Appendix figure 3). The side effects of radiotherapy were explained similarly (Appendix figure 4). The video is available at https://www.icr.ac.uk/primetime.

SWAT execution

Five hundred and twenty-one evaluable questionnaires were returned from 809 eligible patients (64% return rate) [Fig. 2] in 24 clusters (Table 1) between April 2018 and October 2019. Median ages (interquartile range) of the patients who did and did not consent to the SWAT were 69 (65–72) and 68 (64–72), respectively. With regard to questionnaire return, 184 questionnaires were returned by the standard group and 337 by the enhanced group. Median age was similar between the standard and enhanced groups [70 versus 68 years, respectively], as was education level (Table 2). There were no differences in the distribution of age or education level over the time period of the study.
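As a concrete sketch of the primary analysis described under Statistical methods, the following Python snippet combines person-mean item imputation with a cluster random-effects model using statsmodels. All variable names, the data layout and the simulated values are assumptions, not trial data:

```python
# Runnable sketch of the primary analysis on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def scale_score(items: pd.DataFrame) -> pd.Series:
    """EORTC-style scoring: impute missing items from the person mean when
    >= 50% of items are completed; rescale the 0-4 item mean to 0-100."""
    enough = items.notna().mean(axis=1) >= 0.5
    filled = items.apply(lambda row: row.fillna(row.mean()), axis=1)
    return (filled.mean(axis=1) * 25).where(enough)

rng = np.random.default_rng(42)
n_clusters, n_per = 24, 20
cluster = np.repeat(np.arange(n_clusters), n_per)
step = rng.integers(0, 4, n_clusters * n_per)        # calendar period
group = (step > cluster % 3).astype(int)             # crude wedge: 0 = standard
items = pd.DataFrame(rng.integers(0, 5, (n_clusters * n_per, 16)).astype(float))
items = items.mask(rng.random(items.shape) < 0.05)   # sprinkle missing items

df = pd.DataFrame({"cluster": cluster, "step": step, "group": group,
                   "dcs": scale_score(items)}).dropna()

# Random intercept per cluster, fixed effects for group and calendar step.
fit = smf.mixedlm("dcs ~ group + C(step)", data=df, groups=df["cluster"]).fit()
print(fit.summary())
```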
All patients in both groups read the patient information sheet. However, compliance with the additional material varied; of those with available data, 135 (73%) and 290 (86%) reported using the diagrams in the standard and enhanced groups, respectively. In the enhanced group, 121 (36%) reported watching the video, 172 (51%) did not, and 44 (13%) had missing data. There were no differences in age or education level between those who did and did not watch the video. Mean decisional conflict scores were 10.88 (SD 11.82) in the standard group (N = 184) and 8.99 (SD 11.82) in the enhanced group (N = 337); there was no statistically significant difference in decisional conflict scores between the groups [mean difference −1.78, 95% CI −3.82 to 0.25, p = 0.09].

[Table 1 Summary of questionnaires returned from eligible patients per site cluster in the standard and enhanced groups in the PRIMETIME SWAT (eligible patients in brackets). White box: cluster receiving standard information. Pink box: cluster receiving enhanced information. Numbers represent the patients returning questionnaires in the standard and enhanced groups per site cluster, with the eligible patients at each timepoint in brackets. 0 months is baseline. *This questionnaire was returned later than originally planned.]

The majority of patients who returned questionnaires in the SWAT had consented to PRIMETIME [179/184 (97%) and 326/337 (97%) patients in the standard and enhanced groups, respectively] (Fig. 2), with no statistically significant difference between the groups [OR = 0.95 (0.17–5.22), p = 0.95]. For patients with available data, 163 (93%) patients in the standard group opted for their recommended treatment and 13 (7%) did not; in the enhanced group, 300 (96%) patients opted for their recommended treatment and 13 (4%) did not [Fig. 2]. There was no statistically significant difference in patients accepting the recommended treatment in PRIMETIME according to whether they were in the standard or enhanced group [OR = 1.17 (0.59–2.29), p = 0.66]. There was no significant association between either age or education level and decisional conflict scores when allowing for the effects of the standard and enhanced groups and time to cross-over (Appendix table 2). There were also no significant differences in subscale scores between the groups (Appendix table 3). The ICC for the total decisional conflict score was estimated to be 0.03.

Discussion

PDA were designed in close collaboration with patient advocates to help patients consider the risks and benefits of adjuvant breast radiotherapy. The PRIMETIME SWAT investigated whether the addition of a video to the patient information sheets and diagrams could reduce decisional conflict. We found that absolute levels of decisional conflict were on average low in both the standard and enhanced groups, that there was no significant reduction in decisional conflict following video implementation, and that fewer than half of the patients reported watching the video. There was no statistically significant difference in the acceptance of PRIMETIME entry or of the recommended treatment between the two groups. Development of the PDA was primarily patient-led, with advocates identifying important concepts in breast cancer radiotherapy which needed to be communicated to patients. This was facilitated by a series of focus groups in which patients and healthcare professionals established the most important concepts for patients to understand when considering a de-escalation trial such as PRIMETIME. These concepts fed into the comprehensive PDA development process.
Patient advocates also identified the clear explanation of concepts such as the risk of recurrence as a challenge. In general, patients may perceive risk more accurately when numerical values are used, and natural frequency formats expressing probabilities as an event rate out of 100 or 1000 patients can help improve understanding [21]. Natural frequency formats were therefore used throughout the PRIMETIME PDA to aid patient understanding. The PDA were designed relatively simply and cheaply to be easily usable for patients both within and independent of the clinic consultation. Although the SWAT preceded the COVID-19 pandemic, there has since been an acceleration towards fewer face-to-face consultations, more telephone/video consultations and remote consent. This makes the use of PDA and videos particularly timely to help patients and clinicians in the informed consent process. The PDA were tested using the SWAT concept embedded within the PRIMETIME study across UK cancer centres. This enabled questions regarding decisional conflict in this population to be answered in parallel with the primary question of the main trial, which was to identify a group of patients with low-risk breast cancer who can safely avoid radiotherapy. However, the response rate for this SWAT was only 64%. Of note, there were no significant differences in age between the patients who consented to and those who declined the SWAT (age was the only baseline characteristic available for comparison). An important consideration is that this SWAT encompassed a broader patient group than that entered into the main trial, including those who declined entry into the main trial. Although trial guidance was that all patients eligible for PRIMETIME were to be offered entry to the SWAT, sites may not always have been able to offer the SWAT, for example due to capacity issues. Patients declining the SWAT may have different characteristics or levels of decisional conflict. It is therefore important that sites are supported to approach these patients so that they are given the opportunity to participate in other studies, albeit with a separate consent process. Data were also missing within the SWAT, including whether patients had watched the video; missing data can be a challenge in trials, but possibly even more so in a SWAT. Regarding the numbers of questionnaires returned in each group, 184 questionnaires were returned from 268 eligible patients (69%) in the standard group and 337 from 541 eligible patients (62%) in the enhanced group. The greater number of questionnaires returned in the enhanced group may be explained by SWAT recruitment improving over time, so that recruitment coincidentally improved whilst sites were in the enhanced group. It should also be noted that a requirement of a stepped-wedge trial is that all sites must remain open until every site has crossed over; this may result in some sites spending extended periods of time in the enhanced group. The mean decisional conflict scores were not statistically significantly different between the groups, although the proportion of patients with 'clinically significant' decisional conflict appeared to be marginally higher in the standard than in the enhanced group.
The low average decisional conflict scores in the standard group may have made further substantial reductions unlikely [mean scores were 10.88 (SD 11.82) and 8.99 (SD 11.82) in the standard and enhanced groups respectively, with decisional conflict scored on a scale of 0–100]. Of note, the levels of decisional conflict in the PRIMETIME SWAT are similar to those in the IBIS II trial, which investigated the use of a PDA in a randomised controlled trial of an aromatase inhibitor in patients at high risk of breast cancer (prevention group) and patients with DCIS (treatment group) [13.2 (SD = 14.5)] [14]. Most cancer clinical trials provide written information and do not usually include pictorial diagrams; the incorporation of the diagrams was an intervention in itself and may have contributed to the low decisional conflict scores in the standard group. Furthermore, the SWAT may have been underpowered to detect a more modest effect size in terms of reduction in decisional conflict. However, with no guidelines available for defining a clinically significant reduction in decisional conflict, the choice of statistical assumptions for designing this type of study has to be consensus-led by the trialists.

Study limitations

With respect to study limitations, only 36% of patients in the enhanced group reported having watched the video (with data missing for 13%). The standard information may have been of sufficient quality to fulfil the information needs of these patients. Some patients may not have been made aware of the video, or may have preferred not to watch it at a potentially stressful time around their diagnosis. In addition, the SWAT was restricted to patients who were able to read and understand English independently in order to complete the questionnaire, although this is not an eligibility restriction for the main study. It is also possible that patients' decisional conflict reduced over time irrespective of the video intervention, as researchers at the centres became more experienced at discussing the trial with patients and at other trial procedures (a 'learning curve'). Furthermore, in a stepped-wedge trial where all centres begin in the control group, such a learning curve would disproportionately adversely affect the control group compared with the intervention group. A sensitivity analysis of the primary endpoint excluding patients who had returned questionnaires in the first two months of the sub-study being open in their centre was performed, but this did not affect the results. Research teams at sites may have adapted the way they described the study after having watched the video themselves, although this was not measured. Implementing a new intervention midway through a trial may have been a challenge for sites; an alternative would have been a parallel-group cluster randomised trial design, whereby clusters are allocated one type of information to use throughout the duration of the trial (albeit with the option of all clusters getting access to the intervention at the end of the study).

Clinical implications

PDA were designed in collaboration with patients to enhance information for those considering treatment de-escalation. The SWAT concept enabled these to be tested in an efficient and economic manner. Levels of decisional conflict were low on average in patients receiving the standard information incorporating diagrams. The diagrams alone may have resulted in the low decisional conflict scores in the standard group, such that further substantial reductions were unlikely.
De-escalation trials can be a challenge to conduct and recruit to, although acceptance of PRIMETIME is high. In general, patients may perceive that 'more is better' and clinicians may practise 'better safe than sorry' [22]. This emphasises the importance of patient-led information delivery to ensure that patients understand and feel comfortable with the trial, especially in the era of treatment de-escalation.

Conclusion

The low levels of decisional conflict in PRIMETIME are reassuring and may reflect the high-quality information provision, including diagrams designed by patients for patients in collaboration with researchers, such that not everyone required the video. This reinforces the importance of working with patients as partners in clinical trials, especially in the development of patient-centred information and decision aids. Furthermore, in an era of increasing use of virtual clinic appointments and remote consent, videos are an invaluable resource to help patients make informed decisions regarding breast radiotherapy.
Brain Morphometry Estimation: From Hours to Seconds Using Deep Learning

Motivation: Brain morphometry from magnetic resonance imaging (MRI) is a promising neuroimaging biomarker for the non-invasive diagnosis and monitoring of neurodegenerative and neurological disorders. Current tools for brain morphometry often come with a high computational burden, making them hard to use in clinical routine, where time is often an issue. We propose a deep learning-based approach to predict the volumes of anatomically delineated subcortical regions of interest (ROI), and the mean thicknesses and curvatures of cortical parcellations, directly from T1-weighted MRI. The advantages are the timely availability of results while maintaining a clinically relevant accuracy.

Materials and Methods: An anonymized dataset of 574 subjects (443 healthy controls and 131 patients with epilepsy) was used for the supervised training of a convolutional neural network (CNN). A silver-standard ground truth was generated with FreeSurfer 6.0.

Results: The CNN predicts a total of 165 morphometric measures directly from raw MR images. Analysis of the results using intraclass correlation coefficients showed, in general, good correlation with the FreeSurfer-generated ground truth data, with some regions nearly reaching human inter-rater performance (ICC > 0.75). Cortical thicknesses predicted by the CNN showed cross-sectional annual age-related gray matter atrophy rates both globally (thickness change of −0.004 mm/year) and regionally, in agreement with the literature. A statistical test to dichotomize patients with epilepsy from healthy controls revealed effect sizes similar to those reported for structures affected in all subtypes in a large-scale epilepsy study.

Conclusions: We demonstrate the general feasibility of using deep learning to estimate human brain morphometry directly from T1-weighted MRI within seconds. A comparison of the results with other publications shows accuracies of comparable magnitude for the subcortical volumes and cortical thicknesses.

INTRODUCTION

Magnetic resonance imaging (MRI) is the method of choice for the non-invasive assessment of brain structure. Clinicians use MRI for diagnosis, disease monitoring and therapy control in a wide range of neurological and neurodegenerative disorders, e.g., epilepsy, multiple sclerosis, Alzheimer's, Parkinson's or Huntington's disease, which are often associated with structural changes of the brain (1). Structural MRI, including high-resolution T1-weighted (T1w) imaging, is part of today's protocol recommendations for many of these disorders (2)(3)(4). Beyond visual assessment by trained experts, quantitative brain morphometry is gaining increasing attention for medical applications. Precise and automatic reconstruction of structures from MRI is still a topic of active research. Commonly used methods are voxel-based morphometry (VBM) (5) and surface-based analysis (SBA) (6). A variety of morphometric parameters have been proposed; three of the most frequently used are the volumes of anatomically delineated regions of interest (ROIs), and the thickness and curvature of the cortical band. Volumes are reported either in physical units such as mm³ or cm³, or as fractions of the intracranial volume. Total gray matter (GM) volume is known to decrease with aging (7), which can be regionally or globally accelerated by neurodegenerative diseases (8,9).
Atrophy of brain tissue is generally accompanied by enlarged ventricles and an increased volume of cortical (sulcal) cerebrospinal fluid (CSF), which sustains the brain within the skull (10). Cortical thickness is the distance in mm between the white matter (WM) surface (i.e., the interface between GM and WM) and the pial surface (i.e., the interface between GM and CSF). The overall mean thickness of the healthy human cerebral cortex is about 2.5 mm, with regional variations between 1 and 4.5 mm (11). A multitude of geometrical definitions for the curvature of a surface exist (12). The mean curvature, an extrinsic measure of the folding of the cortex (13), roughly corresponds to the inverse of the radius of a sphere fitted to the surface and is measured in mm⁻¹. Both thickness and curvature of the cortex can be reported per vertex on a reconstructed surface mesh or as ROI-wide averages (parcellations). In the interest of readability, we use the terms thickness and curvature here to refer to their parcellation-wise averages. Large-scale studies of brain morphometry are only possible if morphometric parameters are available for a large number of MR images, with high accuracy and in a reproducible manner. However, manual segmentation and measurements are extremely labor-intensive and prone to errors, and good intra- and inter-rater reproducibility depends on task-specific training (14). Software for the automatic or semi-automatic extraction of brain morphometry from MRI is available and includes tools such as FreeSurfer (15), FSL (16), ANTs (17), NeuroQuant (18), and IBASPM (19). Among these morphometry tools, FreeSurfer is the most comprehensive, as it provides many metrics, including direct measures of volumes and of cortical thickness and curvature. In a large-scale, multi-center study by the ENIGMA consortium (20), significant structural changes in the brains of epilepsy patients were identified recently (21). When compared to a cohort of healthy controls, altered subcortical volumes and reduced cortical thickness in distinct regions were observed. The feasibility of applying morphometry tools to individual patients to support clinical diagnostics has been shown (22) by comparing personalized morphometric analyses with a normative database adjusted for confounding factors such as age and sex. Brain morphometry is expected to become an essential quantitative neuroimaging biomarker (23). Although currently used mainly in the academic realm, it has great potential to complement today's predominantly qualitative visual assessment of MRI by neuroradiologists. If morphometry is to be used for the diagnostics of individual patients in daily clinical practice, timely availability becomes crucial. Today's state-of-the-art tools for the automatic determination of brain morphometry often come with a high computational burden (∼10 h with FreeSurfer), heavily hampering their use in clinical routine, where time is often an issue. The adoption of deep learning in medical image analysis has increased rapidly over the past years; in current research projects, it has even become the method of first choice for many tasks. In a review of recent studies using deep learning in medical image analysis (24), MRI was the most frequently used imaging modality, and the brain the most prominent organ.
While the vast majority of tasks concern image segmentation and classification, applications of deep learning for the regression (prediction) of morphometry in medical imaging are still rare, especially for brain MRI. Technically, convolutional neural networks (CNNs) (25) are the most prevalent architectures for image analysis. Despite the 3D nature of MRI, many methods still use 2D convolutions. Input is often fed patch- or slice-wise into the networks, partially motivated by limited computational resources and the lack of large-scale training data (26). The increase in power and memory of modern GPUs has the potential to change this, though. A regression problem leveraging the full 3D MRI volume with a CNN was proposed by Cole et al. (27), who successfully predicted brain age directly from raw MRI with a mean absolute error of < 5 years, i.e., much smaller than the age range of the available datasets. Deep learning has been used to directly estimate the wall thickness of the ventricular myocardium from a sequence of cardiac images (28); the authors made use of both spatial and temporal information by combining a CNN and a recurrent neural network (RNN). Directly classifying neurological diseases is another popular challenge being tackled with deep learning, mainly for Alzheimer's disease (29)(30)(31), where a large public dataset is available from the Alzheimer's Disease Neuroimaging Initiative (ADNI) (32). Regarding brain anatomy, promising results with deep learning-based models were observed for the segmentation of tissue classes and subcortical structures (33)(34)(35)(36)(37)(38). The challenge of having access to enough labeled data for training is addressed by semi-supervised (39) and unsupervised (40) approaches, or by data augmentation strategies simulating diverse pulse sequences (41). While these segmentation-based methods enable the calculation of volumes in a timely fashion, none of them provide thickness or curvature measures of the cortex. Graph convolutional networks (GCN) have been used (42,43) to parcellate the surface of the cerebral cortex. For calculating the cortical thickness, alternative methods such as Laplace equations (44) or registration-based solutions (45) have been proposed. Recently, FastSurfer was proposed as an optimized FreeSurfer pipeline, reducing the runtime to about 1.7 h, primarily achieved by a deep learning-based whole brain segmentation and a faster surface reconstruction and spherical mapping using marching cubes and Laplace eigenfunctions (46). A classical machine learning approach for brain morphometry estimation from MRI was proposed by Suter et al. (47), who used a Random Forest to directly estimate cortical thickness and curvature, both at the per-voxel and at the parcellation level. As a limitation, their approach still depended on the first part of the FreeSurfer pipeline to pre-process the data before feeding it into the model; including feature extraction, about 30 min were required to predict the morphometric parameters of a single subject. Recent advances in deep learning for image analysis motivated us to propose a deep learning-based approach for the direct estimation (regression) of brain morphometry from MRI. We hypothesized that a neural network can directly predict the volumes of anatomically delineated subcortical ROI, and the mean thicknesses and curvatures of cortical parcellations. The advantages would be the availability of results within seconds while maintaining a clinically relevant accuracy (see Figure 1).
While deep learning-based methods are increasingly used for fast brain anatomy segmentation, this is, to the best of our knowledge, the first application to directly regress morphometric measures of the cortex. This paper is structured as follows: after a description of the data, their pre-processing, the network architecture and the evaluation metrics in the methods section, we first analyze the predictions in terms of correlation coefficients against a silver-standard ground truth. The relevance of our predictions beyond correlation is assessed via a group comparison of epilepsy patients with healthy controls approximating the worldwide recognized ENIGMA study, and via an analysis of cross-sectional age-related cortical GM atrophy rates. Finally, we contrast the results with the literature and analyze the reliability by means of rescan tests.

Data

The data for this project were used in previous studies (22,48) by the Bern University Hospital (Inselspital). The dataset consists of anonymized, high-resolution, isotropic T1-weighted MR images acquired at the Inselspital on two 3T MR scanners (Magnetom Trio and Verio, Siemens, Erlangen, Germany). Images were acquired in the sagittal direction, and the MRI protocols were either MDEFT (49), standard 3D MP-RAGE (50), MP-RAGE according to the recommendations of the Alzheimer's Disease Neuroimaging Initiative (51) or MP-RAGE optimized for gray–white contrast (52). Detailed sequence parameters can be found in the Supplementary Material of Rummel et al. (48). Only age, sex, scanner and sequence are known from the anonymized data. Both healthy controls (n = 443) and patients with epilepsy (n = 131) are included in the dataset.

[FIGURE 1 | Deep learning-based estimation of brain morphometry directly from T1-weighted MRI, making results available within seconds.]

The age range across all subjects is from 6 to 84 years. The demographic distribution of the subsets is shown in Table 1. The dataset contains a certain number of re-scans, i.e., for some healthy controls more than one MRI is available (48), acquired within intervals of no longer than 2 years. All MR images of these subjects were intentionally assigned to the test set to enable robustness tests. Since all of these subjects are within the age range of 21–41 years, this results in a lower standard deviation of age in the test set. The remaining subjects were randomly distributed among the three sets.

FreeSurfer

Due to the lack of a gold-standard ground truth for brain morphometry, we used FreeSurfer to generate a silver-standard ground truth in this project. FreeSurfer (FS) (15) is a freely available software package for the analysis of neuroimaging data. To obtain the volumes of anatomical brain segmentations, FreeSurfer performs a whole brain segmentation of subcortical and ventricular structures, assigning a label to each voxel (53). The SBA is derived from a geometric model of the cortical surface (6). SBA measures are available per vertex or averaged over ROI, for which the cortex is parcellated and mapped to a brain atlas. The automatic reconstruction of a topologically correct surface for the highly folded brain cortex is an extraordinarily difficult task. A breakthrough in the development of FreeSurfer was to use a combination of both the pial and the gray/white matter boundaries, along with volume intensities, to achieve an anatomically accurate surface representation.
This iterative process of topological corrections is computationally expensive and the most time-consuming part of the whole FreeSurfer pipeline. It is owed to this high-resolution surface mesh that measurements of cortical thickness with submillimeter accuracy are possible, which is necessary to characterize subtle cortical atrophy in diseases (11). The accuracy and reliability of FreeSurfer have been investigated multiple times, e.g., by comparing the results with manual segmentation by experts (54)(55)(56), by performing scan-rescan studies (57,58), or through comparison with other tools (59). FreeSurfer's output may be influenced by the image acquisition setup, such as scanner manufacturer, field strength and protocols (60); the version of FreeSurfer, and even the underlying hardware and operating system, are also known to influence the results when applied to the same MR image (61).

Ground Truth Generation

A silver-standard ground truth for the cortical and subcortical morphometrics was generated with FreeSurfer 6.0 (recon-all) running on CentOS Linux, release 6.9. The average processing time was 11.3 ± 3.3 h per MR image. Subcortical volumes in mm³ for 29 ROI were extracted from the segmentation statistics (aseg.stats) (53). The volume of the corpus callosum was calculated by summing up its five sub-regions (anterior, mid-anterior, central, mid-posterior and posterior). Cortical thicknesses in mm and curvatures in mm⁻¹ were extracted from the surface statistics (lh.aparc.stats, rh.aparc.stats) as their parcellation-wise averages defined by the Desikan-Killiany (DK) atlas (62), resulting in 34 ROIs per hemisphere. The reliability of the FreeSurfer output depends on previous steps in the processing pipeline, mainly the tissue segmentation and surface reconstruction; errors therein may lead to significant deviations. As a simple automatic quality check to detect likely erroneous large outliers, the output from FreeSurfer was fed into an existing pipeline for automated morphometric analysis developed by Rummel et al. (48). The pipeline reported an unusually high number of significantly abnormal regions for 17 subjects, which were removed from the dataset. One additional subject was removed after visual inspection due to a severely distorted white matter mask from FreeSurfer.

Data Pre-processing

Pre-processing of the raw MR images for deep learning included the following steps: the brain mask from the FreeSurfer output was used for skull-stripping the original T1w image. This anonymized image was then re-sampled and cropped to 256 × 256 × 256 voxels with a size of 1 mm³ (mri_convert) in order to have a common input size across all subjects. The voxel intensities of each image were re-scaled into the range 0–4,095 to account for intensity variations between different images. Last, the center of mass of all foreground voxels was moved to the center of the image to facilitate the data augmentation described below.
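The pre-processing chain can be sketched in a few lines of Python. This is an illustrative re-implementation, not the authors' code: it assumes the T1w volume and its FreeSurfer brain mask have already been loaded as numpy arrays at 1 mm isotropic resolution, and it approximates the resampling step by simple cropping/zero-padding:

```python
# Illustrative re-implementation of the pre-processing steps described above.
import numpy as np
from scipy import ndimage

def preprocess(t1w: np.ndarray, brain_mask: np.ndarray) -> np.ndarray:
    vol = t1w * (brain_mask > 0)                       # skull-stripping
    out = np.zeros((256, 256, 256), dtype=np.float32)  # common input size
    src = tuple(slice(0, min(s, 256)) for s in vol.shape)
    out[src] = vol[src]                                # crop and/or zero-pad
    lo, hi = out.min(), out.max()                      # rescale to 0-4095
    out = (out - lo) / max(hi - lo, 1e-6) * 4095.0
    com = ndimage.center_of_mass(out > 0)              # foreground centre of mass
    shift = [int(round(target - c)) for target, c in zip((128, 128, 128), com)]
    return np.roll(out, shift, axis=(0, 1, 2))         # move it to the centre
```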
Convolutional Neural Network Architecture

The scaffold for the development of the custom network architecture for brain morphometry was to some extent inspired by AlexNet (63), the winner of the 2012 ImageNet Large Scale Visual Recognition Challenge (ILSVRC) (64). Motivated by the volumetric nature of MR images, we use 3D convolutions on the full input volume instead of the 2D convolutions with three input channels in AlexNet. Further modifications include a reduction by two convolution layers, adjustments in the fully connected layers to account for the different sizes, and a regression output. This results in a network architecture with a total of six layers, as depicted in Table 2.

[TABLE 2 | Architecture of the CNN for brain morphometry. The columns list, per layer, the kernel, stride, number of filters, output size and activation function. Dropout (0.4) is applied after the last MaxPool layer and after the first FC layer; a bias is added to the first convolutional and all fully connected layers. Conv3D, 3D convolution; FC, fully connected layer; ReLU, rectified linear unit.]

Accordingly, the receptive field after the last pooling layer is 209 in all three dimensions. The total number of trainable parameters in the network is 9,467,877, about half of them in the convolutional layers. The weights of the convolutional kernels are initialized randomly according to the Xavier uniform initializer (65). All variables of the fully connected layers and the biases are zero-initialized. The mean squared error (MSE) objective function is minimized using Adam (66) as the gradient-based optimizer, with an empirically determined initial learning rate of 10⁻⁵. With a batch size of 6, the training of one epoch consists of 73 steps and requires about 3 min to complete. The model was implemented in Python using Tensorflow 1.8 (67). Training was performed on an NVIDIA Titan Xp GPU with 12 GB memory. During training, the accuracy was periodically evaluated on the validation set. The model of the best epoch, measured in terms of the mean R² across all regressed morphometrics, was kept for early stopping. We found that the following data augmentation strategy allows the model to be trained for more epochs before the onset of overfitting: the skull-stripped input image was randomly translated by up to ±15 voxels in a randomly selected dimension, followed by three consecutive 90° rotations around a random principal axis. Besides artificially increasing the amount of training data, this has the positive side effect of enabling the model to process images in an arbitrary orientation. These transformations are computationally inexpensive and can be performed for the (pre-fetched) next batch on the CPU while the calculations for the current batch are running on the GPU.

Evaluation

Several metrics exist to evaluate the correlation and reliability of a regression model. For direct comparison with others, we report the results for all three metrics mentioned below in the Supplementary Material. The coefficient of determination, denoted R², is an indicator for the goodness of fit of a linear regression model:

$$R^2 = 1 - \frac{\sum_{i=1}^{N} (y_i - g_i)^2}{\sum_{i=1}^{N} (g_i - \bar{g})^2},$$

where y_i is the prediction for the ith sample, g_i the silver-standard ground truth and ḡ the sample mean over the N samples. The Pearson correlation coefficient, denoted r when applied to a sample, measures the linear correlation of two variables:

$$r = \frac{\mathrm{cov}(y, g)}{\sigma_y \, \sigma_g},$$

where σ_y and σ_g are the standard deviations of the prediction and the silver-standard ground truth, respectively. Pearson's r is less susceptible to large outliers than R². A fixed bias, however, remains unrecognized by Pearson's r (e.g., it reports a perfect correlation of 1 for y = 2g or y = g + 1).
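For illustration, these two metrics translate directly into a few lines of numpy (a small self-contained sketch; the example values are made up):

```python
# Numpy translation of the two correlation metrics defined above.
import numpy as np

def r_squared(y: np.ndarray, g: np.ndarray) -> float:
    """Coefficient of determination of predictions y against ground truth g."""
    return 1.0 - np.sum((y - g) ** 2) / np.sum((g - g.mean()) ** 2)

def pearson_r(y: np.ndarray, g: np.ndarray) -> float:
    """Pearson correlation: covariance normalized by both standard deviations."""
    return ((y - y.mean()) * (g - g.mean())).mean() / (y.std() * g.std())

g = np.linspace(6000.0, 8000.0, 50)                      # made-up ground truth
y = g + np.random.default_rng(0).normal(0.0, 100.0, 50)  # noisy predictions
print(f"R^2 = {r_squared(y, g):.3f}, r = {pearson_r(y, g):.3f}")
```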
Therefore, we employed the intraclass correlation coefficient (ICC), along with a 95% confidence interval, as the primary quantitative metric to assess the reliability of the predictions (68). Reflecting both the degree of correlation and the agreement between measurements, ICC is widely used in medicine to measure intra- and inter-rater performance as well as for the evaluation of test–retest experiments. In its original form, ICC is defined as the ratio of the true variance (σ²_g) to the true variance plus the error variance (σ²_ε):

$$\mathrm{ICC} = \frac{\sigma_g^2}{\sigma_g^2 + \sigma_\varepsilon^2}.$$

Modern definitions use sample mean squares from an analysis of variance (ANOVA). Various assumptions lead to slightly different forms of ICC (69). Following the guideline of Koo and Li (70), the appropriate form for our task is the two-way mixed effects, absolute agreement, single rater/measurement form, which can be written as

$$\mathrm{ICC} = \frac{MS_R - MS_E}{MS_R + (k-1)\,MS_E + \frac{k}{n}\,(MS_C - MS_E)},$$

where MS_R is the mean square for rows, MS_E the mean square for error and MS_C the mean square for columns from the ANOVA, with n subjects rated by k raters/measurements. However, some papers lack a clear definition of which form of ICC was used exactly, making one-to-one comparisons more difficult. All three evaluation metrics yield values below 0 for negative correlation or poor agreement, 0 for no correlation (e.g., for a model just predicting the average expected outcome), and gradually approach 1 for perfect correlation. The metrics were calculated in R (72), with the additional package irr (73) for ICC. Besides simple correlation plots and the quantitative metrics described above, we further analyzed the predictions qualitatively using Bland-Altman plots (74), plotting the differences against the means of the two methods (75). Studying the difference rather than the agreement is a recommended (76) analysis technique when a new method is to be compared with an existing, well-established method and the underlying true values are actually unknown (as in our case, with brain morphometry and FreeSurfer as the established method).
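For reference, this ICC form translates into a short numpy function computed from the ANOVA mean squares (an illustrative implementation of the formula above, not the irr routine used for the reported results; the example ratings are invented):

```python
# ICC for two-way, absolute-agreement, single-measurement from ANOVA mean squares.
import numpy as np

def icc_a1(ratings: np.ndarray) -> float:
    """ratings: n subjects x k raters/measurements."""
    n, k = ratings.shape
    grand = ratings.mean()
    ms_r = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # rows
    ms_c = n * ((ratings.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # columns
    sse = ((ratings - ratings.mean(axis=1, keepdims=True)
            - ratings.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_e = sse / ((n - 1) * (k - 1))                                  # error
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k / n * (ms_c - ms_e))

# Two "raters" (e.g., CNN and FreeSurfer) measuring five subjects:
ratings = np.array([[7079, 7011], [6500, 6450], [8000, 8100],
                    [7200, 7300], [6900, 6850]], dtype=float)
print(icc_a1(ratings))
```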
Clinical Significance - Patients With Epilepsy

A widely used application of brain morphometry in clinical research is the statistical comparison of two different groups in a population. To explore the efficacy of our deep learning-based approach beyond purely technical metrics, we assessed to which degree we could replicate the findings of such a research study with the morphometrics estimated by the CNN. In a large-scale study (21) including more than 2,000 patient cases, the ENIGMA consortium assessed structural brain abnormalities in patients with epilepsy. Among the findings were increased volumes of the lateral ventricles bilaterally, decreased volumes of the thalamus and globus pallidus in the right hemisphere, and a reduced mean thickness of the precentral gyrus and paracentral lobule bilaterally in patients with epilepsy when compared to a group of healthy controls. Only these eight metrics showed statistically significant deviations in all four epilepsy subgroups examined by the study. Our dataset contains patients with epilepsy from all four subgroups, but the sample size does not allow for stratification into small subgroups; the baseline from ENIGMA is therefore the "All epilepsies" phenotype. Effect sizes adjusted for age and sex to compare healthy controls vs. patients with epilepsy were calculated using Cohen's d, implemented in the R package effsize (77). Statistical significance was determined with a one-sided t-test (p < 0.05). To increase the sample size for the test, we created three additional train/validate/test splits of the dataset, each with a unique set of subjects in the validation and test sets (non-exhaustive cross-validation). Models were trained (as described in section 2.2) independently of each other using these sets. The combined predictions from the four resulting test sets yield a sample of 274 healthy controls and 86 patients with epilepsy. Although our population is much smaller than that of ENIGMA (1,727 healthy controls and 2,149 patients with epilepsy), a comparison using the effect size is valid, as this statistic is not confounded by the sample size.

Age-Related Cortical Gray Matter Atrophy

The overall cortical thickness is known to decrease with normal aging (7), and this age-related atrophy varies regionally (78). We assessed whether this trend is recognizable in the predictions of the CNN on the whole cohort of controls and patients. The age effect on the predicted thicknesses was analyzed in R by fitting a general linear model, both globally for the whole brain (all parcellations averaged) and regionally for each parcellation. In order to account for multiple tests, the significance level was Bonferroni-corrected by a factor of 68 (the number of parcellations in both hemispheres). The results were compared to the study of Lemaitre et al. (78), in which a similar cohort (216 participants with a mean age of 39.8 ± 16.5 years) was analyzed for age-related regional morphometric changes.

Reliability by Rescan Tests

Due to the lack of a gold-standard ground truth, we should not rely solely on the accuracy to judge the performance of a method; reliability is another important quality feature. Repeated measurements of the same subject should ideally yield similar values, or in our case, different MRI scans of the same subject should report similar results. For nine subjects, between three and six scans are available in the dataset. Since these rescans were acquired within a time frame of at most 2 years, we assume that only minor structural changes of the brain occurred during this time. Hence, we assume an unchanged ground truth and assessed the reliability by evaluating the standard deviation of the morphometrics predicted by the CNN.

RESULTS

The final model was trained for 7 days over 4,500 epochs, with the best mean R² score on the validation set reached at epoch 3,920 (early stopping). As depicted in Figure 2, the final model using dropout and data augmentation required more training steps to converge. Both translations and rotations contributed to reducing overfitting and achieving a higher R². Dropout roughly tripled the number of epochs required to converge. About 15% of the performance gain, in terms of mean R², was attributed to data augmentation. The corresponding metrics on the training data can be found in Figure S1 (Supplementary Material), showing earlier convergence without data augmentation. All results below are from the evaluation on the test set consisting of 90 subjects, as described in section 2.1. The total runtime required for predicting all 165 morphometrics for these 90 subjects was 698 s, i.e., less than 8 s per MR image. This included all necessary pre-processing steps, of which the re-sampling to unit volume and isovoxels took most of the time, whereas passing the data through the CNN on the GPU took below 1 s. Figure 3 shows a box-and-whisker plot of the averaged relative error for each category. The mean relative deviations from the silver-standard ground truth were below 5% for all three categories (volume = 3.43 ± 5.41%, thickness = 0.63 ± 2.44%, curvature = 0.02 ± 2.58%). The subsequent sections report and analyze the accuracy of the individual predictions for each of the three categories.
Subcortical Volume

An overview of all intraclass correlation coefficients along with 95% confidence intervals is shown in Figure 4, and detailed numbers are reported in Table S1. When analyzing individual estimations using Bland-Altman plots, we observe a tendency of the CNN to have overestimated smaller volumes and underestimated larger ones (see Figure 5 for an example of the left thalamus). The red horizontal line representing the mean difference between prediction and silver-standard ground truth was close to zero (the relative mean difference was below 3.2% for all structures except the white matter hypointensities and the inferior horns of the lateral ventricles), suggesting that only a small bias is present. The regression lines in the correlation plots were not as steep as 45° (perfect correlation) for most of the volumes, an indication that the CNN was not able to fully capture the variance of the silver-standard ground truth. Correlation and Bland-Altman plots for all subcortical volumes are listed in the Supplementary Material. When looking at the anatomical location, we observed the best results in the parietal and frontal lobes, both for thickness and curvature (see Figure 7).

Patients With Epilepsy

The predictions from the CNN were used to perform a population study equivalent to ENIGMA (21), dichotomizing epilepsy from healthy controls. Effect size differences between the epilepsy and healthy control groups are shown in Table 3. The first column replicates the numbers from the ENIGMA epilepsy study; Cohen's d for the CNN and for FreeSurfer were calculated on the combined test dataset of 274 subjects. In agreement with the findings from ENIGMA, the predictions from the CNN showed statistically significant (p < 0.05) positive effect sizes for the volumes of the lateral ventricles and negative effect sizes for the mean thickness of the paracentral lobules and precentral gyri bilaterally. Contrary to ENIGMA, the result showed an increased volume of the right globus pallidus for patients with epilepsy, and no statistically significant effect size was found for the volume of the right thalamus. For these two deviating structures, both the predictions of the CNN and the values derived from FreeSurfer are reported in Table 3.

Age-Related Cortical Gray Matter Atrophy

Linear regression revealed a statistically significant cross-sectional age-related reduction in global mean cortical thickness (r = −0.65, p = 4.6 × 10⁻¹²), with an overall effect of 0.004 ± 0.002 mm per year (average ± SD), see Figure 8A. The regional distribution of the age effects can be seen in Figure 8B. Predominant reductions were observed in the frontal (average −0.0049 ± 0.0020 mm/year) and parietal (−0.0047 ± 0.0008 mm/year) lobes, and less so in the temporal (−0.0037 ± 0.0029 mm/year) lobe. In the occipital lobe, the age-dependent thickness change was considerably smaller (−0.0009 ± 0.0012 mm/year). Statistically significant (p < 0.0007, Bonferroni-corrected) age-related reductions were seen not only globally, but also in most (55/68) of the individual parcellations.
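The per-parcellation age-effect test just reported can be sketched as follows (simulated data with an assumed atrophy rate; illustrative only, as the study itself used a general linear model in R):

```python
# Sketch of the per-parcellation age-effect test with Bonferroni correction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_subjects, n_parcellations = 600, 68
age = rng.uniform(6, 84, n_subjects)

alpha = 0.05 / n_parcellations        # Bonferroni: 0.05 / 68 ~ 0.0007
significant = 0
for _ in range(n_parcellations):
    thickness = 2.5 - 0.004 * age + rng.normal(0, 0.15, n_subjects)
    fit = stats.linregress(age, thickness)   # slope in mm/year
    if fit.pvalue < alpha and fit.slope < 0:
        significant += 1
print(f"{significant}/{n_parcellations} parcellations with significant thinning")
```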
This rise-then-fall observation is consistent with the finding of Hasan et al. (79), who identified the same pattern for the entorhinal cortex, with a peak thickness at about 44 years, in a large cohort of 1,660 participants.

Comparison With Others
The accuracy and reliability of morphometric measures from MRI have been the subject of various studies, both for automatic methods and manual segmentation. A comparison of our results to metrics reported by others is shown in Table 4. The FDA-approved software NeuroQuant was compared to FreeSurfer by Ochs et al. (59). Initially developed as a commercial version of FreeSurfer, NeuroQuant meanwhile uses an independent code base and relies on a different probabilistic atlas. A total of 60 MRI scans (20 healthy, 20 Alzheimer's disease patients, and 20 mildly traumatically brain-injured patients) were processed by both tools. The authors reported higher correlations for the volumes of the amygdalae and hippocampi, but lower correlations for the globus pallidi and thalami. Using MR images from former professional football players, Guenette et al. (54) evaluated volumes from FreeSurfer against manually corrected labels. Two trained raters manually corrected the labels from FreeSurfer in 108 subjects, followed by a review by a neuroanatomist. To assess inter-observer performance, 10 randomly chosen subjects were independently corrected by a third trained rater. Intraclass correlation coefficients for the inter-observer performance were generally higher compared to our CNN, except for the left amygdala (CNN = 0.79, inter-observer = 0.72). However, ICCs for the fully automated vs. manually corrected volumes were slightly lower for the hippocampus and significantly lower for the amygdala, where the authors even reported negative values. Since correlation coefficients for the combined amygdala-hippocampal complex were good, the authors suspect a deviating definition of the border between the amygdala and hippocampus in FreeSurfer's atlas. The test-retest reliability of FreeSurfer was assessed by Madan and Kensinger (57). Thirty young volunteers (20-30 years old) were scanned ten times within a 1-month period. The MR images were processed with FreeSurfer 5.3.0, and the reliability was measured using ICC (both hemispheres combined for subcortical volumes). In agreement with our findings, they generally observed less reliable measures of the cortical thickness in the temporal lobe. Compared to the results of our CNN, ICCs for subcortical volumes were of similar magnitude.

Reliability
To assess the reliability of the method, we analyzed the predictions where several rescans of the same subject are available. Figure 10 shows the standard deviations (SD) across all 90 scans (leftmost bars), followed by the SD across rescans within each of the nine subjects separately. For the cortical thickness and curvature, the SD are reported as an average over all 68 parcellations. A general observation is that the SD across all 90 scans were lower for the CNN (±0.116 mm and ±0.005 mm⁻¹ for thickness and curvature, respectively) than for FreeSurfer (±0.193 mm, ±0.010 mm⁻¹). This suggests the CNN is unable to fully capture the inter-subject variance. Partially, this is probably due to some of the less accurate parcellations (they show less variance, with a bias toward the mean), lowering the averaged SD. When looking at selected morphometrics individually (second row in Figure 10, selected structures of interest for epilepsy), the SD of the CNN was closer to that of FreeSurfer. For the rescans, SD from the CNN were lower than those from FreeSurfer for all nine subjects, some significantly.
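The within-subject spread behind these rescan comparisons is a plain sample standard deviation; a minimal sketch follows, with an assumed dict layout for the per-subject values. Applied to the right-thalamus volumes of subject S2 quoted below, it reproduces the reported value of approximately 31 mm³.

```python
import numpy as np

def rescan_sd(values_by_subject):
    """Within-subject sample standard deviation across repeated scans.

    `values_by_subject` maps a subject ID to the values predicted from
    that subject's rescans, e.g. volumes of one structure in mm^3.
    """
    return {s: float(np.std(v, ddof=1)) for s, v in values_by_subject.items()}

# Example with the six right-thalamus volumes of subject S2 quoted below:
print(rescan_sd({"S2": [7079, 7066, 7028, 7010, 7021, 7003]}))  # ~31 mm^3
```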
A good to excellent accuracy for the volume of the right thalamus (ICC = 0.79, 95% CI 0.70-0.86) comes along with a good reliability for the rescans (SD below 4.1% for all subjects). As an example, the CNN predicted the following volumes for the right thalamus from the six scans of subject S2: 7,079, 7,066, 7,028, 7,010, 7,021, 7,003 mm³. This corresponds to an average of 7,035 mm³ and a standard deviation of 31 mm³, whereas FreeSurfer reported an average volume of 7,011 mm³ with a standard deviation of 230 mm³ for the scans of the same subject. Corresponding reliability plots for the remaining structures can be found in the Supplementary Material.

DISCUSSION
We have used data from 574 subjects, processed with FreeSurfer, for the supervised training of a CNN to predict brain morphometry from MRI. The customized CNN predicts a total of 165 morphometric measures (subcortical volumes, and cortical thicknesses and curvatures) directly from minimally pre-processed (skull-stripped) T1w MR images, without the need of prior image registration or segmentation, enabling results to be available within seconds. With 438 samples in the training set, which is considered to be on the lower end for successfully training a deep learning model (80,81), a simple data augmentation strategy of translations and rotations further improved the accuracy. Besides quantitative evaluations of the results, we have shown methods to assess the clinical relevance of the achieved accuracy (sections 3.3, 3.4 and 3.6) beyond correlation coefficients.

Convolutional Neural Network Architecture
Our aim of directly regressing all morphometric measures requires passing the entire 3D volume as input into the network, ruling out slice- or patch-based strategies. The large input size consequently constrains the network to simpler architectures, or would otherwise require special infrastructure to train large networks with high-resolution input (82). We have not performed an extensive architecture search, but explored different directions within the given constraints and found the proposed architecture suitable for demonstrating the feasibility of the task. Besides optimizing the network architecture, further improvements could be achieved by leveraging recent developments in how to deal with sparse or noisy labels in medical image analysis (83), of which semi- or self-supervised learning might be promising strategies (84). The chosen data augmentation is effective, while still computationally efficient. Arbitrary rotations would require resampling, which is computationally expensive and might cause unwanted artifacts. Future work should also investigate contrast-related data augmentation techniques (random scale and shift of intensity distributions) to make the network more robust to scanner and sequence variations (85).

Evaluation
We consider intraclass correlation coefficients (ICC) to be the best-suited quantitative evaluation metric for the given task, as they measure both the degree of correlation and agreement. Nevertheless, their interpretation is non-trivial. As we can infer from the general definition of ICC (the ratio of true variance to true variance plus error variance), a low ICC could also relate to a lack of variability among subjects (70). Consequently, absolute values of ICC between categories should be compared with care, e.g., between subcortical volumes (naturally higher inter-subject variance) and cortical curvatures (lower inter-subject variance).
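To make the definition just given concrete, here is a minimal sketch of one common ICC estimator, the one-way random-effects ICC(1,1) computed from ANOVA mean squares. The exact ICC variant used in the study may differ; this block is purely illustrative.

```python
import numpy as np

def icc_one_way(X):
    """One-way random-effects ICC(1,1) for an (n_subjects, k_measurements)
    array -- an estimator of the ratio of true (between-subject) variance
    to true-plus-error variance.
    """
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    subj_means = X.mean(axis=1)
    # Between-subjects and within-subjects mean squares from one-way ANOVA:
    msb = k * ((subj_means - X.mean()) ** 2).sum() / (n - 1)
    msw = ((X - subj_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

A low between-subject variance (small MSB relative to MSW) drives this ratio down even when the per-measurement error is unchanged, which is exactly why low-variance categories such as cortical curvature tend to show lower ICCs.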
Instead, the results should be contrasted with other established methods. A fair, good, or excellent ICC [according to (71)] was reached for all 29 subcortical volumes and the vast majority (54 out of 68) of the cortical thicknesses. The reliability of the predictions for the cortical curvatures is questionable, with only about half of them (35/68) being in the range of fair and above. For the cortical structures, the lowest ICC were found in the temporal lobe, an observation that is also reported by Madan et al. in a reliability evaluation of FreeSurfer (57). As we can see from the correlation plots, the CNN model was unable to capture the full variance of the silver-standard ground truth (trend toward the mean expected outcome). This observation is a known challenge in regression tasks (86), which are inevitably prone to the "regression toward the mean" effect (87) when optimizing a model by minimizing its prediction errors. The Bland-Altman plots revealed only a small bias from zero, but a tendency of the model to overestimate smaller values and underestimate the larger ones.

Patients With Epilepsy
Using morphometry predicted by the CNN, structural changes between healthy controls and patients with epilepsy were observed in our dataset, similar to the findings from the ENIGMA epilepsy study (21). Effect size differences were consistent for six out of eight regions. In the case of the two deviating results for the right thalamus and globus pallidus, FreeSurfer is not in agreement with the findings from ENIGMA either. The cause is unknown, but might be related to the types of epilepsy in our dataset.

Age-Related Cortical Gray Matter Atrophy
Age-related gray matter atrophy is an extensively studied aspect of brain morphometry. Based on the predicted cortical thicknesses, a linear regression model revealed a statistically significant change of −0.004 mm/year in global average thickness for the population in our test set. Exactly the same value has been reported by Lemaitre et al. (78). Regionally, we found age-related atrophy to be less pronounced in the parcellations of the temporal lobe, which is in agreement with the literature (7,78,88). The cortical thickness of the entorhinal cortex was classified as less reliable from an ICC point of view, yet its age trend suggests a better correlation. A linear model suggested a slightly increasing thickness over the lifespan. A closer examination with a quadratic model revealed a pattern remarkably similar to what has been reported by Hasan et al. (79), namely an increasing thickness until around 45 years followed by a decrease. It is worth highlighting again that the age of the subjects is not part of the input data for the CNN.

Comparison With Others
No method can reasonably achieve 100% accuracy for the given problem (MRI being a surrogate for the underlying anatomy, with limited resolution and partial volume effects). Therefore, comparing a new method to well-established methods is common practice. We have contrasted the results to publications covering a variety of evaluation methods, such as manual tracing by experts, scan-rescan studies, and comparisons among different tools. The selected subcortical volumes and cortical thicknesses of the parietal lobe showed quite comparable magnitudes of intraclass correlation coefficients.
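The effect sizes used for the epilepsy comparison above are Cohen's d values; a minimal sketch of the plain pooled-SD form follows. Note that ENIGMA-style analyses typically adjust for covariates such as age and sex before computing effect sizes, which this sketch deliberately omits.

```python
import numpy as np

def cohens_d(patients, controls):
    """Cohen's d with pooled standard deviation (positive = larger in patients)."""
    x, y = np.asarray(patients, float), np.asarray(controls, float)
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                        / (nx + ny - 2))
    return (x.mean() - y.mean()) / pooled_sd
```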
Human inter-rater reliability for the segmentation of hippocampi was reported (89) to be in the range of ICC = 0.73-0.85, which is considered a reasonable upper bound on the accuracy of automated segmentation by Stein et al. (90). A comparison to other recently proposed fast methods (section 1) is not directly possible, as these are either segmentation methods reporting spatial overlap with Dice coefficients, or evaluation metrics for parcellation-wise averages are not available.

Limitations and Outlook
The lack of a gold-standard ground truth is one of the major challenges. Supervised training of a model with ground truth data generated by another method (in this case FreeSurfer) always leads to a bias toward the results of that tool, rather than the (unknown) true underlying values. The evaluation is limited to a comparison with the other method, in which the new model cannot, by definition, be superior to the baseline. Furthermore, although FreeSurfer is a well-established and thoroughly validated tool, it is not immune to errors (in rare cases producing exceptionally large outliers). We have not performed any systematic quality control of the FreeSurfer output, such as visual inspection of the pial and white matter boundaries, on either the training or the test set. Although we used data acquired on two different scanners, with four different MRI protocols, they are all from the same center (Inselspital). We have no indication of how well the trained model would generalize to data from other centers. On the one hand, morphometric measures derived from traditional voxel-based morphometry (VBM) are also known to be biased by site-specific variations (91). On the other hand, deep learning has shown its ability to generalize across a range of acquisition settings in MRI (92). To what extent this applies to brain morphometry remains to be investigated. Although the data comprised both healthy controls and patients with epilepsy, the behavior of the model on pathologies not present in the training data is unknown. Despite progress in improving the interpretability of deep learning (93), deep neural networks are still considered, to a large extent, black boxes (94). The difficulty of understanding their decision-making process poses a challenge to their adoption for medical applications (95), especially for direct classification and regression tasks. Future work should address the lack of visual inspection options for quality control, particularly for cortical thickness and curvature measures. For volumetric information on tissue classes and subcortical structures, a segmentation algorithm is probably still the preferred approach, as it facilitates visual verification of the results. The efficacy of a deep learning-based approach to brain morphometry for clinical applications has yet to be shown, ideally on an individual patient level. We plan to further evaluate this novel approach, along with other established and emerging morphometry methods, on a larger scale, with a broader dataset from several centers including different neurodegenerative diseases.

CONCLUSIONS
We have shown the general feasibility of using deep learning to estimate human brain morphometry directly from MRI within seconds. To the best of our knowledge, this is currently the fastest reported solution for obtaining subcortical and cortical morphometric measures from MRI. A trained CNN predicts a total of 165 morphometric measures within seconds, compared to several hours for traditional methods.
Analysis of the results using intraclass correlation coefficients and Bland-Altman plots showed, in general, good correlation with the FreeSurfer-generated silver-standard ground truth data. Some of the regions (namely subcortical volumes and cortical thicknesses in the parietal lobe) nearly reached human inter-observer performance. Besides a good rescan reliability, further indications support the hypothesis that the accuracy is clinically relevant, namely: (1) replication of the findings of the large-scale ENIGMA study on structural morphometric changes in patients with epilepsy; (2) cross-sectional annual age-related gray matter atrophy rates, both globally and regionally, in agreement with the literature; and (3) accuracies of comparable magnitude to those reported in other publications.

DATA AVAILABILITY STATEMENT
The datasets used for this study cannot be made publicly available. The experiments were performed with data from patients and healthy controls of the Bern University Hospital. All study participants signed informed consent for the use of their data for research. However, this does not include permission to make the raw data publicly available. Code may be shared upon direct request.

ETHICS STATEMENT
This study was carried out in accordance with the recommendations of the Kantonale Ethikkommission Bern, with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the Kantonale Ethikkommission Bern (protocol 2017-00697). Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
Application of Wnt Pathway Inhibitor Delivering Scaffold for Inhibiting Fibrosis in Urethra Strictures: In Vitro and in Vivo Study

Objective: To evaluate the mechanical properties and biocompatibility of a Wnt pathway inhibitor (ICG-001) delivering collagen/poly(l-lactide-co-caprolactone) (P(LLA-CL)) scaffold for urethroplasty, and the feasibility of inhibiting extracellular matrix (ECM) expression in vitro and in vivo. Methods: ICG-001 (1 mg (2 mM)) was loaded into a P(LLA-CL) scaffold with the co-axial electrospinning technique. The mechanical characteristics and drug release behavior of the scaffolds were tested with a mechanical testing machine (Instron) and high-performance liquid chromatography (HPLC). Rabbit bladder epithelial cells and dermal fibroblasts were isolated by an enzymatic digestion method. The 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay and scanning electron microscopy (SEM) were used to evaluate the viability and proliferation of the cells on the scaffolds. Fibroblasts treated with TGF-β1 and with medium released from the ICG-001 scaffolds were used to evaluate the anti-fibrosis effect through immunofluorescence, real-time PCR and western blot. Urethrography and histology were used to evaluate the efficacy of the urethral implantation. Results: The scaffold delivering ICG-001 was fabricated; the fiber diameter and mechanical strength of the scaffolds with the inhibitor were comparable with those of the non-drug scaffold. The SEM and MTT assays showed no toxic effect of ICG-001 on the proliferation of epithelial cells on the collagen/P(LLA-CL) scaffold with ICG-001. After treatment with culture medium released from the drug delivering scaffold, the expression of collagen types 1 and 3 and fibronectin in fibroblasts was inhibited significantly at the mRNA and protein levels. On urethrography, urethral strictures and fistulas were found in the rabbits treated with non-ICG-001 delivering scaffolds, but all the rabbits treated with ICG-001 delivering scaffolds showed wide urethral calibers. Histology showed less collagen but more smooth muscle and a thicker epithelium in urethras repaired with ICG-001 delivering scaffolds. Conclusion: After loading with the Wnt signal pathway inhibitor ICG-001, the collagen/P(LLA-CL) scaffold could facilitate a decrease in the ECM deposition of fibroblasts. The ICG-001 delivering collagen/P(LLA-CL) nanofibrous scaffold seeded with epithelial cells has the potential to be a promising substitute material for urethroplasty. A longer follow-up study in larger animals is needed in the future.

Introduction
Urethral strictures are common after urethral injury and seriously affect patients' quality of life [1]. They have an incidence of 0.6% in male patients and result from a variety of etiological factors, such as mechanical injury, heat, and radiation [2]. The urethral defect often heals with extracellular matrix (ECM) overexpression, which inevitably causes a reduction in the urethral caliber and impairment of the flow of urine [3]. Traditionally, these defects require urethral reconstruction surgery with a substitute material such as an autologous penile flap or oral mucosa graft. However, these autologous materials inevitably lead to serious morbidities at the donor site, such as infection, nerve injury and difficulties in opening the mouth [4].
In spite of the rapid development of surgical procedures for urethroplasty, the recurrence rate is still high (approximately 20%) because of submucosal fibrosis and scar formation in the urethra after substitution surgery [1,5]. In recent years, the rapid development of regenerative medicine and engineered urethras has provided a new approach to the reconstruction of urethras [6-8]. In our previous research, scaffolds alone or scaffolds seeded with epithelial cells were applied for urethral reconstruction in animal models [9,10]. Satisfactory short-term results were acquired in these studies; however, the submucosal tissue of the newly formed urethra contained more collagen fibers, aligned in a disorderly fashion, compared with the native urethral submucosa. This phenomenon resembles fibrosis, which is the most common cause of failure of tissue-engineered urethral reconstruction. Excessively deposited ECM in the urethral submucosa is believed to be closely related to urethral restricture [11]. Significant graft fibrosis would thus invariably result in surgical failure and restricture in patients. In a preliminary study, we used rabbit fibroblasts in which the TGF-β gene was silenced by siRNA to repair the urethral submucosa [12,13]. These fibroblasts significantly inhibited ECM production in the rabbit urethral submucosa. However, such genetic technology is difficult to take from bench to bedside due to ethical and technical aspects. It is known that the TGF-β pathway plays an important part in a variety of fibrotic diseases [14-16], and that urethral tissue taken from patients with urethral strictures also overexpresses the TGF-β1 gene [17,18]. It was also reported that TGF-β1 injection into the urethra can generate a reproducible rat model of urethral spongiofibrosis [19]. Many studies have shown that the canonical Wnt/β-catenin signaling pathway is a downstream regulatory pathway of the TGF-β pathway [20-23]. TGF-β stimulates canonical Wnt signaling, and activation of canonical Wnt signaling contributes to the profibrotic effects of TGF-β [24]. Blocking the Wnt signaling pathway has been demonstrated to be effective in treating fibrosis in the skin, kidney, lung and other organs [25-30]. Inhibiting Wnt signaling instead of TGF-β signaling might therefore be a promising solution for urethral stricture, without the severe adverse effects caused by blocking TGF-β signaling. Wnt inhibitors that target the β-catenin/TCF interaction or β-catenin co-factor recruitment may represent potential therapeutic approaches for fibrosis [26]. Furthermore, a variety of small-molecule modulators of the Wnt pathway have been discovered, such as ICG-001 and PKF118-310. The Wnt pathway inhibitor ICG-001 used here is a small molecule with a molecular weight of 548.63 [31]. Electrospinning is an adaptable method for the fabrication of scaffolds [32]. Scaffolds fabricated by electrospinning exhibit high porosity and micro- to nano-scale topography, similar to the structure of the natural ECM, and are widely used in the engineering of various tissues [33]. Here, we constructed a novel electrospun nanofiber scaffold delivering ICG-001 through the co-axial electrospinning technique. The nanofiber was composed of collagen type 1 and poly(L-lactide-co-caprolactone) (P(LLA-CL)), which morphologically and structurally mimics the ECM of native tissue [34].
Collagen is an important extracellular matrix protein component possessing natural biocompatibility, and its application can promote the expansion and differentiation of urothelial cells; however, a collagen scaffold alone is mechanically rather brittle and fragile. Therefore, a combination of materials was made in order to fully satisfy the mechanical requirements of the tissue-engineered urethra. P(LLA-CL) is the copolymer of L-lactic acid and ε-caprolactone, which possesses good biocompatibility, biodegradability and mechanical properties, but because of its hydrophobic character it is not conducive to cell adhesion and proliferation. In our study, collagen and P(LLA-CL) were combined to obtain good biocompatibility and mechanical strength of the desired nanofiber [33]. Compared with the method of blending the drug with the polymer materials directly, core-shell co-axial electrospinning can decrease the burst release and protect the drug activity during fabrication [35,36]. In this study, we combined the co-axial electrospinning technique with the Wnt pathway inhibitor ICG-001 to produce a functional electrospun collagen/P(LLA-CL) nanofiber scaffold, then evaluated its anti-fibrosis effect in vitro and in a rabbit urethral defect model, in order to provide preliminary evidence and a foundation for large animal studies and clinical practice in the future.

Characteristics of Scaffolds
The thickness of the non-drug scaffolds was 0.75 ± 0.16 mm and that of the ICG-001 (Figure 1) delivering scaffold 0.78 ± 0.12 mm. The SEM images of the scaffolds showed that both the drug delivering and the non-drug delivering fibers formed a structure with high interconnection and porosity. For each scaffold, 200 fibers were measured: the drug delivering fiber diameter was 457 ± 82 nm (Figure 2A) and the fiber diameter without drug was 522 ± 177 nm (Figure 2B). Small intestinal submucosa (SIS, Cook Medical, IN, USA), a commercial substitute material for urethroplasty, was used as a control to compare the mechanical properties, which included tensile strength (Figure 2C) and strain at break (Figure 2D). Both scaffolds showed significantly higher tensile strength and strain at break than SIS. The non-drug and drug delivering scaffolds showed no significant difference.

In Vitro Release of ICG-001 from the Scaffolds
The controlled release of ICG-001 from the drug delivering collagen/P(LLA-CL) scaffolds was analyzed with high-performance liquid chromatography (HPLC) (Figure 3). The release of ICG-001 was tested from day one after the scaffold was immersed in the PBS solution. The theoretical weight of ICG-001 in a scaffold specimen is 0.1 mg, according to the total area and weight of the electrospun scaffold. In PBS, the release went through two stages: an initial burst release before day three, and a continuous release from day three to day 30. On day three, the percentage in the released solution was over 48% (0.048 mg). After day three, the release of ICG-001 was sustained and the curve showed a stable trend. The percentage reached 75% (0.075 mg) after day 30.

Cell Isolation and Identification
Successful epithelial cell (Figure 4A) and fibroblast (Figure 4C) cultures were obtained from the bladder and dermal tissue biopsies used in the study. Bladder epithelial cells were isolated and expanded until sufficient numbers of cells were obtained. The epithelial cells showed expression of pan-cytokeratin (Figure 4B), a specific marker of epithelial cells. The fibroblasts were identified with vimentin (Figure 4D).
The proliferation of each scaffold could provide a feasible environment for the cells, the relative absorption between the third and seventh day was significantly different. However, the relative absorption between each kinds of scaffold was not statistically significant which demonstrated that the ICG-001 was not toxic to the epithelial cells ( Figure 6). Cultured epithelial cells were seeded onto each scaffold with a density of 0.5 million/cm 2 . The structure with cells was cultured in D-KSFM for 1 week before being tested and implanted into the animal. Fibroblasts were cultured and passaged in six-well plates for the anti-fibrosis test. Cell Morphology and Proliferation on Scaffold On day three and seven, SEM was used to observe the growth of the cells on the scaffolds. The SEM showed that the epithelial cells attached well on both kinds of scaffolds on day three ( Figure 5A,B). The cells stretched peripherally to the pores of the scaffold and connected to each other. The cells proliferated and expanded to cover the majority of the scaffold on day seven ( Figure 5C,D). MTT assay showed a similar rate of proliferation of epithelial cells on both non-drug scaffold and drug delivering scaffold. The proliferation of each scaffold could provide a feasible environment for the cells, the relative absorption between the third and seventh day was significantly different. However, the relative absorption between each kinds of scaffold was not statistically significant which demonstrated that the ICG-001 was not toxic to the epithelial cells ( Figure 6). Immunofluorescence of Anti-Fibrosis Effect of Medium Released from Scaffold Fibroblasts were treated with TGF-β1 with or without ICG-001 for three days to evaluate the phenotype change of myofibroblasts and ECM expression. The signals of collagen type 1 and 3 was significantly reinforced in the TGF-β1 treated fibroblasts ( Figure 7A,E). Collagen type 1 and 3 expression levels were decreased in the fibroblasts treated with ICG-001 in TGF-β1 treated fibroblasts ( Figure 7B,F). The fibroblasts without TGF-β1 treatment ( Figure 7C,G) also showed decreased collagen type 1 and 3 expression levels after treatment of ICG-001 released medium ( Figure 7D,F). Immunofluorescence of Anti-Fibrosis Effect of Medium Released from Scaffold Fibroblasts were treated with TGF-β1 with or without ICG-001 for three days to evaluate the phenotype change of myofibroblasts and ECM expression. The signals of collagen type 1 and 3 was significantly reinforced in the TGF-β1 treated fibroblasts ( Figure 7A,E). Collagen type 1 and 3 expression levels were decreased in the fibroblasts treated with ICG-001 in TGF-β1 treated fibroblasts ( Figure 7B,F). The fibroblasts without TGF-β1 treatment ( Figure 7C,G) also showed decreased collagen type 1 and 3 expression levels after treatment of ICG-001 released medium ( Figure 7D,F). Real Time PCR Quantitative real-time PCR was performed at day three to evaluate the change of gene expression of fibroblasts at the mRNA level ( Figure 8). Collagen type 1 ( Figure 8A), collagen type 3 ( Figure 8B), α-smooth muscle actin (α-SMA) ( Figure 8C), Matrix metalloproteinase 1 (MMP1) ( Figure 8D), Tissue inhibitor of metalloproteinases 1 (TIMP1) ( Figure 8E) and β-catenin ( Figure 8F) were analyzed for fibroblasts treated with TGF-β1 or not. Compared with the TGF-β1 treatment group, the ICG-001 culture medium could decrease the expression of collagen type 1, 3 and α-SMA significantly and elevate the mRNA expression of MMP1 and TIMP1 significantly. 
The ICG-001 medium also decreased the mRNA expression of collagen type 1 and elevated MMP1 and TIMP1 significantly compared with untreated fibroblasts.

Western Blot
Western blot was performed for relative quantitative analysis of collagen type 1, collagen type 3, fibronectin and α-SMA (Figure 9A). The quantitative results were consistent with the immunofluorescence staining and real-time PCR: the expression of collagen type 1 (Figure 9B), collagen type 3 (Figure 9C), fibronectin (Figure 9D) and α-SMA (Figure 9E) relative to β-actin decreased with the addition of culture medium from the ICG-001 delivering scaffold, especially in the groups of TGF-β1 treated fibroblasts.

Urethrography and Surgery Outcomes
Twelve rabbits in two groups underwent operations and survived the three-month observation period. Five out of six rabbits in group 1 developed narrow urethral lumens according to the urethrography (Figure 10C) and one rabbit developed a fistula at the penile skin (Figure 10B). All the rabbits in group 2 showed unrestricted lumens (Figure 10D) and no sign of fistula at the penile skin was found. The results demonstrated that epithelial cell seeded collagen/P(LLA-CL) scaffolds were successfully used in the reconstruction of 2 cm urethral defects in rabbit models.

Figure 10. The surgery process and complications. The tubularized scaffolds were implanted into the urethral defects (A); a fistula developed at the penile skin in group 1 (B), the red arrow indicating the position of the fistula. Representative images of retrograde urethrography after rabbit urethroplasty with the non-drug scaffold (C) and the ICG-001 delivering scaffold (D); the lumen diameters of the urethras are shown in (E). The blue arrow indicates the surgery position. * p < 0.05.

Histology and Immunohistology Results
In the histology tests of group 1, the lumen surface formed a discontinuous epithelial layer according to the H&E staining (Figure 11A) and the AE1/AE3 (Figure 11C) immunohistology image. The tissue in the urethra showed a large amount of collagen and less smooth muscle according to the Masson staining image (Figure 11B). In group 2, however, the epithelial cells developed a multi-layer epithelium (Figure 11D,F). The tissue in the submucosa developed more smooth muscle and less collagen in the Masson staining image (Figure 11E). The quantitative analysis with ImageJ showed significant differences in collagen (Figure 11G), smooth muscle (Figure 11H) and epithelium (Figure 11I) between the two groups (n = 10, * p < 0.05).
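The relative collagen and smooth muscle areas were quantified in ImageJ; the sketch below re-expresses the same idea in Python — mask the tissue, threshold a stain-dominant channel, and report its area fraction. The channel and threshold values are illustrative placeholders, not the settings used in the study.

```python
import numpy as np

def stain_area_fraction(rgb, channel=2, thresh=150):
    """Fraction of tissue area positive for a stain in an RGB section image.

    In Masson trichrome staining collagen appears blue, so thresholding
    the blue channel is a crude proxy; `channel` and `thresh` are assumed
    values for illustration only.
    """
    img = np.asarray(rgb)
    tissue = img.mean(axis=2) < 240                     # drop bright background
    positive = (img[..., channel] > thresh) & tissue    # stain-dominant pixels
    return float(positive.sum()) / max(int(tissue.sum()), 1)
```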
Discussion
In the present study, it was demonstrated that the biocompatibility and mechanical properties of the ICG-001 delivering collagen/P(LLA-CL) electrospun scaffold were sufficient for bladder epithelial cell proliferation and that the scaffold could be applied to urethroplasty. The co-axial electrospun scaffold released ICG-001 in a controlled fashion and had obvious anti-fibrosis effects. Ideal scaffolds should fulfil many tasks, such as providing the necessary mechanical properties and delivering inductive biomolecules [37,38]. Scaffolds made of nanofibrous materials by electrospinning have been applied in tissue engineering, both for replacement and for regeneration [33,39]. Recently, great efforts have been made in the field of biomaterials to discover new methods of delivering therapeutic agents through the co-axial electrospinning technique [40,41]. Release of molecules from scaffolds can significantly improve the scaffold's ability to direct tissue regeneration in vitro and in vivo [42,43]. As is well known, gene techniques are an efficient approach to influence cell destiny; however, such genetic technology is very difficult to apply to patients due to ethical and technical aspects [44]. Therefore, biomaterials delivering signaling pathway inhibitors are more attractive for surgeons in the clinic. The core-shell co-axial electrospinning secured a controlled release of nearly 75% of the ICG-001 in the scaffolds over as long as 30 days. According to the instructions of the manufacturer, the pharmaceutical activity of ICG-001 can be maintained for three months at room temperature. It was reported that the urethral scar forms in the first month after urethral injury [45,46], so the pharmaceutical activity could be sufficient for reducing scar formation in future in vivo studies. Wnt signaling has been implicated in the pathogenesis of various human disorders such as fibrotic diseases [47].
Activation of canonical Wnt signaling induces fibroblast activation with excessive ECM release, resulting in tissue fibrosis that disrupts the normal physiological tissue architecture [48]. Beyer et al. demonstrated that the use of small-molecule drugs to inhibit the Wnt pathway is an effective method for the treatment of fibrosis; it can obtain an effect similar to inhibiting the TGF-β pathway, but with good cell tolerance [26]. In the SEM and MTT assays of epithelial cells on the scaffolds, the proliferation rate was not influenced significantly by ICG-001, which demonstrated that the biocompatibility of ICG-001 is acceptable. Many studies on the molecular mechanisms of Wnt signaling pathway inhibitors have been performed; however, to our knowledge, there is no previous report on combining Wnt signaling interference with regenerative medicine. We used the culture medium released from the ICG-001 delivering scaffold to inhibit the ECM expression of fibroblasts in vitro. In the immunofluorescence studies, the highest expression of collagen types 1 and 3 was noticed in the TGF-β1 treated group; when combined with the ICG-001 released medium, the expression showed an obvious decline. This phenomenon was also found in the group with ICG-001 released medium alone compared with the control group. Real-time PCR revealed the effects of ICG-001 at the RNA level. RNA expression of collagen type 1, collagen type 3, and α-SMA was significantly reduced in the groups with ICG-001 released medium compared with the TGF-β1 alone and control groups. α-SMA is a biomarker of myofibroblasts, which demonstrates the phenotype change of fibroblasts and higher expression of ECM genes [49]. Myofibroblasts are considered the main cause of scar formation in many fibrosis-related diseases [50]. The RNA expression of both MMP1 and TIMP1 was elevated when ICG-001 was applied; the balance of these two genes is important for scar formation. Although TIMP1 was elevated along with MMP1, the ultimate ECM gene expression was down-regulated. Western blot results demonstrated that the ICG-001 released solution could inhibit the production of ECM-related proteins at the protein level, including α-SMA, collagen types 1 and 3, and fibronectin. The intensity of the western blot bands in the ICG-001 group was decreased significantly compared with the TGF-β1 treated group and the control group, respectively. The results were consistent with those of the immunofluorescence and real-time PCR. Based on the in vitro study, we investigated the therapeutic effect of the ICG-001 delivering scaffold in the rabbit urethral defect model. Collagen deposition was inhibited significantly by the ICG-001 delivering scaffold according to the histology. ICG-001 could be released from the scaffold gradually after transplantation into the urethral defect, inhibiting inflammation and ECM deposition during the healing process of the urethra. In our previous meta-analysis of anti-fibrosis drugs for urethral stricture, the efficacy of the various drugs was not yet verified [51]. This functional ICG-001 delivering scaffold might become a candidate for protecting the urethra from recurrent stricture. It was reported that the airway epithelial cell layer could be maintained by ICG-001 and that airway epithelial cell apoptosis was significantly decreased in bleomycin-induced animal models [29].
This property of ICG-001 might play an important role for urethral epithelial cells; thus, besides the anti-fibrosis effect, the epithelial layer in the experimental group was thicker than in the control group. The limitation of the present study is that a more appropriate animal model is needed to evaluate the treatment outcomes for post-traumatic urethral strictures.

Fabrication of Nanofibrous Scaffold Delivering ICG-001
Collagen/P(LLA-CL) scaffolds delivering ICG-001 (Figure 1) were constructed using a co-axial electrospinning device (Donghua University, Shanghai). The solution for the core layer was 1 g collagen/P(LLA-CL) dissolved in 2,2,2-trifluoroethanol; it was then mixed with 1 mg ICG-001 in 60 µL DMSO solution and injected at a rate of 0.2 mL/h. The solution for the shell layer was 1 g collagen/P(LLA-CL) dissolved in 2,2,2-trifluoroethanol and fed at 0.8 mL/h. During scaffold fabrication, the room temperature was kept within 22-25 °C and the relative humidity at 40%-50%. A stainless steel dish was used to collect the nanofibers. The distance between the sprayer tip and the receiving dish was set to 15 cm and the positive voltage was 18 kV. The scaffolds were kept under vacuum at room temperature for 48 h before use. A non-drug co-axial collagen/P(LLA-CL) electrospun nanofiber scaffold was fabricated as a control; all processes were the same, but 60 µL DMSO without ICG-001 was added to the core solution.

Scanning Electron Microscopy of Scaffolds
Scanning electron microscopy (SEM, Hitachi TM-100, Tokyo, Japan) was used to observe the morphology of the scaffolds. Specimens were punched into 1.2 cm diameter disks and cryopreserved at −80 °C for 2 h, then freeze-dried overnight and preserved in a vacuum container. The scaffold specimens were imaged under SEM. Nanofiber diameters were measured on 200 fibers with the image analysis software ImageJ.

Mechanical Property Evaluation
To compare the mechanical properties of the different scaffolds, small intestinal submucosa (SIS), a commercial biomaterial for urethral reconstruction, was used as the control material. Tensile strength was measured with an Instron tensile tester (model 5544; Norwood, MA, USA). All samples were prepared as longitudinal strips (20 mm in length and 10 mm in width). Each scaffold sample was fixed onto the clamps and pulled at 5 mm/minute crosshead speed until rupture. Burst pressure was measured by gradually increasing the hydrostatic pressure within the scaffolds at a rate of 80 mmHg/min. Bluehill software (Norwood, MA, USA) was then used to calculate the maximal tensile strength.

In Vitro Release Test with High-Performance Liquid Chromatography
The ICG-001 delivering scaffold was punched into disks of 1.2 cm in diameter and weighed; the weight was 60 mg. The disks were then placed into 1.5 mL Eppendorf tubes. Each tube was filled with 1 mL phosphate buffered saline (PBS) solution, sealed tightly, and incubated in an air rotator at 100 rpm at 37 °C. At the initial time point and at predetermined time intervals, 100 µL of the supernatant in the tubes was collected and an equal volume of fresh PBS was added. The release of ICG-001 into the buffer was detected by high-performance liquid chromatography (HPLC) in the Biomaterials and Tissue Engineering Laboratory of Donghua University, China. The eluent used in the HPLC process was acetonitrile (Jinjingle Chemical Engineering, Shanghai, China). A standard curve was made with a gradient dilution of the ICG-001 DMSO solution from 10 to 10,000.
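Because 100 µL of supernatant is withdrawn and replaced with fresh PBS at each time point, the cumulative release has to account for drug already removed in earlier samples. The sketch below applies the standard sampling-and-replacement correction used in such assays; the 0.1 mg loading per specimen is taken from the release results above, and the function itself is an illustration, not the study's analysis code.

```python
def cumulative_release_percent(concs_mg_per_ml, V=1.0, v=0.1, loaded_mg=0.1):
    """Cumulative ICG-001 release (%) from sequential HPLC concentrations.

    V  -- total medium volume in the tube (1 mL PBS)
    v  -- volume withdrawn and replaced at each time point (0.1 mL)
    At time point n the released mass is C_n * V plus the drug already
    carried away in the n-1 earlier 0.1 mL samples.
    """
    released, withdrawn = [], 0.0
    for c in concs_mg_per_ml:
        released.append(100.0 * (c * V + withdrawn) / loaded_mg)
        withdrawn += c * v
    return released
```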
The release experiments were performed in triplicate, compared with the standard curve every three days, and completed at 30 days. The data were obtained and carefully analyzed to determine the concentration of ICG-001 released from the specimens at each immersion time point.

Cell Isolation and Identification
All the animal experiments were in accordance with the guidelines for animal care. The animal protocol (SYXK 2011-0128) was approved by the animal ethics committee of Shanghai Sixth People's Hospital, Shanghai, China. The project identification code is 14JC1492100, approved from 1 September 2014 by the Science and Technology Commission of Shanghai, China. Bladder biopsies and epithelial cell harvesting were performed on 12 male New Zealand white rabbits. The rabbits were pretreated with 15 mg/kg ketamine, 2 to 3 mg/kg xylazine and 0.75 mg/kg acepromazine intramuscularly, then anesthetized and maintained with 2% isoflurane. A small laparotomy incision was made above the pubic symphysis to expose the bladder. A 2 × 2 cm biopsy specimen was excised from the bladder wall, and the defect was closed with 3-0 polyglactin sutures in 2 layers. The rabbits were given 5 mg/kg enrofloxacin intramuscularly for 3 days after the operation. The specimen was processed under sterile conditions. It was washed with PBS containing 100 IU/mL penicillin and 100 µg/mL streptomycin. The epithelium was scraped from the smooth muscle layer after treatment with dispase type 2 enzyme (Roche) at 4 °C overnight. The epithelium was then cut into small fragments and incubated in 0.25% trypsin solution for 15-30 min. The cell suspension was collected and cultured in 10 cm culture dishes coated with 1% type 1 rat tail collagen. Defined keratinocyte serum-free medium (DKSFM, Life Technologies, Carlsbad, CA, USA) with supplements was used as the culture medium. Before being seeded onto the scaffolds, the epithelial cells were identified with the anti-pan cytokeratin antibody AE1/AE3 (ab27988; Abcam, Cambridge, UK). Dermal fibroblasts were used to perform the collagen inhibition assay in this study. Two cm² of rabbit dermal tissue was excised from the abdomen of one rabbit and rinsed under sterile conditions, then cut into small pieces and digested with collagenase type 1. The cells were cultured in 6-well plates with DMEM supplemented with 10% fetal calf serum. They were identified with a vimentin antibody before use (Santa Cruz, Dallas, TX, USA).

In Vitro Analysis of Epithelial Cell-Seeded Scaffolds
To avoid wasting ICG-001 in a sterilization process with 70% ethanol, the scaffolds were sterilized with ultraviolet light for 2 h. The epithelial cells were seeded on the surface of the scaffolds at 5 × 10⁵ cells/cm². Cells were cultured with defined keratinocyte serum-free medium (DKSFM) for 7 days before use. At days 3 and 7, the cell-seeded scaffolds were rinsed with PBS to remove non-adherent cells. The cells with scaffolds were then fixed in 2.5% glutaraldehyde for 30 min at room temperature. Afterwards, they were dehydrated through a series of graded alcohol solutions. The drying process was conducted with a critical point dryer (Donghua University, Shanghai). The scaffolds were sputter coated with gold-palladium (AuPd) and examined under SEM at 12 kV.

MTT Assay
Cell proliferation was tested quantitatively using the MTT assay at days 1, 3 and 7. Cells on the scaffolds were incubated with MTT (5 mg/mL in DMEM without phenol red; Sigma-Aldrich, St. Louis, MO, USA).
After 3 h of incubation, the medium was transferred into the wells of a 96-well plate, and absorbance was read at 490 nm in a Synergy plate reader.

Fibroblast Induction and Solution Preparation

Ten thousand fibroblasts were transferred to each well of 4-well chamber slides and cultured overnight. The ICG-001 delivering scaffold was sterilized with ultraviolet light for 2 h. To prepare the ICG-001 medium, 120 mg of ICG-001 delivering scaffold was immersed in 2 mL of complete culture medium for 24 h. TGF-β1 (Life Technologies) at 5 ng/mL was used to induce the phenotype change from fibroblasts to myofibroblasts and the overexpression of ECM, according to the protocol [52]. Four groups were set up by adding TGF-β1 and/or ICG-001 solution to fibroblasts. Group 1: TGF-β1; Group 2: TGF-β1 + ICG-001 solution; Group 3: untreated fibroblasts; and Group 4: ICG-001 solution. After three days of culture, the fibroblasts were used in the various tests.

Immunofluorescence

The primary monoclonal antibodies were anti-collagen type 1 and anti-collagen type 3 from mouse (Sigma, St. Louis, MO, USA). The cells were treated with 0.2% Triton X-100 for 10 min at room temperature and incubated with the primary antibody for 60 min at 37 °C; the cells were then rinsed with PBS three times and incubated with a fluorescent-labeled secondary antibody (donkey anti-mouse IgG (heavy + light chain) secondary antibody, Alexa Fluor 488 and 594 conjugates; Thermo Fisher Scientific, Waltham, MA, USA) for 30 min at 37 °C. The nuclei were stained with Fluoroshield Mounting Medium with DAPI (Sigma, St. Louis, MO, USA). The cells were examined by fluorescence microscopy.

RNA Extraction and Real-Time PCR

At day 3 of treatment, total RNA was extracted for the quantification of collagen type 1, type 3, TIMPs, MMPs, β-catenin, and α-SMA (RNeasy maxi kit, Qiagen, Valencia, CA, USA), and cDNA was synthesized (TaqMan RT, Roche Molecular Biochemicals, Indianapolis, IN, USA). TaqMan probes were applied for the quantification of the target genes, with β-actin used as an endogenous quality control.

Western Blot Analysis

At day 3, western blot analysis was conducted to analyze the relative expression levels of collagen types 1 and 3, fibronectin, and α-SMA in fibroblasts treated with TGF-β1 and with culture medium released from the ICG-001 delivering scaffold. Lysis buffer (radio-immunoprecipitation assay buffer with phenylmethanesulfonyl fluoride; Thermo Scientific, Waltham, MA, USA) was used to extract the proteins. After running on a 6% gel, proteins were transferred to nitrocellulose membranes (Bio-Rad, Hercules, CA, USA). Membranes were blocked with Tris-buffered saline with 0.1% Tween-20 (TBST) containing 5% nonfat dry milk at room temperature, incubated with primary antibodies at 4 °C overnight, and subsequently with an HRP (horseradish peroxidase)-conjugated goat anti-mouse secondary antibody for 1 h at room temperature. An anti-β-actin antibody was used as the protein loading control. The results were quantified with Quantity One and are shown as expression relative to β-actin.

Rabbit Urethroplasty

Twelve male New Zealand white rabbits were divided into two groups. The six rabbits in group 1 were treated with the non-drug scaffold seeded with epithelial cells; group 2 was treated with the ICG-001 delivering scaffold seeded with epithelial cells. After general anesthesia with intravenous injection of pentobarbital, Foley F8 silicone catheters (Suzhou, Jiangsu, China) were inserted into the urethra of 18 male rabbits.
All surgeries were performed by a urologist. Briefly, the skin approximately 3 cm from the external urethral orifice was incised, and the urethra was dissected from the corpus cavernosum. Ventral urethral defects (mean length 2.0 cm, width 0.8 cm) were created in the bulbar urethra of the rabbits. The scaffolds (length 2 cm, width 1 cm) were tubularized and sutured to form tubes, then sutured to the urethral defect with 6-0 absorbable polyglactin sutures. The 8F silicone catheter was left in the urethra and fixed to the glans of the rabbit with 6-0 absorbable sutures for 14 days postoperatively. The animals were observed twice a day before the catheters were removed; if an animal removed its catheter, a new catheter was reinserted under anesthesia. Euthanization of the rabbits in the two groups was planned after 3 months.

Urethrography

Retrograde urethrograms were obtained for the animals in both groups to assess urethral caliber before the animals were euthanized.

Histology and Immunohistology Assessment

The urethras were harvested for histological analysis. Hematoxylin and eosin (H&E) staining, Masson trichrome staining, and the AE1/AE3 immunohistology test were conducted to identify the epithelial layer, smooth muscle, and collagen. The Masson staining images were used for collagen and smooth muscle analysis, and the AE1/AE3 staining images were used for epithelial analysis.

Statistical Analysis

Results are expressed as mean ± standard deviation. SPSS statistical software 16.0 (Chicago, IL, USA) was applied to analyze the data by one-way analysis of variance; p < 0.05 was considered statistically significant.

Conclusions

We successfully constructed ICG-001 delivering collagen/P(LLA-CL) scaffolds with good mechanical properties. The in vitro study verified the biocompatibility of the scaffold and the long-term release profile of ICG-001 from it. Immunofluorescence, PCR, and western blot results demonstrated that the ICG-001 delivering scaffolds significantly inhibit ECM expression by fibroblasts. The results obtained with the rabbit urethral defect model provide a foundation for further study, with potential clinical applications in the future.
2016-03-22T00:56:01.885Z
2015-11-01T00:00:00.000
{ "year": 2015, "sha1": "c2a1b1ee4ab2a0b4924d3a596391ac8bf09c4708", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/16/11/26050/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c2a1b1ee4ab2a0b4924d3a596391ac8bf09c4708", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
5678318
pes2o/s2orc
v3-fos-license
Adverse Prognostic Impact of Bone Marrow Microvessel Density in Multiple Myeloma

Background: Angiogenesis is important for the proliferation and survival of multiple myeloma (MM) cells. Bone marrow (BM) microvessel density (MVD) is a useful marker of angiogenesis and is determined by immunohistochemical staining with an anti-CD34 antibody. This study investigated the prognostic impact of MVD and examined the relationship between MVD and previously reported prognostic factors in patients with MM. Methods: The study included 107 patients with MM. MVD was assessed at initial diagnosis in a blinded manner by two hematopathologists, who examined three CD34-positive hot spots per patient and counted the number of vessels in BM samples. Patients were divided into three groups according to MVD tertiles. Cumulative progression-free survival (PFS) and overall survival (OS) curves, calculated by the Kaplan-Meier method, were compared among the three groups. The prognostic impact of MVD was assessed by calculating Cox proportional hazard ratios (HRs). Results: The median MVDs in the three groups were 16.8, 33.9, and 54.7. MVD was correlated with other prognostic factors, including β2-microglobulin concentration, plasma cell percentage in the BM, and cancer stage according to the International Staging System. Multivariate Cox regression analysis showed that high MVD was an independent predictor of PFS (HR = 2.57; 95% confidence interval, 1.22-5.42; P = 0.013). PFS was significantly lower in the high MVD group than in the low MVD group (P = 0.025); however, no difference was observed in OS (P = 0.428). Conclusions: Increased BM MVD is a marker of poor prognosis in patients newly diagnosed with MM, and BM MVD should be assessed at the initial diagnosis of MM.

INTRODUCTION

Angiogenesis, i.e., the formation of new blood vessels, plays an important role in the proliferation and survival of neoplastic cells. Increased angiogenesis is an adverse prognostic factor in hematologic malignancies, including non-Hodgkin's lymphomas and acute B-cell lymphoblastic and myeloblastic leukemias, as well as in solid tumors [1-3]. Although several studies have assessed BM MVD in Korean patients with MM, the association of MVD with patient survival and disease progression has not yet been clarified. This study investigated the prognostic impact of MVD in Korean patients newly diagnosed with MM.

Study population

This retrospective analysis included 107 patients (median age, 64 yr) newly diagnosed with symptomatic MM through a comprehensive diagnostic workup at the National Cancer Center, Goyang, Korea, between December 2001 and April 2012 [12]. Their clinical and laboratory characteristics are listed in Table 1. BM biopsy was performed in all patients at initial diagnosis, and the biopsied samples were used for quantifying MVD by staining with an anti-CD34 antibody. All patients had symptoms of end-organ damage and received intensive chemotherapy. Of the 107 patients, four were eligible for hematopoietic stem cell transplantation while 51 were not. The initial treatment regimens were diverse because of the retrospective nature of this study: 21 patients received bortezomib-based regimens, 20 patients received melphalan-based regimens, and 35 patients received thalidomide-based regimens. Thirty-one patients underwent hematopoietic stem cell transplantation (HSCT), including three who underwent allogeneic transplantation.
PFS was defined as the time from the start of first-line treatment to disease progression or death from any cause [13], and OS was defined as the time from the start of chemotherapy to death from any cause.

Decalcification and IHC staining of BM biopsy samples

The proportion of plasma cells and the overall cellularity were estimated using the biopsied BM samples. Paraffin-embedded samples were decalcified in 10% neutral-buffered formalin (Australian Biostain Pty. Ltd., Traralgon, Australia) according to standard procedures. Thin-layer sections were prepared and stained with hematoxylin and eosin (H&E) and antibodies against CD138, CD34, and immunoglobulin kappa and lambda light chains. IHC staining for CD34 was performed using the ultraView Universal DAB Detection Kit (Ventana Medical Systems Inc., Tucson, AZ, USA) on the Ventana Benchmark XT platform, according to the manufacturer's instructions. The slides were immersed in citrate buffer and boiled for 30 min in a microwave for antigen retrieval. The slides were then dewaxed, pretreated with a mild cell conditioning 1 buffer (CC1, Ventana Medical Systems Inc., Tucson, AZ, USA), incubated with a 1:500 dilution of a primary antibody against CD34 (clone QBEnd10, Novocastra, Leica Biosystems, Newcastle upon Tyne, UK) for 32 min, counterstained, and mounted.

Table 1 notes: Values are presented as median (interquartile range). *P < 0.05 between patients with low MVD and those with high MVD; †patients were classified into three groups based on the tertiles of MVD; ‡ISS: I, β2-microglobulin < 3,500 μg/L and albumin ≥ 3.5 g/dL; II, not fitting stage I or III; III, β2-microglobulin ≥ 5,500 μg/L. Two patients could not be staged because β2-microglobulin values were unavailable. Abbreviations: MVD, microvessel density; ISS, International Staging System; DS, Durie-Salmon; LDH, lactate dehydrogenase; BM, bone marrow.

Calculation of MVD

MVD was estimated manually by two independent hematopathologists in a blinded manner with a microscope (Zeiss, Jena, Germany), as described previously but with some modifications [14]. Briefly, slides were scanned at 100× magnification to identify areas of conspicuously increased MVD ('hot spots'). Three hot spots were identified per patient, and stained vessels, including arterioles and venules, were counted in each hot spot at 400× magnification (area covered per spot, 0.24 mm^2). Round CD34-positive cells showing distinct nuclei were considered hematopoietic precursors and were excluded from the analysis, as were stained cells in the trabecular bone and periosteum. Finally, the vessel counts of the three hot spots were averaged (Fig. 1).

Statistical analysis

Correlations between the MVDs determined by the two hematopathologists, as well as correlations between MVD and other prognostic factors assessed at diagnosis (anthropometric laboratory values, factors included in staging systems, and molecular parameters), were evaluated using a parametric method. For the strength of correlation, we considered 0.10 ≤ r < 0.30 a weak correlation, 0.30 ≤ r < 0.50 a moderate correlation, and r ≥ 0.50 a strong correlation, following published guidance [15]. Results of FISH for translocations involving IGH/FGFR3 (t(4;14)) and IGH/MAF (t(14;16)) and deletion of 17p13.1 (TP53/17q23; MPO) (Kreatech Diagnostics, Amsterdam, The Netherlands) were included in the analysis [16].
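The correlation-strength convention just described is easy to encode. The sketch below is illustrative only: the example MVD and β2-microglobulin values are invented, and a Pearson correlation is assumed as the parametric method.

```python
from scipy.stats import pearsonr

def correlation_strength(r):
    """Classify |r| using the cutoffs adopted in this analysis."""
    r = abs(r)
    if r >= 0.50:
        return "strong"
    if r >= 0.30:
        return "moderate"
    if r >= 0.10:
        return "weak"
    return "negligible"

# Placeholder per-patient MVD counts and beta2-microglobulin concentrations.
mvd = [16.8, 25.0, 33.9, 41.2, 54.7, 60.1]
b2m = [2.1, 2.9, 3.6, 4.4, 5.8, 6.5]

r, p = pearsonr(mvd, b2m)
print(f"r = {r:.2f} ({correlation_strength(r)}), P = {p:.3f}")
```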
The patients were divided into three groups on the basis of MVD tertiles. Cumulative PFS and OS curves for each group were calculated with the Kaplan-Meier method and compared with the log-rank test. The prognostic impact of MVD on PFS and OS was assessed with a Cox proportional hazards model. Statistical significance was set at P < 0.05. All statistical analyses were performed with MedCalc for Windows, version 12.5 (MedCalc Software, Ostend, Belgium).

Calculation, subgrouping, and interindividual comparison of MVDs

Manual assessment produced the estimated mean (SD) MVDs per group. Four patients had t(4;14), and one of the four also had t(14;16). Patients in the high MVD group had a significantly higher mean serum β2-microglobulin concentration (P = 0.013), plasma cell percentage (P = 0.002), and cellularity (P < 0.001) in the BM aspirates, but a significantly lower hemoglobin concentration (P = 0.001), than patients in the low MVD group. In addition, patients in the high MVD group had a higher cancer stage, as determined by the International Staging System (ISS) and Durie-Salmon (DS) staging, than patients in the low MVD group (Table 1).

DISCUSSION

In this study, 107 patients newly diagnosed with MM who received intensive chemotherapy were retrospectively analyzed to assess the impact of MVD on PFS and OS and the correlation between MVD and other clinical parameters of MM. The results showed that increased angiogenesis, as measured by MVD, was significantly associated with reduced PFS. Moreover, MVD was significantly correlated with previously established prognostic factors such as hemoglobin concentration, β2-microglobulin concentration, ISS stage, BM plasma cell percentage, and BM cellularity. The present findings are similar to those of previous studies. In a study of 110 patients with MM classified into four groups by MVD severity, angiogenesis was significantly lower in complete responders than in non-responders; independent prognostic factors in complete responders included the lowest MVD grade and a serum β2-microglobulin concentration of < 3,400 ng/dL [14]. In the present study, we obtained a positive HR for patients with a β2-microglobulin concentration of > 3.8 mg/dL. Further, multivariate analysis showed that high MVD was significantly associated with shorter PFS after adjustment for hemoglobin, LDH, and β2-microglobulin concentrations. A study of 88 patients newly diagnosed with MM showed that MVD at initial diagnosis was correlated with PFS and OS [17]. In that study, patients with high MVD had a median PFS of 21 months, longer than the 10.2 months observed in the present study; this discrepancy may be due to the lower median age and lower median MVD in the previous study. Although several studies have shown that angiogenesis, estimated by MVD, is increased in Korean patients with MM, the correlation between MVD and other prognostic factors, as well as the effect of MVD on survival, had not been determined accurately. A previous study reported that MVD was weakly correlated with the plasma cell percentage in BM samples [18]; although MVD was significantly higher in patients with MM than in controls in that study, it did not significantly affect survival.
Another study involving 75 patients with MM showed that age < 65 yr, hemoglobin ≥ 8.5 g/dL, platelet count ≥ 100,000/μL, serum albumin ≥ 3.0 g/dL, serum calcium < 12.0 mg/dL, serum creatinine < 2.0 mg/dL, serum β2-microglobulin < 4.0 μg/dL, and a plasma cell percentage < 30% were significantly associated with longer OS [19]. However, that study also showed that VEGF concentration and MVD were not significant prognostic factors in patients with MM. Another study involving 21 patients with MM treated with high-dose chemotherapy and autologous stem cell transplantation showed no significant difference in OS between patients with an MVD reduction of > 50% and those with a reduction of < 50% [8]. The results of these studies indicated that MVD had no prognostic value; they are inconsistent with the results of our study because those studies assessed only OS. The results of the present study indicate that high MVD is a prognostic factor, especially for disease progression, in patients with MM. Therefore, high MVD can be considered an independent predictor of poor prognosis in Korean patients with MM, and MVD should be measured on the routine BM biopsy performed at initial diagnosis. The growth of myeloma cells and the survival of patients with MM are associated with increased angiogenesis in the BM microenvironment, which promotes the metastasis of myeloma cells [3,20]. These findings, along with the findings of studies analyzing microvessels in patients with MM, emphasize the importance of angiogenesis and suggest that antiangiogenic therapy could be effective in treating MM [1,13,20,21]. However, the present study has some limitations, including its retrospective design and the manual estimation of MVD. We divided the patients into three groups based on MVD tertiles but did not include a control group. In addition, manual counting of MVDs in BM samples is limited by interobserver differences, especially in counting very small arterioles and venules and in excluding areas of non-specific staining. Because the difference between the mean (SD) MVDs determined by the two independent hematopathologists was statistically significant, more standardized MVD assessments, such as those involving computerized image analyzers, should be performed in the future.
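For readers who wish to reproduce this type of analysis, the following sketch assembles the tertile grouping, log-rank comparison, and Cox model described in the Methods using the lifelines library; the six-patient data frame is a toy placeholder, not the study data.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

# One row per patient: averaged hot-spot MVD, PFS time (months), and an event
# indicator (1 = progression/death observed, 0 = censored). Values are toys.
df = pd.DataFrame({
    "mvd":   [14.2, 18.0, 30.1, 35.5, 52.3, 58.9],
    "pfs":   [30.0, 28.5, 20.0, 18.2, 10.5,  9.8],
    "event": [0, 1, 1, 1, 1, 1],
})

# Split patients into tertiles of MVD (low / intermediate / high).
df["group"] = pd.qcut(df["mvd"], 3, labels=["low", "mid", "high"])

# Log-rank comparison of PFS between the low- and high-MVD groups.
low, high = df[df.group == "low"], df[df.group == "high"]
lr = logrank_test(low.pfs, high.pfs,
                  event_observed_A=low.event, event_observed_B=high.event)
print("log-rank P =", lr.p_value)

# Cox proportional-hazards model for the prognostic impact of MVD on PFS.
cph = CoxPHFitter()
cph.fit(df[["mvd", "pfs", "event"]], duration_col="pfs", event_col="event")
cph.print_summary()
```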
2017-08-15T06:52:32.493Z
2015-09-01T00:00:00.000
{ "year": 2015, "sha1": "c5ac449487b9c6e4ca85b1614f6669aa63a4e490", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3343/alm.2015.35.6.563", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c5ac449487b9c6e4ca85b1614f6669aa63a4e490", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
261789776
pes2o/s2orc
v3-fos-license
A new integral-synergetic controller for direct reactive and active powers control of a dual-rotor wind system

This paper proposes a new integral-synergetic controller for direct active and reactive powers control (DARPC) of a grid-connected doubly-fed induction generator (DFIG) in dual-rotor wind power generation applications. The proposed DARPC strategy employs integral-synergetic control (ISC) to regulate the reactive and active powers of the DFIG-based variable-speed dual-rotor wind turbine system. The proposed ISC technique is the contribution of this work; it is a development of synergetic control whose most prominent features are simplicity and robustness. The main advantages of the proposed ISC-DARPC technique are ease of implementation, good dynamic response, simple structure, and constant switching frequency operation. Matlab software is used to validate the design of the ISC-DARPC technique, and the obtained results are compared with the traditional DARPC. In addition, the ISC-DARPC technique is able to strongly reduce ripples in both torque and active power during grid voltage imbalance or parametric changes of the DFIG.

Introduction

Direct active and reactive powers control (DARPC) is one of the most widely used control methods for wind power systems because it is a simple, easy, and robust control compared to the field-oriented control (FOC) technique. 1 This strategy belongs to the family of linear strategies that depend on switching tables to generate the control pulses of the inverter. The DARPC strategy produces fewer ripples in torque, current, and active power than both the direct and the indirect FOC strategies. 2 Moreover, this strategy offers high performance and is considerably more effective at improving system behavior than the FOC strategy. 3 The DARPC strategy does not need to know the mathematical model of the system under study, which allows it to provide a fast dynamic response and high performance compared to FOC and vector control; it has therefore been proposed as one of the most reliable solutions for systems that work with wind power (WP). 4 In general, the DARPC strategy is, together with direct torque control (DTC), among the best strategies currently available. In principle it follows the same idea as the DTC strategy, except that in the DARPC strategy both the active and the reactive power (Ps and Qs) are used as references. 2,3 On the other hand, the traditional DARPC strategy with a lookup table, applied to the power control of the doubly-fed induction generator (DFIG), decouples the powers well in WP systems. 5 However, the lookup table may not give satisfactory performance under parameter variations of the DFIG. The DARPC strategy with a lookup table gives a higher total harmonic distortion (THD) of voltage/current and larger Ps and Qs ripples as a result of using two hysteresis comparators to control the power of the generator. In Nafar and Mansouri, 6 the authors proposed the use of the DARPC technique applied to the DFIG-based WP system; however, applying this technique leads to large Ps and Qs ripples, a high current THD, and a variable switching frequency due to the hysteresis controllers. 7 The DARPC strategy has also been designed to control the Ps and Qs of the permanent magnet synchronous generator. 8
Recently, several DARPC strategies based on the DFIG have been presented in the literature, such as the neural DARPC strategy, 9 fuzzy DARPC control, 10 the neuro-fuzzy DARPC strategy, 11 sliding mode DARPC control, 12 the DARPC strategy with classical space vector modulation (CSVM), 13 DARPC control with super-twisting sliding mode (STSM) controllers, 14 the DARPC strategy with neural STSM controllers, 15 the DARPC strategy based on a genetic algorithm (GA), 16 the DARPC strategy based on a backstepping controller, 17 the DARPC strategy based on a terminal synergetic controller, 18 the DARPC strategy based on a third-order sliding mode controller, 19 the DARPC strategy based on a sliding mode controller (SMC), 20 and DARPC control with neuro-fuzzy STSM controllers. 21 These strategies for improving the performance of DARPC are numerous and differ from each other in complexity, simplicity, and ease of implementation, as well as in robustness and in the results obtained. All of the above strategies dispense with the two hysteresis comparators and the switching table, with the aim of increasing the robustness of the scheme and overcoming the defects of the traditional strategy. However, some of them increase the degree of complexity, as in the case of backstepping control and passivity control, which is undesirable. In addition, some of these strategies depend on the mathematical model of the system, which creates problems and defects when the system parameters change, and this too is undesirable. Synergetic control (SC) theory nevertheless remains one of the best nonlinear controls that can be proposed to overcome the disadvantages of the DARPC, as shown in the work done in Habib. 22 Compared to the SMC technique, the SC technique is simple, uncluttered, and reduces chattering significantly. In Abdesselem et al., 23 the authors propose a new SC technique based on the derivative and the integral of the macro-variable; this new design uses a double-loop control to improve the effectiveness and robustness of a DFIG controlled by a FOC strategy. Because of its importance, the DARPC strategy has attracted great interest among researchers, especially in the field of renewable energy, who have tried to overcome its problems using several approaches such as nonlinear controls, intelligent methods, and hybrid strategies. In Habib and Nicu, 24 DARPC with the SC technique was introduced for DFIG-based dual-rotor wind turbines (DRWTs) because of its simple algorithm, robustness, and easy implementation. In Habib and Hamza, 25 the authors designed a DARPC strategy with a GA technique applied to the DFIG-DRWT system; simulation shows the superiority of the designed technique. In Benbouhenni, 26 a five-level neural DARPC was designed to control the DFIG-based classical wind turbine; the 12-sector DARPC control structure reduced the current THD compared to the conventional DARPC strategy. In Lin et al., 27 a DARPC control scheme was designed based on integral SMC to control a three-level inverter, where the traditional integral-proportional (IP) controllers were replaced by an integral SMC technique; the results show the superiority of the designed technique. In Han and Ma, 28 a new adaptive-gain second-order SMC was proposed to improve the performance of the DARPC strategy of the DFIG; the numerical results verified the robustness, effectiveness, and superiority of the proposed strategy. In Habib et al., 29 the authors combined the SC and SMC strategies to overcome the
disadvantages of the DARPC; the combination of the two strategies showed high efficiency in improving the robustness of the DARPC of the DFIG-DRWT system, especially in reducing the Ps and Qs ripples. In addition, the super-twisting algorithm (STA) and the SC technique were combined in Habib and Lemdani 30 to improve current quality and to reduce the Ps and Qs ripples; the results showed the effectiveness of combining the two methods to overcome the disadvantages and problems of the DARPC of the DFIG-DRWT system.

In this paper, the focus is on the SC strategy and on giving it a new form that increases its efficiency and its ability to improve the characteristics of the DARPC. This paper is thus a development and modification of the work done in Habib et al. 29 and Habib and Lemdani. 30 To reduce the Ps and Qs ripples of the DARPC strategy, the use of a novel nonlinear technique is suggested in this work: a nonlinear technique based on the proposed integral-SC (ISC) is used to improve the quality of both the Ps and the Qs of the DFIG-DRWT system. The proposed control scheme takes into account the nonlinear nature of variable-speed DRWT behavior, the flexibility of the drive train, and the turbulent nature of the wind. Moreover, the designed ISC-DARPC control law is robust against variations of the DFIG parameters. This work presents the fundamental aspects of the ISC-DARPC strategy together with pertinent simulation results for variable-speed DFIG-DRWT systems, and the designed ISC-DARPC is compared with the classical DARPC strategy. By applying the proposed integral-SC law to a DFIG system with pulse width modulation (PWM), the inherent complexity of designing the controller is removed. The original contribution of this work is the application of the designed ISC technique to regulate the Ps and Qs of the DFIG-based variable-speed DRWT system using the classical two-level PWM technique. The numerical results validate that the ISC-DARPC strategy is very robust and gives a minimum THD of voltage and minimal power ripple compared to the classical DARPC strategy. This represents a new robust and low-ripple DARPC strategy for the DFIG-based variable-speed DRWT. The work carried out in this paper is, however, completely different from earlier works such as Habib et al. 29 in terms of the strategy used, the idea, the principle, and even the results obtained, which gives the technique suggested in this paper future prospects as one of the solutions that can be used in the field of wind power.
The proposed ISC-DARPC technique controls the Ps and Qs of the variable-speed DFIG-DRWT in order to improve the characteristics and performance of the traditional DARPC technique and to achieve the following results:

- improving the dynamic response of the variable-speed DFIG-DRWT system;
- improving the quality of Ps and Qs by minimizing the ripples of the current produced by the generator;
- minimizing the THD value of the stator current of the variable-speed DFIG-DRWT;
- increasing the robustness of the traditional DARPC technique;
- increasing the power captured from the wind.

This paper contains the following main sections. Section ''Introduction'' gives an introduction to this work, stating the contribution and the goal of this study. In section ''Model of DRWT,'' the mathematical model of the DRWT is briefly presented, followed by section ''Model of the DFIG.'' In section ''Proposed ISC controller,'' the designed integral-SC theory is discussed. Section ''Proposed DARPC-ISC strategy'' details the designed DARPC based on the ISC technique used in the power control. In section ''Analysis and results,'' the simulation results of the designed DARPC technique based on the ISC technique are presented and compared with those obtained by the traditional DARPC technique; finally, the conclusions drawn from this work are collected in section ''Conclusion.''

Model of DRWT

The use of wind power conversion systems has increased significantly in recent years. These systems can be classified into fixed-speed and variable-speed turbines. The advantages of variable-speed WP systems are the ability to achieve maximum power conversion efficiency, which ultimately increases WP production, and a significant reduction in mechanical stresses. In Yahdou et al., 31 a new WP system is presented in which two wind turbines are used to produce mechanical energy from the wind. This technology is called the DRWT, or counter-rotating wind turbine: two turbines of different capacities are located on the same axis and together convert WP into mechanical energy. This technology yields more mechanical energy than traditional wind turbines; in Habib et al., 32 the DRWT system provides about 20% to 30% more mechanical power than the output of a traditional WP system. It is also effective in weak winds compared with the older technology, 33 and the DRWT system is less affected by the wakes generated by neighboring turbines in wind farms than classical turbines are. The disadvantage of this technology is that it is expensive, difficult to control, and contains more mechanical parts than the traditional technology. 34 To control the DRWT, the main turbine is used, following the same strategy as for classical turbines: maximum power point tracking (MPPT). The MPPT strategy depends on the use of a PI controller, and to implement it one must first know the mathematical model of the DRWT. The generation system proposed in this work, based on the DRWT, is shown in Figure 1, where a DFIG is used to convert the captured mechanical power into electrical power. This proposed system helps to protect the environment and greatly reduces the area of wind farms. The total aerodynamic torque is the sum of the torques of the secondary and main rotors.
The aerodynamic torque of the auxiliary rotor (AR) is given by: 31

T_AR = ρ π R_AR^2 C_p(λ_AR, β) V_1^3 / (2 ω_AR)    (1)

The torque of the main rotor (MR) is given by:

T_MR = ρ π R_MR^2 C_p(λ_MR, β) V_MR^3 / (2 ω_MR)    (2)

where λ_AR and λ_MR are the tip speed ratios of the AR and MR, R_AR and R_MR are the blade radii of the AR and MR, ρ is the air density, and ω_AR and ω_MR are the mechanical speeds of the AR and MR. Equations (3) and (4) represent the tip speed ratios of the AR and MR, respectively:

λ_AR = ω_AR R_AR / V_1    (3)
λ_MR = ω_MR R_MR / V_MR    (4)

where V_MR is the speed of the unified wind on the MR and V_1 is the wind speed at the auxiliary rotor. Equation (5) represents the total torque of the DRWT system:

T_T = T_AR + T_MR    (5)

where T_MR is the MR torque, T_T is the total torque, and T_AR is the AR torque.

The essential quantity for calculating the tip speed ratios is the wind speed at the auxiliary and main turbines. Obtaining the wind speed at the AR is straightforward; the wind speed at the MR is given by: 31

V_x = V_1 [1 − ((1 − √(1 − C_T))/2) (1 + 2x/√(1 + 4x^2))]    (6)

where V_x is the velocity of the disturbed wind between the rotors at point x, C_T is the thrust coefficient, taken to be 0.9, and x is the non-dimensional distance from the auxiliary rotor disk. With the value of x corresponding to a separation of 15 m, the value of V_x close to the main rotor is computable (the rotors are located 15 meters apart). 35 Equation (7) represents the power coefficient of the wind turbine, which depends on the tip speed ratio and on the pitch angle (β); a commonly used empirical form is:

C_p(λ, β) = 0.5176 (116/λ_i − 0.4 β − 5) e^(−21/λ_i) + 0.0068 λ,
with 1/λ_i = 1/(λ + 0.08 β) − 0.035/(β^3 + 1)    (7)

Model of DFIG

The DFIG is among the most popular generator types in the field of WP systems due to its many advantages, such as durability, ease of control, and low cost. 36,37 To carry out the work proposed in this paper, the mathematical model of the DFIG must be given; for this, the Park transform is used. The stator and rotor voltage equations of the DFIG in the Park reference frame take the following form: 38-42

V_ds = R_s I_ds + dψ_ds/dt − ω_s ψ_qs,  V_qs = R_s I_qs + dψ_qs/dt + ω_s ψ_ds    (8)
V_dr = R_r I_dr + dψ_dr/dt − (ω_s − ω_r) ψ_qr,  V_qr = R_r I_qr + dψ_qr/dt + (ω_s − ω_r) ψ_dr    (9)

Stator and rotor flux components:

ψ_ds = L_s I_ds + M I_dr,  ψ_qs = L_s I_qs + M I_qr    (10)
ψ_dr = L_r I_dr + M I_ds,  ψ_qr = L_r I_qr + M I_qs    (11)

Equation (12) represents the torque of the DFIG:

T_em = (3/2) p (M/L_s) (ψ_qs I_dr − ψ_ds I_qr)    (12)

The powers of the DFIG are shown in Equation (13):

P_s = (3/2)(V_ds I_ds + V_qs I_qs),  Q_s = (3/2)(V_qs I_ds − V_ds I_qs)    (13)

Equation (14) represents the mechanical model of the DFIG:

J dΩ/dt = T_em − T_r − f Ω    (14)

where J is the inertia, f is the viscous friction coefficient, Ω is the mechanical rotor speed, and T_r is the load torque.
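To make the turbine model concrete, the following Python sketch evaluates the relations above for one operating point. It is illustrative only: the rotor radii, rotor speeds, and wind speed are placeholder values, and the Cp(λ, β) surface and the wake-deficit expression are the commonly used empirical forms assumed in the reconstruction of Equations (6) and (7), not parameters taken from this paper.

```python
import numpy as np

rho = 1.225                  # air density (kg/m^3)
R_ar, R_mr = 13.2, 41.7      # placeholder rotor radii of AR and MR (m)
beta = 0.0                   # pitch angle (deg)

def cp(lam, beta):
    """Commonly used empirical power-coefficient surface Cp(lambda, beta)
    (the assumed form of Eq. (7))."""
    lam_i = 1.0 / (1.0 / (lam + 0.08 * beta) - 0.035 / (beta ** 3 + 1.0))
    return (0.5176 * (116.0 / lam_i - 0.4 * beta - 5.0) * np.exp(-21.0 / lam_i)
            + 0.0068 * lam)

def rotor_torque(V, omega, R):
    """Aerodynamic torque T = P/omega with P = 0.5*rho*pi*R^2*Cp*V^3."""
    lam = omega * R / V                      # tip speed ratio, Eqs. (3)-(4)
    return 0.5 * rho * np.pi * R ** 2 * cp(lam, beta) * V ** 3 / omega

def wake_wind(V1, x, Ct=0.9):
    """Wind reaching the MR after the AR wake deficit (assumed form of
    Eq. (6)); x is the non-dimensional rotor separation."""
    a = (1.0 - np.sqrt(1.0 - Ct)) / 2.0
    return V1 * (1.0 - a * (1.0 + 2.0 * x / np.sqrt(1.0 + 4.0 * x ** 2)))

V1 = 11.0                     # wind speed at the auxiliary rotor (m/s)
w_ar, w_mr = 3.0, 1.2         # rotor mechanical speeds (rad/s), placeholders
V_mr = wake_wind(V1, x=1.0)   # x chosen arbitrarily for illustration

T_total = rotor_torque(V1, w_ar, R_ar) + rotor_torque(V_mr, w_mr, R_mr)  # Eq. (5)
print(f"V_mr = {V_mr:.2f} m/s, total torque = {T_total / 1e3:.1f} kN*m")
```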
Proposed ISC controller

During the last few years, several nonlinear strategies have been proposed to control electrical machines. The SC strategy is one of the newer nonlinear control techniques. It is characterized by its simplicity of design, its capability to reject external disturbances, and its assurance of the global stability of the system. SC is quite close to SMC in the sense that it forces the system to evolve with a dynamic pre-chosen by the designer. 22 The strategy does not require linearization of the model and explicitly uses a nonlinear model for the synthesis of the control. In Moati and Kouzi, 43 the authors suggested the use of direct flux and torque control with the SC strategy applied to a dual-stator induction motor drive, where the classical PI speed controller was replaced by the SC strategy. In Qian et al., 44 a photovoltaic system was designed based on SC theory. In Ademoye, 45 an electric power system was designed based on SC theory. SC theory and a genetic algorithm have been combined to control wind turbine systems, 46 and a power system with a superconducting magnetic energy storage system has been designed based on SC theory. 47 Yahi et al. 48 developed a speed controller for an induction motor based on SC theory, and in Zhao and Wang, 49 the authors proposed the combined use of SMC and SC theory to control a permanent magnet synchronous motor. However, the use of the SC technique in controlling electrical machines does not eliminate the torque and current ripples, so the problem of poor current quality remains, which is undesirable.

In this section, a novel form of the SC technique is presented, in which an integral term is used to improve the response of the SC technique; in works, 44-49 only the classic SC technique was used (to control photovoltaic systems, power systems, and wind turbine systems, respectively).

The first step in designing an SC strategy is the formation of macro-variables (surfaces) defined in terms of the system state variables, as algebraic relationships between those variables that reflect the performance requirements of the system design. The macro-variable of the SC strategy is given by:

C = C(x, t)    (15)

The control makes the system operate on the manifold C = 0, and the dynamic evolution of the macro-variable toward this manifold is governed by:

T Ċ + C = 0    (16)

where T > 0 is the parameter setting the convergence speed toward the manifold. To ensure the stability of this functional equation, the evolution must satisfy Ċ C < 0 for all C ≠ 0. Equation (17) represents the solution of Equation (16):

C(t) = C_0 e^(−t/T)    (17)

so that the macro-variable decays exponentially to the manifold C = 0.

The integral-SC technique is a modification of the SC technique intended to improve its characteristics and robustness: an integral term is added to the classical SC law. In this way, an increase in the efficiency and effectiveness of the strategy is ensured, in particular for improving the characteristics of the DARPC. The control law of the designed ISC strategy can be defined as follows: 24

T_1 Ċ + C + (1/T_2) ∫ C dt = 0    (18)

where T_1 and T_2 are the time coefficients of the derivative and integral parts, respectively.

Figure 2 shows the proposed ISC technique. From this figure it can be seen that the designed ISC is very simple and can be implemented easily compared with other nonlinear controls such as SMC or backstepping control. Moreover, this strategy does not require the mathematical model of the studied system: it is applied directly, without resorting to complex calculations, and only the macro-variables must be defined. 50-55 In this work, the designed ISC technique is applied to improve the quality of the current produced by the generator of a DRWT system and to reduce the ripples of the torque, current, and powers Ps and Qs of the DFIG-DRWT system.

The controller designed in this section is different from the strategy proposed in Habib et al., 29 where the controller is a combination of the SC and SMC techniques; the work done in this paper and the work done in Habib et al. 29 therefore differ in principle, in idea, and in the control methods used. Table 1 presents a comparison between the strategy suggested in this work and the strategy implemented in Habib et al., 29 compiled from the results of this paper and of the work performed in Habib et al. 29
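The manifold dynamics of Equation (18) can be checked numerically. The sketch below integrates the ISC error dynamics for an initial macro-variable offset; the time coefficients are illustrative values, not the paper's tuned gains, and the equation form follows the reconstruction given above.

```python
import numpy as np

def isc_error_dynamics(T1, T2, c0=1.0, dt=1e-4, t_end=0.05):
    """Integrate T1*dC/dt + C + (1/T2)*integral(C) = 0 for an initial
    macro-variable offset c0, returning the trajectory of C."""
    n = int(t_end / dt)
    c, integral = c0, 0.0
    trace = np.empty(n)
    for k in range(n):
        integral += c * dt
        dc = -(c + integral / T2) / T1   # manifold condition solved for dC/dt
        c += dc * dt
        trace[k] = c
    return trace

# Classical SC is the limit T2 -> infinity (pure exponential decay C0*e^(-t/T));
# a finite T2 adds integral action that removes steady-state offsets under
# constant disturbances, at the cost of a second-order error transient.
trace = isc_error_dynamics(T1=2e-3, T2=5e-3)
print("C after 50 ms:", trace[-1])
```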
Proposed DARPC-ISC strategy

The DARPC technique for the variable-speed DFIG-DRWT system with the designed ISC theory applied is shown in Figure 3. In this proposed DARPC, the Ps and Qs are regulated by the ISC controllers. The ISC-DARPC is a nonlinear control that is robust, easy to implement, and algorithmically simple. The suggested technique is a modification of the traditional DARPC, where the ISC controller is used instead of the hysteresis comparators and PWM is used instead of the switching table to control the inverter. The PWM strategy was discussed in detail in Reference 56, where the pros and cons of its use in the field of control were given. In this proposed technique, the estimation of both Ps and Qs is used, as in the traditional strategy; therefore, high-quality voltage and current measuring devices must be used. The suggested DARPC with ISC controllers reduces the Ps and Qs ripples and gives a lower THD of the stator voltage than the conventional DARPC. The proposed ISC-DARPC also reduces the overshoot, the steady-state error (SSE), and the rise time compared to methods such as the DARPC and FOC strategies. Durability is another advantage of the ISC-DARPC: when the parameters of the machine change, the proposed ISC-DARPC technique provides better results than the DARPC. Moreover, the proposed ISC-DARPC technique offers a faster dynamic response than the DARPC strategy.

In this proposed strategy, the MPPT strategy is used to obtain the reference value of Ps from the wind speed; a traditional MPPT based on a PI regulator is used. The reference value of Qs is set to 0 VAR.

The ISC-DARPC designed in this work is a development of the classical DARPC technique aimed at improving the quality of the power and the current generated by the DFIG-DRWT. In the ISC-DARPC, the PWM technique is used to simplify the system, reduce cost, and facilitate control, while the proposed ISC technique replaces the traditional hysteresis comparators. The proposed ISC-DARPC technique preserves simplicity, robustness, and ease of implementation, and it improves the characteristics of the DFIG-DRWT system compared with the DARPC and with some other controls such as the FOC strategy.

In the ISC-DARPC technique, the powers are estimated in order to calculate the errors in Ps and Qs, and these errors are the inputs of the ISC controllers. The equations used for estimating Ps and Qs are the same as those used in Benbouhenni 26 and Lin et al. 27 In Table 2, the proposed ISC-DARPC technique is compared with some of the work done in Benbouhenni, 26 Lin et al., 27 Han and Ma, 28 Habib et al., 29 and Habib and Lemdani. 30 In this proposed strategy, the ISC technique is used to improve the current quality and the Ps/Qs ripples of the DFIG-based DRWT system, and PWM is used to control the DFIG converter; the proposed ISC-DARPC technique is thus different from the techniques proposed in those works. Regarding references Yahdou et al., 31,33 even though the same type of turbine is used there, the strategies used to control the generator inverter in those works are second-order sliding mode control (SOSMC) and the SMC strategy, respectively, which are completely different from the control scheme proposed in this paper.
To estimate the Ps and Qs, Equations (19) and (20) are used, based on the measured stator voltages and currents:

P_s = (3/2)(V_sα I_sα + V_sβ I_sβ)    (19)
Q_s = (3/2)(V_sβ I_sα − V_sα I_sβ)    (20)

The proposed strategy aims to control the Ps and Qs using the ISC strategy. Therefore, two ISC controllers are used, one for Ps and one for Qs; each has a single input and a single output. The input is the error in the Ps or Qs, and the output is the corresponding reference component of the rotor voltage.

ISC-Ps control design

In the DARPC strategy, a PI regulator is usually used in the outer Ps loop to generate the reference V_qr*. In this work, we design a novel nonlinear Ps regulator based on the proposed ISC controller. The Ps regulator generates the reference rotor voltage V_qr*. The macro-variable is chosen as:

C_1 = P_s* − P_s

Then its derivative is given by:

Ċ_1 = Ṗ_s* − Ṗ_s

Using Equation (18), the ISC-Ps controller is obtained by enforcing:

T_1 Ċ_1 + C_1 + (1/T_2) ∫ C_1 dt = 0

The ISC control law must fulfil a Lyapunov condition to guarantee the stability of the Ps controller. We can use the Lyapunov function:

V = (1/2) C_1^2

After differentiation one gets:

V̇ = C_1 Ċ_1

Thus, the inequality of Equation (29), C_1 Ċ_1 < 0, ensures the stability of the closed Ps control loop.

Table 2. A comparative study between the ISC-DARPC strategy and the strategies proposed in Habib and Nicu, 24 Habib and Hamza, 25 Benbouhenni, 26 Lin et al., 27 Han and Ma, 28 and Yahdou et al. 31,33

The graphical representation of the control law of the ISC-Ps controller is shown in Figure 4.

ISC-Qs control design

The Qs controller generates the reference rotor voltage V_dr*. The macro-variable is chosen as:

C_2 = Q_s* − Q_s

Then its derivative is given by:

Ċ_2 = Q̇_s* − Q̇_s

Using Equation (18), the ISC-Qs controller is obtained by enforcing:

T_1 Ċ_2 + C_2 + (1/T_2) ∫ C_2 dt = 0

The structure of the control law of the ISC-Qs regulator is shown in Figure 5. The proposed ISC-DARPC strategy reduces the ripples of the Ps, Qs, torque, and current of the DFIG-DRWT compared with the conventional DARPC technique and with some other methods such as the direct and indirect FOC strategies. Moreover, the proposed ISC-DARPC strategy gives a lower THD value than the conventional DARPC technique and than some published works, and it improves the dynamic response of the Ps and Qs of the generator compared with the DARPC, as the next part of the article shows.

Table 3 presents a comparative study between the proposed and traditional DARPC strategies in terms of the type of controller used, the degree of complexity, the ease and simplicity of implementation, and other criteria. Table 3 was completed using the simulation results obtained in this paper and those reported in other scientific works dedicated to the analysis of the DARPC strategy. Based on this table, it can be said that the designed nonlinear DARPC technique is more efficient and performant than the traditional DARPC technique, although the implementation difficulty is medium for both strategies.

Analysis and results

The numerical results of the ISC-DARPC strategy for the variable-speed DFIG-DRWT system are compared with those of the conventional DARPC with a lookup table. Both strategies were evaluated in several tests.
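As context for the tests that follow, here is a minimal sketch of how the two ISC channels might be wired into the DARPC loop. It assumes the instantaneous-power estimates of Equations (19)-(20); the controller gains, the saturation limit, the sign convention mapping the manifold condition to a rotor-voltage command, and the measurement values are all placeholders rather than the paper's implementation.

```python
import numpy as np

def stator_powers(v_a, v_b, i_a, i_b):
    """Instantaneous active/reactive stator power from (alpha, beta) frame
    voltages and currents -- the estimation role of Eqs. (19)-(20)."""
    ps = 1.5 * (v_a * i_a + v_b * i_b)
    qs = 1.5 * (v_b * i_a - v_a * i_b)
    return ps, qs

class ISCChannel:
    """One ISC channel: macro-variable C = reference - measured power.
    The rotor-voltage command is adjusted at the rate needed to enforce
    T1*dC/dt = -(C + integral(C)/T2); sign and gain are simplified here."""
    def __init__(self, T1, T2, dt, v_max=400.0):
        self.T1, self.T2, self.dt, self.v_max = T1, T2, dt, v_max
        self.integral = 0.0
        self.v = 0.0
    def step(self, c):
        self.integral += c * self.dt
        self.v += self.dt * (c + self.integral / self.T2) / self.T1
        self.v = float(np.clip(self.v, -self.v_max, self.v_max))
        return self.v

dt = 1e-4
ps_loop = ISCChannel(T1=2e-3, T2=5e-3, dt=dt)
qs_loop = ISCChannel(T1=2e-3, T2=5e-3, dt=dt)

# One control step; in a full simulation these would come from the DFIG model.
ps, qs = stator_powers(v_a=310.0, v_b=5.0, i_a=50.0, i_b=-12.0)
v_qr_ref = ps_loop.step((-1.0e6) - ps)  # Ps reference from the MPPT stage
v_dr_ref = qs_loop.step(0.0 - qs)       # Qs reference fixed at 0 VAR
# v_dr_ref / v_qr_ref then feed the two-level PWM modulator.
print(v_dr_ref, v_qr_ref)
```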
First test

Figures 6 and 7 show the THD of the current for the conventional DARPC strategy with a lookup table and for the ISC-DARPC strategy, respectively. It can be observed that the THD is reduced for the ISC-DARPC (THD = 0.24%) compared with the classical DARPC strategy with a lookup table (THD = 0.42%). The reduction is about 42.85% relative to the DARPC, which indicates that the ISC-DARPC is better than the DARPC in terms of the quality of the current produced by the DFIG-DRWT system. In addition, the proposed strategy gave a fundamental (50 Hz) current amplitude equal to that of the conventional control, 1173 A, which is desirable. The simulated waveforms of the measured and reference Ps of the DFIG-DRWT system are shown in Figure 8 to compare the effectiveness of the ISC-DARPC strategy with that of the conventional DARPC strategy with hysteresis controllers. The Ps tracks its reference value (Ps-ref) almost perfectly. The amplitudes of the Ps oscillations are smaller and last for a shorter time with the ISC-DARPC strategy (Figure 9). Moreover, the proposed ISC-DARPC strategy provides a faster dynamic response of the Ps than the DARPC.

For both the proposed ISC-DARPC strategy and the conventional DARPC strategy with a lookup table, the Qs tracks its reference value almost perfectly (Figure 10). Moreover, the ISC-DARPC reduces the Qs ripple compared with the DARPC with a lookup table (Figure 11), again with a better dynamic response than the classical DARPC strategy.

The torque waveforms of both techniques are shown in Figure 12. The amplitude of the torque depends on the value of the load Ps and on the state of the drive system. The ISC-DARPC reduces the torque ripple compared with the conventional DARPC with a lookup table (Figure 13).

The trajectory of the measured current is shown in Figure 14. The amplitude of the current depends on the values of the load Ps/Qs and on the state of the drive system. The conventional DARPC strategy with a lookup table gives more current ripple than the ISC-DARPC technique (Figure 15). The results of this test are summarized in Table 4, a comparative study between the ISC-DARPC strategy and the DARPC strategy in terms of ripple ratios and dynamic response. According to this table, the ISC-DARPC reduces the ripple of Qs, current, torque, and Ps by about 47.05%, 46.42%, 45.65%, and 69.33%, respectively. Moreover, the proposed ISC-DARPC strategy provides a better dynamic response than the classical DARPC technique, which indicates the robustness of the ISC-DARPC strategy, and it reduces the THD of the current by about 42.85% compared with the classical DARPC. In the next test, the robustness of the proposed strategy is verified by changing the machine parameters and comparing with the behavior of the classical strategy.
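The THD figures quoted in this test can, in principle, be reproduced from sampled current waveforms. The following sketch computes THD by FFT on a synthetic 50 Hz current with a small fifth harmonic; the sampling rate, window choice, and harmonic count are arbitrary illustrative settings, and the waveform is not taken from the paper's simulation.

```python
import numpy as np

def thd_percent(signal, fs, f1=50.0, n_harm=40):
    """THD of a sampled current: RMS of harmonics 2..n_harm relative to the
    fundamental at f1 Hz, estimated from a single windowed FFT. Assumes an
    integer number of fundamental cycles in the window."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    def mag(f):                      # amplitude of the bin nearest frequency f
        return spec[np.argmin(np.abs(freqs - f))]
    fund = mag(f1)
    harm = np.sqrt(sum(mag(k * f1) ** 2 for k in range(2, n_harm + 1)))
    return 100.0 * harm / fund

fs = 10_000.0
t = np.arange(0.0, 0.2, 1.0 / fs)    # ten cycles of 50 Hz
i_s = 1173 * np.sin(2 * np.pi * 50 * t) + 3 * np.sin(2 * np.pi * 250 * t)
print(f"THD = {thd_percent(i_s, fs):.2f}%")   # about 0.26% for this toy signal
```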
Robustness test

In this part, the nominal values of R_r and R_s are multiplied by 2, and L_r and L_s are multiplied by 0.5. Simulation results are presented in Figures 16 to 25. As shown by these figures, the parameter variations have an apparent effect on the stator current, torque, Ps, and Qs, and the effect is more significant for the classical DARPC strategy. The results are summarized in Table 6, a comparative study between the designed and the traditional DARPC techniques in terms of the ripple ratios of torque, current, Ps, and Qs. The ISC-DARPC provided good results, as shown by the high ratios: the proposed ISC-DARPC technique minimized the ripples compared with the classical DARPC strategy by 67.08%, 47.99%, 50%, and 65.07% for torque, Qs, current, and Ps, respectively.

Variable-speed wind test

In this test, the MPPT technique is used to obtain the reference value of the Ps, and the reference value of the Qs is set at 0 VAR. This test studies the behavior of the ISC-DARPC under variable wind speed compared with the classical DARPC technique. The wind speed profile used in this test is represented in Figure 26, and the results of the test are shown in Figures 27 to 32. The Ps and Qs are shown in Figures 27 and 28, respectively; in the two figures, the Ps and Qs follow their reference values. The generator torque and current are shown in Figures 29 and 30, respectively. In these figures, the shapes of the torque and the current follow the shape of the Ps: the higher the value of the Ps, the higher the values of the torque and the current, and the lower the value of the Ps, the lower both become, because the torque and the current are related to the value of the Ps. The ISC-DARPC also reduces the ripples of both current and torque compared with the DARPC technique (Figures 31-33). The ripples of the torque, Ps, current, and Qs are shown in Table 7 for both the classical and the proposed ISC-DARPC techniques; the proposed ISC-DARPC technique minimized these ripples compared with the traditional DARPC, with reduction ratios of 42.96%, 53.37%, 61.53%, and 88.42% for torque, Ps, current, and Qs, respectively.

The THD of the current is shown in Figures 34 and 35 for the classical and ISC-DARPC techniques, respectively. It is necessary to compare the proposed DARPC strategy with some published works from around the world in terms of the THD value and the quality of the current fed into the electrical network. Table 8 presents this comparison; from it, we find that the proposed ISC-DARPC gave a lower THD than many of the implemented controls. Accordingly, it can be concluded that the proposed ISC-DARPC is robust compared with these techniques, due to the use of the designed ISC technique.

Table 8. Comparison of current THD values with other strategies.
Reference | Strategy | THD (%)
[59] | DPC | 2.56
Boudjema et al. [60] | Fuzzy SMC | 1.15
Mazen Alhato et al. [14] | Super-twisting SMC | 1.66
Yahdou et al. [31] | SOSMC | 3.13
Amrane and Chaiba [61] | CSVM based on hybrid artificial intelligent control | 1.14
Najib et al. [62] | Two-level DTC | 8.75
Najib et al. [62] | Three-level DTC | 1.57
Quan et al. [63] | Integral SMC | 9.71
Quan et al. [63] | Multi-resonant-based SMC | 3.14
Yaichi et al. [64] | DPC-STA | 1.66
Fayssal et al. [65] | DPC using L-filter | 10.79
Fayssal et al. [65] | DPC using LCL-filter | 4.05
Said et al. [66] | DTC | 7.83
Said et al. [66] | Neural DTC | 3.26
Alhato et al. [67] | 12-sector DPC | 0.40
Ayrira et al. [68] | DTC | 6.70
Ayrira et al. [68] | Fuzzy DTC | 2.40
Sahria et al. [69] | DPC | 8.87
Sahria et al. [69] | DPC-ANN | 2.91
Sahria et al. [69] | DPC-NF | 2.72
Mossa et al. [70] | Predictive torque control | 1.73
Mossa et al. [70] | Predictive polar flux control | 0.74
Wadawa et al. [71] | PI control system | 2.23
Wadawa et al. [71] | Hybrid control system | 1.91
Echiheb et al. [72] | Sliding-backstepping mode control | 0.87

In Table 9, a comparative study is carried out between this work and published scientific works in terms of the reduction ratios of the ripples of current, Ps, torque, and Qs, to establish the efficiency of the proposed strategy in improving power quality. Through this table, the ISC-DARPC shows high reduction ratios compared with several strategies proposed in research published in reputable scientific journals, due to
the use of the proposed ISC controller; we can therefore say that the proposed strategy can be relied upon in controlling electric generators.

Conclusion

In this paper, a DARPC based on the suggested ISC controllers for a DFIG fed by a PWM inverter is designed. A simple integral-synergetic active and reactive powers controller is synthesized instead of the classical hysteresis comparators. The designed technique overcomes the major disadvantages of the traditional DARPC strategy, such as the high oscillations of the reactive and active powers caused by the variable switching frequency. Thanks to the designed ISC-DARPC, the reactive and active power ripples are improved and the THD value of the current is reduced. Moreover, with the introduction of the proposed ISC strategy, the robustness of the reactive and active power controllers against load disturbances is enhanced. The numerical results showed a significant improvement in the dynamic characteristics (torque, reactive, and active powers). The results obtained from this work can be summarized in the following points:

- ISC-DARPC improves the effectiveness of the DARPC of the variable-speed DFIG-DRWT system;
- ISC-DARPC decreases the THD values of the current;
- ISC-DARPC is more robust than the traditional DARPC technique;
- ISC-DARPC minimizes the ripples of active power, torque, reactive power, and current compared with the traditional DARPC technique and with other strategies proposed in the literature.

In future work, the ISC-DARPC technique proposed in this paper will be implemented experimentally on an asynchronous generator in a wind turbine system. The strategy can also be applied to other generators, such as the multi-phase asynchronous generator.

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Figure and table captions: Figure 2, structure of the ISC controller; Figure 4, structure of the ISC-Ps controller; Figure 5, structure of the ISC-Qs controller; Figures 9, 11, 13, and 15, zooms of the Ps, Qs, torque, and current (first test); Figures 22 to 25, zooms of the Ps, Qs, torque, and current (robustness test); Figures 33 to 35, zooms of the Ps, Qs, and torque (variable-speed wind test); Table 1, comparison of the designed strategy and the work done in Habib et al. 29; Table 3, a comparative study between the ISC-DARPC and DARPC techniques; Table 4, comparative results obtained using the designed and traditional DARPC strategies; Table 5, THD values of both strategies; Table 6, comparative ripples obtained using the designed and classical DARPC techniques; Table 7, comparative ripples obtained using the proposed and traditional DARPC strategies.
2023-09-14T15:21:35.064Z
2023-09-11T00:00:00.000
{ "year": 2024, "sha1": "a3d18defc6f28e7104e156278b70f18df2bbd126", "oa_license": "CCBY", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/00202940231195117", "oa_status": "GOLD", "pdf_src": "Sage", "pdf_hash": "6117ba04a554aca93d50e2ad48a84ed001afa84f", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [] }
264788662
pes2o/s2orc
v3-fos-license
Revisiting Inequality and Caste in State and Social Laws: Perspectives of Manu, Phule and Ambedkar

The Constitution, as a formal legal document, reflects a commitment to secure to all citizens Equality, Justice, and Liberty as a non-negotiable duty of the State. The nature and context of present society, however, are embedded in its socio-cultural development through civilisations. This study engages with one such manifestation of state power as revealed in the text Manavdharmashastra, which marked the origin of codified social laws designed to derive legitimacy and to establish a 'divine' authority to rule. The dysfunctions of Manu's social laws subsequently became a subject of interrogation by pioneering social reformers such as Jyotirao Phule and Dr. Bhimrao Ambedkar. Methodologically, our effort is to weave together an intertextual analysis, based on scientific observation of the case of the caste subaltern, of three widely acknowledged texts on ideals of society and governance (Manusmriti, Phule's Slavery (Gulamgiri), and Ambedkar's Annihilation of Caste), in order to present a historical legacy of the origins of social hierarchy as an institutional mechanism for perpetuating inequality among subjects. The aim is to develop an approach to evaluating the ancient political thought of the Manusmriti, and to probe its contradictions and its realism in action, with explicit excerpts of the relevant texts, so as to authenticate the credibility of the facts and their alignment with the central thought. The article eventually attempts to suggest alternatives to secure the vision of an ideal Indian society that aims to disintegrate the institution of caste.

Introduction

Inequality is commonly understood as an unequal distribution of resources and opportunities. However, the underlying cause of inequality lies in 'domination' (Jodhka 2018). Hence, any analysis that attempts to question structures of inequality must necessarily be located within the "particular framework of history, culture and social configuration" (ibid.). In India, contemporary inequality, in particular inequality among specific identity groups, is largely an outcome of historical exclusion and marginalisation, perpetuated through the institution of caste as a critical marker of social stratification. Therefore, interpreting inequality from this perspective demands that the glories and illusions of religious-scriptural traditions be evaluated within the domain of academic research, to reveal their systemic and operational imperfections. This is because the nature and context of present society are embedded in its socio-cultural development through civilisations.

The ancient Indian logic behind the establishment and organisation of the social order involved a conscious effort to institutionalise disadvantage, exclusion, and marginalisation through the Code of Manu, a widely acknowledged work on social laws in India. The text is known for its caste- and patriarchy-based approach to designing a hierarchical categorisation of society, one that codifies conduct and actions (for instance, personal hygiene, the manner of attaining knowledge, diet, marriage, interpersonal relations, and spiritual matters) into a legally bounded system. The authoritativeness of the text may be perceived from the choice of the European conquerors to uphold the Hindu Law Code as a legal sanction, rather than as a spiritual or religious narrative of their colonial subjects.¹
Unlike this law code, the Constitution of India, as a formal legal document, became an embodiment of an accommodative, socially sensitive, inclusive, and aspirational society. It reflects a commitment to secure to all citizens Equality, Justice and Liberty, as a non-negotiable duty of the State.

This article tries to weave together an intertextual analysis, based on scientific observation of the case of the caste subaltern, of three widely acknowledged texts - Laws of Manu (Manavdharmashastra), Jyotiba Phule's Slavery (Gulamgiri), and B.R. Ambedkar's Annihilation of Caste - on ideals of society and governance. These texts are significant as they present a historical legacy of the origins of social hierarchy, its influence on the nature of nineteenth-century colonial India, and the responses through constitutional values. The aim is to develop an approach to evaluate the ancient political thought of Manavdharmashastra, and to probe contradictions and realism in actions, with explicit excerpts of relevant texts, to authenticate the credibility of facts and their alignment with the central thought.

Interpreting Varna, Caste or Jati

The textbook view of caste presents it as an ancient institution based on the ideas of varna, karma and dharma, most explicitly elaborated in the classic Hindu text Manusmriti (Jodhka 2018: 112). While Manusmriti does not explicitly mention the word 'caste', it governed individual conduct and social interactions based on the belief that the organisation of the Hindu social order [2] was divinely ordained through a system of hierarchy institutionalised on the notion of 'purity and pollution'. This was achieved by the mechanism of distinctions based on varna. The varna system divided the Hindus into four mutually exclusive and hierarchically ranked categories. Beyond the four varnas were the atishudra or achhoots (the "untouchables"), 'who by virtue of being classified as the avarnas (those without a varna) occupied the lowliest position in contrast to the savarnas [3] (those with a varna)' (Deshpande 2011: 19). This intergenerational transfer of hierarchy defining one's social standing in the overall structure was inscribed in ritual terms by a codified framework that structured almost every aspect of the social and economic life of people for centuries. The second related element that naturalised the caste order was the karma doctrine. According to it, the present life of a person is a link in the infinite chain of subsequent births and rebirths, and the birth of each in a specific (varna) position is an outcome of their own past deeds. Therefore, the only way to improve the prospects of a better future birth was to adhere to, and perform well, the role considered appropriate for the stratum in which one was born. Finally, with regard to the concept of dharma in ancient India, it must be noted that dharma governed the criteria of human behaviour and social duties, as adherence to it was stated to be beneficial not only for the individual but also for the overall welfare of society at large (Meena 2005: 578-579). In the text Manusmriti, dharma has been conceptualised as a creation of 'divine power' established on the idea of religion and spirituality for the execution of 'right duties' [4] in all aspects of human life. According to this, the only 'attachment' that mankind must have shall be the attachment towards one's dharma, for the text declares that dharma alone guarantees realisation of the divine creator (ibid.: 579).

The 'caste system' - which essentially refers to the indigenous term jati - originally started with the four-fold varna classification mentioned above. However, as is known, the operative category is no longer determined by varna, but by individual jatis. This categorisation of jatis is more commonly based on personal claims of its members regarding their respective varna affiliations. Whatever the contradictions in such narratives, the nature of the caste pyramid has traditionally been standardised to an imagination in which a vast population of "lower castes" occupies the bottommost position. It is equally important to highlight that the caste division between the so-called "high" and "low" is most often indicative of the historically subjugated "untouchable" cluster of jatis, which were together identified as a specific social category in government schedules during the colonial period. These were subsequently referred to as the Scheduled Castes (and similarly, the Scheduled Tribes). It may therefore be interpreted that any investigation into the origins of caste and its subsequent transformations necessitates that the emergence of untouchability be analysed in proper perspective.

Footnote 2: While the work has a standalone focus (and a conventional one) on Hinduism, it is important to remember that all religions (especially in South Asia) have an inherent system of social stratification, similar to the Indian caste system. In this context, Deshpande (2011) gives a brief insight into the manifestation of the system within Buddhism (pp. 22-23).
Footnote 3: Ambedkar identified two major classes of castes based on varna division in society: Savarna and Avarna. Within Savarna, there are two classes - Dvija castes (twice-born high castes) and castes of Shudra status. Similarly, the Avarna has three groups - tribes, nomadic tribes and those belonging to the category of untouchable castes (Bagade 2012: 35).
Footnote 4: According to Chapter VII verse 27 of Manusmriti, a ruler who uses his power to properly protect the caste order will achieve all desires, wealth, and spiritual merit. On the other hand, one who misuses it for personal interests will end up in destruction.
The foundation of untouchability has its roots in the religious-scriptural tradition of Indian society. Ambedkar argued that 'untouchability was an infliction and not a choice', meant to ensure compulsory segregation (Ambedkar 1989 [2014]: 5). In Untouchability and Stratification in Indian Civilisation, Shrirama (2007) presented a historical study of ancient texts to understand the phenomenon of untouchability and the process of its institutionalisation within the system of Hindu social stratification in India. He eloquently demonstrated how successive invasions gradually transformed social status based on racial differences into one based on ritual purity among the Aryan elites and the pre-Aryan settlers. According to him, "…the metaphysical doctrine of karma has provided a powerful rationalisation for inequality based on birth and made it acceptable to the wide masses" (p. 49). To recall, the doctrine of karma, as articulated for the first time in the Upanishads, implies that birth in a certain position is directly linked to one's own past deeds. In order to improve later births, however, it is imperative to adhere to and perform the assigned role of the varna into which the person is born (ibid.: 49). The process running from the establishment of a four-fold hierarchy to the institutionalisation of low status for Shudras, and the subsequent formation of untouchability, can be broadly traced through three significant pieces of textual evidence. To begin with, the Rig Veda (the oldest scripture), with the composition Purushasukta, is the first to mention all four ranks together with their occupations, tracing a mythical origin of each as symbolically related to different parts of the body of the Purusha (Shrirama 2007: 57), or the 'divine creator'. In later Samhitas and Brahmanas, for instance the Taittiriya Samhita and Aitareya Brahmana, the so-called "low" status assigned to Shudras was institutionalised. The relationship of Shudras with the other three varnas was reasserted through the social laws of Manu. In this context, Manusmriti asserts that,

…the dominance of priestly elites and the hierarchy based on varna was to be re-established not only through religious prescriptions but by the full might of the king and the state (through the power of punishment or dand). (ibid.: 72)

It is therefore that the position of the king was instituted to 'preserve' the varna order. The text declared that, "The king has been created (to be) the protector of the castes (varna) and orders, who, all according to their rank, discharge their several duties" (Chapter VII verse 35). Similarly, the occupational division of Vaishya and Shudras was propounded in the verse, "(The king) should order a Vaisya to trade, to lend money, to cultivate the land, or to tend cattle, and a Sudra to serve the twice-born castes" (Chapter VIII verse 410). Manusmriti further provided that 'a Shudra, being unable to find service with the twice-born (a term associated with the three "higher-order" varnas) may engage in mechanical occupations such as handicrafts' (Chapter X verse 99-100) as alternative duties. In any case, however, it was impossible for the Shudra to be entitled to ownership of wealth or property. It is necessary to mention that, though Manu assigns a low position to the Vaishyas and Shudras, this does not mean that he was unaware of their functional utility. In fact, he enjoins the king to ensure that the people of the Vaishya and Shudra varnas continue to perform the work prescribed for them, because if these castes 'swerved from their duties, the world would be thrown into confusion' (Shrirama 2007: 73).

Given the ongoing discussion about the origins of the caste system, it is essential to consolidate the extensive revelations of the most widely known ideological critiques of such stigmatised classification of social identities. Beginning in the nineteenth century, the most noteworthy challenge to the institution of caste as a form of systemic structural inequality was first posed by the social reformer and thinker Jyotirao Govindrao Phule. This became an equally imperative question for Ambedkar, who began to search for possible redressal nearly a century later. With time, Ambedkar became a notable critic of Manusmriti and emphasised the non-interference of the socially codified laws of Manu in the dynamics of state functions, so as to attain a just and equitable social democracy that respected the dignity of all. His association as Chairperson of the Drafting Committee of the Constituent Assembly enabled him to incorporate through consensus his core beliefs and values as an institutionalised mechanism that supported the primacy of law over individual interest or passion. Ambedkar analysed the varna-caste relation by identifying the similarities and differences between the two. According to him,

Varna and caste are identical in their de jure connotation. Both connote status and occupation. Status and occupation are the two concepts which are implied both in the notion of varna as well as in the notion of caste. Varna and caste, however, differ in one important particular. Varna is not hereditary, either in status or occupation. On the other hand, caste implies a system in which status and occupation are hereditary and descend from father to son. (Mungekar 2017: 17-18)

As Ambedkar's inquiry into the origins and growth of the varna-caste system suggests, the evolution of varna into several castes is an evolution in the opposite direction (Bagade 2012: 25). It must be noted that Ambedkar categorically rejected Manu as the originator of the caste system (Ambedkar 1916: 19). Nonetheless, he held that the regimentation of caste identity emerged from the recognition that social status and occupation ought to be governed by the logic of hereditary succession (Mungekar 2017: 18). It is in this respect that Ambedkar contextualised religious sanctions to uphold caste hegemony and the indiscriminate degradation of Shudra and untouchable castes.

As a matter of fact, understanding the term "caste" becomes essential in order to differentiate it from the term "jati". Typically, the belief is that caste translates as jati in English terminology. In the words of Galanter (1984), jati is "an endogamous group bearing a common name and origin, membership in which is hereditary, linked to one or more traditional occupations" (p. 7). That is to say, while 'jati is not visually ascriptive' (Deshpande 2011: 28), an individual may conveniently be placed under a particular jati based on the last name (surname) of the person. Therefore, while varna ranking is visualised as a pan-Indian scheme, and castes are conceptualised as a set of regional and subregional groups, the term "jati" is representative of the local caste hierarchy. While the article acknowledges the conceptual conflicts between varna and caste, it intends to relate both in a rather comprehensive perspective, and recognizes them as objects of individual or group identity that have an influence on inequality, exclusion and marginalisation.
Dharamshastra, Knowledge and the State

The major literary sources for the history of India are broadly categorised under shruti (i.e., the Vedas) and smriti (i.e., the Dharamshastras). The term "shastra" broadly connotes an organised compilation of 'knowledge' - "social, political, economic, religious, ethical, and aesthetic dimensions" (Sinha 2011). It is rightly noted that "the title of the work poses a problem for the readers, because the text is known by two different names, Manusmriti and Manavdharmashastra" (ibid.). [This article uses the terms Manusmriti and Manavdharmashastra interchangeably to refer to the text in question.] Nonetheless, the work is considered a "synthesis of philosophy, religion and law".

Any shastra text is acclaimed as a comprehensive treatise on knowledge. While considerable analysis has been attempted by scholars of the shastra tradition through another ancient classical text, Arthashastra, the manner of social conduct enforced within the domain of statecraft in that text has remained largely unexamined. Its thematic contribution on diplomacy and statecraft often gains primacy over the nature of its cooperative state machinery, which gave importance to the institutional patronage of dvijas for the efficient functioning of the state. According to Chalam (2020: 110), "In the hierarchy of the state, the ministers, who were in general drawn from among the Brahmins, came first and then the purohits enjoyed the highest status…The vaishyas have cooperated with the king in carrying out the internal and external trade. Thus, the Dvijas had the opportunity to run the state in the past and in the present". Nonetheless, it is equally important to mention that later dharamshastra, specifically the Manavdharmashastra, borrowed theoretical concepts such as the idea of saptanga rajya - the state consisting of seven inter-related functions - from the text Arthashastra (Singh 2019). The purpose of including the reference to Arthashastra here is in line with the idea that privilege and domination by virtue of 'acquired knowledge' within systems of the dharamshastra tradition caused the exclusion and marginalisation of some social groups.

It is in this context that the term 'spirituality' needs further analysis. A common impression of the term evokes a sense of communication of the self with an invisible mystical power, embodied through the use and abuse of religion. This understanding of spirituality naturalizes the exercise of 'divine authority' to control individual-social conduct and ritual behavior. When such spirituality is located within the religious-cultural notion of the Hindu social order, what effectively develops is the varnadharma categorization of people, as revealed in the text Manusmriti.

The concept of the State in ancient Indian political thought is a complex theorisation of the institution defined in terms of its basic features, which include, among others, a definite territory and a 'divine monarch' vested with authoritative and coercive capabilities. The existence of this institution, as ascertained by historians and political scientists, reveals that "vedic political organizations were pre-state social formations, and proto-states or states in Indian history first materialized in the post-Vedic period when the primary egalitarian ethos of the tribal society in the mid-Ganga valley gave way to the class-stratified society in which monarchy and aristocratic oligarchy and coercion were needed for the perpetuation of inequalities of property" (Singh 2011: 10). [The Nandas and the Mauryas of Magadh were the first to establish such a large-scale state (Singh 2011).] Considering the differential treatment granted to 'divinely created unequal beings' in Manusmriti, the logic of governance - that is, the authority to make rules to regulate rights and duties, punishments, and rewards - became a natural tendency. Accordingly, this exposition helps us understand how, in order to ensure the continuity of the 'divinely crafted' laws of Manu, political and institutional structures were organized to maintain a stratified social order and perpetuate inequality - of opportunity, resources and human dignity.

It is observed that "all hierarchies - and especially the inequalities of caste, class, patriarchy, etc. - were built on the claims of knowledge (both of the secular and supernatural religious variety)" (Mani 2012). Within this discussion, therefore, it is intriguing to examine the 'knowledge' discourse through the questions of what constitutes 'knowledge', who 'owns' it, and the 'power of knowledge'. The focus here, however, is to interpret the domain of knowledge independent of the Western conception, and in fact within the framework of its Brahmanical textual construction.

Structurally, hegemonic knowledge - its constituents, realization, dissemination and enforcement - was largely restricted to the religious sanctions of dharma. Therefore, knowledge of the dharamshastras became a source of power to establish the intellectual domination of the brahmanas in ancient India, and thereby essentially command spiritual adherence from the remaining varnas. The concept of dharma in ancient India implied that acceptance of dharma became a means to regulate human behaviour, as dharma was stated to be beneficial for the welfare of both the individual and society. In this way, dharma encompassed both individual behaviour and social duties. This view made 'Dharma not only a base for spiritual and moral development but equally a base for stable and regular system' (Meena 2005: 577). Specifically, the context of dharma used in the text Manavdharmashastra is the creation of the 'divine power' established on the idea of religion and spirituality, for "the execution of right duties" in all aspects of human life. According to this notion, the only 'attachment' that mankind must have shall be the 'attachment towards one's dharma', for the text declares that "the accumulation of Dharma" alone guarantees realization of the divine creator or the Supreme God.
It will therefore be interesting to apply the vision of dharma envisaged in Manusmriti to the complexities of state functioning. In this regard, Manu is considered the "first to systematize the science of government and administration" (Sinha 2011: 20) and the text the propounder of the 'Divine Theory of the Origin of the State' (Meena 2005; Sinha 2011). According to it, the king is a divine creation of God to protect all creatures: "For, when these creatures, being without a king, through fear dispersed in all directions, the Lord created a king for the protection of this whole (creation)" (Chapter VII verse 3). Thus it was assumed that "God originated both the Dharma and state power at the same time. Due to this, Dharma made the king responsible towards the God" (Meena 2005), and the text declared that the king is free from accountability towards anybody in the world. However, it must be noted that Manusmriti validates the subservience of the 'divine' king towards the honour of the 'great deity' (Chapter IX verse 319) - the Brahmanas - who, "on account of superiority of his origin", are to be regarded as "the lord of all castes" (Chapter X verse 3).

It is therefore conclusive to state that constraint and coercion as tools of state power to enforce dharma were embedded in the assigned duties of the king. According to Chapter VII verse 27, a ruler who uses his power to properly protect the caste order will achieve all desires, wealth, and spiritual merit; one who misuses it for personal interests will end up in destruction. In other words, "unless dharma upheld caste hierarchy, unless righteousness was bound to caste order, unless justice was one with dandaniti (rule of force)" (Mani 2012), the strength of "Dharma" would become insignificant. Thus 'knowledge', as defined through the ancient textual tradition of dharmashastra, 'blurred the boundary between faith and reason, hierarchy and harmony, and their sole goal being power' (ibid.).

Within this discussion it is important to highlight that, unlike textual sources of knowledge, oral forms of knowledge have traditionally been most closely associated with those commonly known as shudras according to the varnadharma system (Shepherd 2020). This was because, during the pre-colonial era, the shudras were denied access to ancient learning. In the present age, perhaps, this has gradually transformed into a means of creative expression of their consciousness. In fact, in the anti-caste discourse, the use of new-age mediums of modern forms of entertainment has emerged as a widely popular mode of assertion - the phase of what is referred to as 'dalit cultural resistance' to caste subjugation and humiliation.

Indeed, "brahmanic control over knowledge" remained the prerogative of those socially dominant within the caste structure, and "brahmanical forms of knowledge were critical in the establishment and maintenance of caste" (Mani 2015). Thus, knowledge and education in the context of ancient learning implied strategies of domination and exploitation, rather than an individual's liberation through reason and upward mobility. In this connection, John Fiske's observation on knowledge is important. According to him, "knowledge is power, and the circulation of knowledge is part of the social distribution of power" (Apple 2000: 179).
Caste and Colonialism: Continuity and Change from Manavdharmashastra

Theories and documentation linking Brahmanism and colonialism suggest that "caste is a colonial construction: almost a fabrication of the Population Surveys and Census Reports" (Mani 2015). In this regard, a brief understanding of the history of colonial administration in India is worth revisiting. The Regulating Act, 1773 was a landmark legislation that introduced a new administrative machinery for the British East India Company (EIC), transforming it from a hitherto exclusively commercial entity into one that governed the land and its people. The necessity of a comprehensive and uniform governance structure under the 1773 Act thus created the foundation of central administration in India. This meant that colonial rule relied heavily on the textual prescriptions of both religious denominations, i.e., Hindus and Muslims - prescriptions that overpowered local realities. Attempts to translate indigenous Hindu texts, including Manavdharmashastra, therefore became foundational for the evolution of the colonial judicial system governing the Hindu population.

Another aspect of the advent of colonialism on the institution of caste in India was the emergence of caste-based enumeration through the conduct of an official census in the nineteenth century. The Census was a direct survey of the population, instead of surmising or using textual references (Samarendra 2011). This implied that an individual questionnaire-based survey determined the presence of varna hierarchy, instead of the interpretation of Sanskrit texts. While such assessment had no uniformity in the method adopted, the purpose of the census was to count the population and classify it according to age, sex, religion, caste, and occupation, among other categories. Such an enumeration exercise "started from census of the North-Western Provinces in 1865, and it continued to be a prominent part of the colonial census till 1931" (ibid.). It was gradually realised that the empirical caste census faced contradictions between text and practice. The varna-based classification failed to adequately represent the entire population of Indian society. Thus, probably for the first time, 'the state' (even if it was colonial) 'questioned the credibility of the propagator of this model - Manu' (ibid.). While the so-called 'outcastes', or those outside the varna scheme, did find mention in the caste census, the definite criteria for their identification were explicitly mentioned only in the 1931 Census. The criteria to define such groups were determined by the degree of social restrictions and discrimination applicable to them - for instance, their inability to be served by barbers or tailors, to enter Hindu temples, and to use public resources such as roads, wells or schools (Singh 1997). It was, in fact, this idea of untouchability that restricted them from using or accessing natural and public resources (Bagade 2012: 33). Thus, colonial rule institutionalised the categories of caste-based division through the conduct of the official census.

It is in this perspective that social reformers such as Jyotiba Phule and B.R. Ambedkar viewed the inability of colonial rule to correctly recognise the plight of the bottommost section of the population. This, according to them, was attributable to its use of "Brahmin spectacles" (Phule 1873) to position people within the Indian social structure. Phule was convinced that the advent of British rule in India largely freed "the Shudras from the physical (bodily) thraldom (slavery)" (ibid.: 27). Nonetheless, he acknowledged the inadequacy of the British government in initiating equitable distribution of welfare to the masses, especially its neglect of primary education, which he believed to be critical for emancipation from the "mental slavery" (ibid.) of the downtrodden. Thus, Phule's attitude towards the colonial government was as hostile as it was towards what he referred to as the 'Bhats' (Brahmins). On the other hand, Ambedkar, in his struggle against caste and untouchability, sought to awaken the identity of this social category for 'self-respect and self-esteem' (Bagade 2012: 35). He asserted that, "We must have a government in which men in power, knowing where obedience will end and resistance will begin, will not be afraid to amend the social and economic code of life" (Dr Babasaheb Ambedkar, Writings and Speeches, Vol. I: 505).

Religion, Social Laws, and the State: Locating Self, Family and the Social

The relationship between spirituality and religion has already been substantiated within this article. Religion is ordinarily perceived as a 'way of life', often accommodated in everyday majority-minority political binaries. However, such a commonsensical understanding tends to overlook the influence of the social construct on the economic and political dimensions of equality. In order to trace the ancient logical interconnectedness between religion, the social laws of Manu, and the State, it is then pertinent to approach the issue from the axis of what is broadly referred to as 'governance' within the state-society relationship. In this regard, the interplay of governance dynamics in the ancient state essentially implied an arrangement that shaped interactions among institutions of power and determined individuals' choices, with an impact on both individual and collective action. The article identifies three forms of governance - the individual or self, social relations, and kingship and administration. These are understood as interrelations and interactions of individuals between and within varnas, essentially dvijas and shudras as two distinct units. While the first two are dealt with in this section, the third pillar of kingship and administration has already been elaborated in the previous sections. For a focused analysis, the article looks at the question of caste-based marginalization from the perspective of occupation (livelihood) and gender (family and household). The reason for this lies in the theory of varna-sankara or mixed varnas: according to the interpretation of the text Manusmriti, apart from the three dvijas - Brahmana, Kshatriya, Vaisya - and Shudra, "there is no fifth varna" (Chapter X verse 4). While it recognizes varna-sankara, the two critical aspects necessary for the maintenance of social identity and the avoidance of varna-sankara are the conduct of 'legitimate marriage' and prescribed occupational duties.
Caste, Occupation and Livelihood

Rules for the occupational division of each varna and the economic organization of labour were an important aspect of social identity in the Manavdharmashastra. Accordingly, the text prescribed 'Brahmana to teach the Veda, Kshatriya to protect people, and Vaisya to trade' as their most 'commendable occupation' (Chapter X verse 80). It acknowledged that the hierarchies created by the relationship between varna and occupation persisted even in times of distress, when one is compelled to forgo one's assigned means of subsistence based on varna. In such situations, however, it promotes what is referred to as 'downward occupational mobility': each preceding varna may perform an occupation of the succeeding varna, but can never adopt the mode of life of its preceding varna. This rule was uniformly applicable to all varnas.

The identification of one's caste based on the hereditary nature of occupation has been a unique feature of the division of labour in India.

"Division of labour as elaborated by Adam Smith and explained by Marx is a practice where the process of production is divided into different stages, like 18 sequences of pin making, and each process is perfected by one. This raises productivity. But in India, each occupation is held by a caste and the finished product is produced by the family or caste by following all the processes of caste occupation" (Chalam 2020: 13).

Socially marginalized groups, including the Scheduled Castes (SCs) and certain artisan castes, have historically been characterized as involved in specific occupations that keep their labour supply restricted to those occupations. The social history of India reveals that such 'assignment of work based on specific varna' can be located in the text Manusmriti. It is to be noted here that the 'process of production' is as important as the 'end-product' produced by such castes. For instance, in Manusmriti (Chapter X verse 99; Chapter X verse 100) knowledge and occupation of crafts ("mechanical work") has been assigned to Shudras.

It is often said that the past does not remain in the past; its legacy continues to influence contemporary notions of the skills and acumen attached to individuals. For example, the processing of raw leather and the manufacture of specific footwear, as two distinct occupations, are included as the consolidated work of SCs. Indeed, the lack of occupational mobility failed to improve their income, livelihood, and wellbeing (Chalam 2020). Ambedkar rightly noted that, "As an economic organization Caste is therefore a harmful institution, inasmuch as it involves the subordination of man's natural powers and inclinations to the exigencies of social rules" (Ambedkar 1936: 37). This is where Marx's emphasis on the "unchangeableness of Asiatic Societies" is to be understood in the context of the socio-economic character of labour in India. It is interesting that K.S. Chalam (2020), in his book Political Economy of Caste in India, attempts to formulate what he calls the 'Caste Mode of Production' (CMOP) as part of Marx's Asiatic Mode of Production, "as an analytical tool to understand the Indian situation".
In his work Annihilation of Caste, Ambedkar relates caste to limits on the occupational mobility of individuals: "…that Caste System is not merely a division of labour. It is also a division of labourers" (Ambedkar 1936: 36). According to him, this "division of labourers" was based on a Hindu social structure characterized by hierarchy, rigidity, and a notion of individual efficiency and competency that depended on one's caste. Contemporary empirical findings from studies on the interlinkages between caste and occupation - on the food and beverages business, and on the dominance of a particular caste among sanitation workers - in a way reinforce these arguments. To elaborate, a 2013 research paper by Ashwini Deshpande and Smriti Sharma at the Delhi School of Economics 'used data from the third and fourth rounds of the Indian Micro, Small and Medium Enterprises Survey to show that the share of SC-owned firms in the food and beverages category was much lower than the national average and the average for other social groups. The authors also found that SCs had a disproportionate ownership of leather-related industries'. Thus, historically, while 'caste divisions took place on the basis of occupations, within one occupational caste group divisions of sub-castes took place on the basis of what kind of labour/service/products were provided or what technique of production was employed by particular groups/people' (Bagade 2012: 30).

A paper published in 2021 categorically shows the existence of a peculiar occupational pattern among 'urban regular salaried workers aged between 15 and 65 years', using data from the 61st round of the National Sample Survey (NSS) on Employment and Unemployment corresponding to the year 2004-05, and the Periodic Labour Force Survey (PLFS) 2017-18. Its findings reveal "that in 2017-18 SC (Scheduled Caste) workers' share in the middle- and low-level occupations was high (70.56%) compared with the HC (High Caste) (47.23%). The share was particularly high in elementary occupations, followed by service workers, shop and market sale workers, craft and related trade workers, plant and machinery operators, and assemblers. Conversely, the SC share in better quality occupations was low (29.43%) compared to the HC (52.77%). The better-quality occupations include legislators, senior officials and managers, professionals, technicians and associated professionals, and clerks" (Thorat et al. 2021). According to the paper, inter-caste differences are equally significant in terms of employment rates and wage earnings, both in the public and the private sectors. It is worth mentioning that not only are unemployment rates among SCs high, but "discrimination in the probability of access to employment is much higher in the private sector compared to the public sector" (ibid.). This is intriguing and offers a possible evaluation metric in policy-making, especially in the context of the discourse on privatisation as a mechanism for economic restructuring, growth and development.
This not only points to the logical relatedness of these patterns to caste prejudices practiced in regular salaried employment, but also demonstrates that differences both in endowment factors (such as education, professional skills, work experience, and others) and in the discrimination faced in employment and wage rates in the labour market compel greater representation of SCs in low-earning occupations in the informal sector (ibid.). The paper further indicates other factors that influence occupational attainment, such as childhood influences, personal characteristics, and latent discrimination, which constrain occupational choice or entry among SCs. Further, on the implications of caste disparities in the labour market, it suggests that legal and policy measures are necessary to ensure adequate representation of SCs in the workforce to make it more inclusive and non-discriminatory. Diversity and inclusion, including perspectives on intersectionality within caste and gender, are not mere rhetoric; they form part of the Constitutional framework and values built in response to structural inequality as an outcome of the stratification of identities based on caste.

Caste, Gender and Household

The law of marriage, as emphasised in the text Manusmriti, marks the beginning of Grihastha Ashram - the order of life to be followed by a householder. According to Manusmriti, a Shudra can only marry a Shudra woman; a Vaishya can marry from either of the two; a Kshatriya can marry a woman from his caste or any woman from a caste below him; while a Brahmin is eligible to marry a woman from any of the four castes (Chapter III verse 13). Within this scheme, Ambedkar observed that "low-caste women were made sexually accessible to the high-caste men" (Bagade 2012: 27), as his observations on caste and gender were made across the broad spectrum of the caste hierarchy.

While the text delves deep into an elaborate classification of marriages, a careful observation indicates that statements about marriage-related ritual ceremonies remain absent. Moreover, the interchangeable use of the words 'women' and 'wife' is significant enough to point towards the role of women as restricted to ideal 'wives' alone. It is essential to note that notions of the "ideal woman" within the text are representative exclusively of dvija women (wives). It declares that marital association with a shudra woman causes the loss of one's varnadharma (Chapter III verse 14-19).

Aspects related to household and family - roles within caste and social relations - represent an intrinsic predominance of a 'patriarchal authority governing social relations between men and women'. It is worth mentioning that protection of women (wives) is considered the "highest duty of all castes" (Chapter IX verse 6), as it is stated that "a woman is never fit for independence" (Chapter IX verse 3). This indicates an implicit presupposition according to which a woman (wife) may be a source of dishonour or ruin to herself and her family - caused by separation from her husband, disloyalty, or even drinking liquor - but the only way she can bring honour is by duly performing the duties of a 'virtuous wife' (Chapter IX verse 27). Relatedly, the idea of "honour" within the text rests on two aspects: one where it is overtly associated with women's chastity, and the other where such reverence of women is presumed necessary for the welfare of the family, explicitly her male relations, i.e. "father, brothers, husbands, and brothers-in-law" (Chapter III verse 55-57).
The language of the text Manusmriti views women as unworthy of respectful social dealings. It therefore employs the logic of domination and subordination to establish control over women - physical, social, and psychological - and to ensure their perpetual patriarchal dependency. This dependency extends to the economic dimension, as the text considered women ineligible for 'ownership of wealth or property' (Chapter VIII verse 416). The only recognized inheritance right of women pertains to stri-dhana (Chapter IX verse 194) - a woman's sole possession for life.

Clearly, the rules concerning women not only legitimise their subjugation, but also deprive them of access to knowledge and restrict their self-determining, autonomous social position. In particular, the concepts of gender, family and household are interwoven around ideals of "womanhood". Manusmriti was probably among the first in the series of ancient texts to have introduced certain degenerative practices, whose legacy compelled the Indian legal and constitutional machinery to introduce and shape several social policies of the state, especially those directed towards gender equality and women's empowerment. To illustrate, such practices include child marriage (the 'child' is a girl according to Chapter IX verse 94), forbidding widow remarriage (Chapter IX verse 65), legalizing dowry (Chapter IX verse 194), restricting women's mobility to household work (Chapter IX verse 11), denying women the liberty to marry someone of their choice (Chapter IX verse 92), limiting their share in the father's property (Chapter IX verse 127), and prizing the male offspring or "putra" (Chapter IX verse 137). Interestingly, the common perception towards specially-abled persons is also largely a contribution of Manusmriti, which relates bodily formations to sinful activities, with the degree of sin committed determining the level of change in physical appearance or mental abilities (Chapter XI verse 53).

The above analysis therefore suggests that the sanctity of regressive attitudes on questions of identity around caste or gender is intricately linked to what Ambedkar referred to as the 'rules of religion' (Ambedkar 1936), as warranted by the social laws of Manavdharmashastra. He was convinced that religious reform meant that religion itself should be grounded on doctrinal values of cooperation, dignity and the worth of all, encouraging free and just opportunity for all to participate, and consciously discarding segregation, prejudices, and privileges.

One's religiosity must necessarily be divorced from the indoctrination of mind and heart that may extend to superstition and bigotry (Phule 1873: 20). This necessitates a conscious deconstruction of mythology, traditions, and beliefs shaped by "the code of cruel and inhuman laws" (Phule 1873) - a methodological innovation initiated by the reformer Phule (Bagade 2012) in nineteenth-century India. In one of his most powerful writings, Gulamgiri (Slavery), published in Marathi in June 1873, Phule visualised societal divisions as a continuum of two extremes: one whose existence was defined by perpetual poverty, exploitation, and ignorance, and by the provision of material support to all other groups above them; the other constituted by those literate castes who, through their inherited authority from religious-scriptural traditions and subsequent privileges, monopolised the benefits of English education and clerical or professional employment in the British administration. Dr. Y.D. Phadke, an eminent scholar on Phule, noted that in his book Sarvajanik Satya Dharma Pustak, Jyotiba Phule warned against "persistent demands for Indianisation of the administrative services", for he was convinced that, "if accepted, (it) would lead to Brahmanisation of the services in India" (Phule 1873: 15).

The interconnections between Phule and Ambedkar on the idea of religion are well elaborated by the eminent scholar Gail Omvedt in an excerpt from her book Buddhism in India: Challenging Brahmanism and Caste. She noted that for both Phule and Ambedkar, 'Hinduism' in its present form was not a true religion, and that finding a true religion implied freeing the masses from Brahmanic slavery. Just as Ambedkar's final and major book was to be The Buddha and His Dhamma, so the concluding written work of Phule's life also focused on religion - The Sarvajanik Satya Dharma Pustak, published just after his death. In it, he gave a savage critique of the Vedas and the Ramayana and Mahabharata stories, and undertook the effort to formulate a religious alternative: a true religion as universal; founded on reason and truth and the rejection of superstition; anti-ritualistic; ethical; equalitarian, not recognizing caste or ethnic differences, and especially admitting the equality of women (Omvedt 2012).

Towards Annihilation of Caste: Is Hatred more Powerful than Solidarity?

The "system of priestcraft" (Phule 1873: 18) so established to entrench the institution of caste not only meant an unequal order; it also perpetuated a 'psychological hatred' emanating from the unjust social order, and "the commonest rights of humanity were denied (to) the shudra-atishudras" (Phule 1873). According to Phule, "it was difficult to create a sense of nationality so long as the restriction on dining and marriage outside one's caste was observed by people belonging to different castes" (Phule 1873: 15). His efforts culminated in the formation of the Satyashodak Samaj (Truth Seekers' Society) in September 1873 in Maharashtra. The organisation was a 'non-Brahmanical alternative to the then existing social reform organisations' (Harad 2021), founded on Phule's own ideological framework that aimed to 'deconstruct the hegemony of enslavement'. To this day, 'Satyashodak weddings resist Brahmanical rituals': both the bride and bridegroom 'write their own vows', which they recite in front of guests on the wedding day (Harad 2021).
For Ambedkar, it was the caste apparatus that prevented Hindus from forming a real society or nation - a thought that echoed Phule's idea. He believed that it is not society as such that segregates individuals and impedes collective cohesion; rather, it is caste consciousness, asserted to uphold notions of hierarchical superiority and purity, that prevents solidarity among Hindus. He argued that,

…inter-dining and inter-marriage are repugnant to the beliefs and dogmas which the Hindus regard as sacred. Caste is not a physical object like a wall of bricks or a line of barbed wire which prevents the Hindus from co-mingling and which has, therefore, to be pulled down. Caste is a notion, it is a state of the mind. The destruction of Caste does not therefore mean the destruction of a physical barrier. It means a notional change. (Ambedkar 1936: 64)

Any reform is a conscious attempt to initiate institutional or behavioural transformation. In Annihilation of Caste, Ambedkar declared that caste (the social aspect) prevented all reform (the economic and political aspects), whether based on individual assertion or group authority. This may be contextualized in the contemporary period, with everyday cases of caste violence and atrocities. While Manusmriti acknowledges the 'act of violence' as the 'worst offence', it introduced certain 'rules concerning self-defence of twice-born men' that legalized violence committed by them if they are obstructed in the "performance of their duties". This, according to the text, is neither a sin, nor does it make guilty those who commit such an act (Chapter VIII verse 348-351). The commonality among recent incidents lies in the 'magnitude of the alleged crime' committed, ranging from eating in front of upper-caste men, to owning and riding a horse, to wearing a pair of royal footwear generally worn by upper-caste members - acts viewed as resistance to caste norms and a sign of the reversal of domination. The dynamics of such violence within urban spaces, and among the emerging nascent middle class who have benefitted from affirmative action (Chakravartty and Subramaniam 2021), are manifested differently. Instances of spatial segregation and physical violence there are either largely hidden or numerically low, compared to subtle yet powerful forms of social ostracism, discrimination, and humiliation. International media reports in 2021 on the technology conglomerate Cisco Systems Inc. exposed the realities of caste inequalities even in a liberal society such as the United States. In the case, a Dalit engineer alleged that he was outed as a beneficiary of Indian affirmative action and that, on complaining to the relevant authorities within the company, he faced retaliation through the denial of opportunities for advancement. The Cisco case is another addition to the already existing literature of such cases in India. There exists abundant scholarly work reflecting a peculiar pattern of caste discrimination that equates merit with one's caste identity. Caste-based affirmative action, intended to widen opportunities for such communities to explore their capabilities through education, has exposed them to continuing realities of fierce opposition, stigmatisation of their worth, and social alienation. What exist, then, are 'victims of caste-oriented psychological hatred'. The evaluation thus suggests that traditionally asymmetrical power relations and social capital based on caste identities are primarily responsible for the reproduction and revival of ideological faith in a hierarchical social system that supports a superiority-inferiority structure.
Ambedkar's political approach to social reform was based on Constitutional safeguards that acknowledged the indifference towards, and neglect of, certain sections. His ideal of a caste-less society reflected his emphasis on equality, liberty and fraternity. For Ambedkar, to treat individuals unequally on the basis of their 'effort' required that they be treated equally so far as birth, family name, education, parental care, and inherited wealth are concerned (Ambedkar 1936). Liberty for him meant the full utilization of people's capabilities without enforcing control on their choices. However, the idea truly directed towards caste annihilation was his conception of fraternity. For him, fraternity implied that,

There should be varied and free points of contact with other modes of association. In other words, there must be social endosmosis. This is fraternity, which is only another name for democracy. Democracy is not merely a form of Government. It is primarily a mode of associated living, of conjoint communicated experience. It is essentially an attitude of respect and reverence towards fellowmen. (Ambedkar 1936: 49)

He believed that unless notions of 'collective honour' are transformed into 'honour of individual dignity', irrespective of caste identities, it would be difficult to emulate the three core values in practice. This was because, according to him, the effectiveness of assertion - of belief, independence, and interest - depended on tolerance and an unprejudiced acceptance of assertion. If anything withheld such acceptance, it was the sacred nature accorded to religious sanctions 'that punished dissenters with excommunication' (Ambedkar 1936: 48), for he argued that "religious was social and religious was sacred" (Bagade 2012). Thus, "without using any force individuals are socialized by caste system and subjugated in the world of caste habits" - a form Ambedkar identified as the "psycho-social regimentation of caste" (Bagade 2012: 22). Thus,

…it must be recognized that the Hindus observe Caste not because they are inhuman or wrong-headed. They observe Caste because they are deeply religious. People are not wrong in observing Caste. In my view, what is wrong is their religion, which has inculcated this notion of Caste. If this is correct, then obviously the enemy, you must grapple with is not the people who observe Caste, but the Shastras which teach them this religion of Caste. (Ambedkar 1936: 64)

A pertinent issue, therefore, is to question whether conformity to constitutional principles enforced through law alone can be a real mechanism for the emancipation of the socially disadvantaged. The discussion on anti-discrimination law indicates that progressive legislation constitutes an important part of the effort to address problems of inequality and social prejudice. However, as Edmund Burke, the Irish social philosopher, observed, "law can punish a single solitary recalcitrant criminal. It can never operate against a whole body of people who are determined to defy it. Social conscience is the only safeguard of rights. If social conscience is such that it recognizes the rights which the law chooses to enact, the rights will be safe and secure." Further, while the priorities of modern governments regarding equality and liberty can be addressed to some extent through social policies, the fact is that fraternity can neither be legislated nor cultivated within a policy framework. It then becomes imperative to explore alternatives for disintegrating the ideology of caste and the way it governs within state and society. The article identifies three such alternatives. First, the observation that caste identities tend to mobilize the masses politically implicitly assumes that political participation can be a mechanism to counter dominant traditions of caste-based inequality; this, however, must be premised on concerted action by what Phule called "a united collective of the oppressed to counter social forces of caste Hindus" (Phule 1873). Second, a rediscovery of the institutional foundations of religion: Ambedkar was convinced that religion must be grounded on doctrinal values of cooperation, dignity and the worth of all, and must encourage free and just opportunity of participation to every being; as systems of belief, religions must consciously discard inequality, segregation, and prejudices. Third, a greater role for pedagogy in education, for the moral empowerment of young minds - a teaching-learning methodology that demonstrates virtues of equality, fraternity, and justice, among others, as noble qualities worthy of conscious nurturing. Broadly, this involves an assimilation of sociological and psychological approaches to develop their consciousness and humane sensibilities, denying any violation of individual dignity based on complex socio-religious norms.

Conclusion

Civilizations evolve through efforts to change. This holds true despite continuous and rather regressive resistance. The text Manavdharmashastra is a unique combination of society and law - a contrast to the democratic ideals of equality, liberty, fraternity, and justice. As representative of the State's divine power, it established the Hindu social order that marked the origin of the use and abuse of codified social laws to derive legitimacy and perpetuate inequality among subjects. It demonstrated how a traditionally unequal distribution of rights, privileges, and dignity manifests itself in the contemporary age as varied forms of inequality - social, economic, and political. The dysfunctions of Manu's social laws subsequently became a subject of interrogation by social reformers like Jyotirao Phule and Dr. Bhimrao Ambedkar. Phule believed that the marginalised experience of injustice, deprivation and humiliation transcended everyday episodic social and political life, and was a feature of structural hierarchy based on 'superiority of birth' and differential access to opportunity and resources. This validates existing research showing that such asymmetries in opportunity created a foundational impact on access to education, health, and employment, gradually widening both material and moral degradation, which together have constituted a significant marker of policy intervention since Independence. The nineteenth-century challenge to the institution of caste as a form of systemic structural inequality posed by Phule became an equally imperative question that Ambedkar sought to address a century later. Ambedkar too became a notable critic of Manusmriti and emphasised the non-interference of the socially codified laws of Manu in the dynamics of state functions, to attain a just and equitable social democracy that respected the dignity of all.
A key theme throughout the article has been to highlight that the purpose of power is not only to demand social control and subordination and to restrain immoral conduct, but also to introduce and nurture social change and transformation, through ethical and political values in policy and practice that rest on a larger understanding of the inherent societal structure.
Overexpression of the GmGAL2 Gene Accelerates Flowering in Arabidopsis

A soybean MADS box gene, GmGAL2 (Glycine max AGAMOUS Like 2), a homolog of AGL11/STK, was investigated in transgenic Arabidopsis lines. Ectopic expression of GmGAL2 in Arabidopsis enhanced flowering, under both long-day and short-day conditions, by promoting expression of the key flowering genes CONSTANS (CO) and FLOWERING LOCUS T (FT) and lowering expression of the floral inhibitor FLOWERING LOCUS C (FLC). Moreover, the frequency of silique pod set was lower in transgenic than in control Arabidopsis plants. RT-PCR results revealed that GmGAL2 was expressed primarily in the flowers and pods of soybean plants, and that GmGAL2 was expressed at higher levels under short-day than under long-day conditions in soybean.

Introduction

MADS box genes are found in both plants and animals and encode transcription factors that contain a highly conserved DNA-binding domain, which can regulate reproductive and vegetative development (Alvarez-Buylla et al. 2000). In Arabidopsis thaliana, the MADS box gene family comprises more than 100 members, which can be grouped into five subfamilies, namely MIKC, Mα, Mβ, Mγ, and Mδ (Parenicova et al. 2003). The K box protein-protein interaction domain in MADS box proteins mediates the heterodimerization of MIKC-type MADS proteins that is necessary for the proteins' function. MADS box genes control the identity of the apex meristem and floral organs, and the development of shoot, leaf, root, flower, and fruit (De Bodt et al. 2003; Foo et al. 2006; Irish 2003; Kater et al. 2006; Messenguy and Dubois 2003; Rijpkema et al. 2007; Robles and Pelaz 2005; Saedler and Huijser 1993). In addition, MADS box proteins also regulate flowering time. For example, SUPPRESSOR OF OVEREXPRESSION OF CONSTANS 1 (SOC1) is an integrator of different flowering pathways and can promote flowering (Kim et al. 2001; Lee et al. 2004; Moon et al. 2003; Samach et al. 2000; Sheldon et al. 1999), whereas FLOWERING LOCUS C (FLC) acts as a repressor of flowering (Hepworth et al. 2002; Poduska et al. 2003; Rouse et al. 2002; Swarup et al. 1999). Some MADS box proteins have multiple functions. AP1, AGL24, and SVP act redundantly to control the identity of the floral meristem and to repress expression of class B, C, and E genes (Gregis et al. 2009). A MADS box gene, AGL12, regulating root development and flowering has also been identified (Tapia-Lopez et al. 2008). Seedstick/Agamous-like 11 (STK/AGL11) is a key gene that controls ovule identity in Arabidopsis. In situ hybridizations revealed that AGL11 RNA accumulated only in developing ovules and associated placental tissues; no AGL11 RNA was detected in other floral organs during earlier or later stages of flower development (Rounsley et al. 1995). Ectopic expression of the STK/AGL11 gene was sufficient to induce the transformation of sepals into carpeloid organs bearing ovules (Favaro et al. 2006; Pinyopich et al. 2003); it also resulted in the presence of curved rosette leaves and bracts, and the conversion of sepals into carpeloid organs that could develop mature ovules (Favaro et al. 2006). STK sequences are highly conserved among dicots and monocots; however, their functions are not always the same (Colombo et al. 1997; Lopez-Dee et al. 1999; Skipper et al. 2006). FBP11, an AGL11 homolog in petunia, regulates ovule development; ectopic expression of FBP11 induced the formation of ovules on the sepals and petals in petunia (Colombo et al. 1995, 1997).
Ectopic expression of LlMADS2, an AGL11 homolog from lily (Lilium longiflorum), caused the conversion of sepals and petals to carpel- and stamen-like structures in transgenic Arabidopsis plants (Tzeng et al. 2003). Interestingly, heterologous ectopic expression of AGL11 homologs in Arabidopsis could not induce ectopic ovule formation. Ectopic expression of OsMADS13, an AGL11 homolog from rice, failed to induce ectopic ovule formation in Arabidopsis, as did FBP11 (Favaro et al. 2006). These results indicate that AGL11 might function in a species-dependent mode. AGL11 also plays a role in flowering regulation. Overexpression of LlMADS2 caused early flowering in lily (Tzeng et al. 2003). Such pleiotropic phenotypes are widely present in the plant kingdom. The blue light receptor Cryptochrome 2 (CRY2) has been shown to be a regulator of flowering and hypocotyl growth (El-Din El-Assal et al. 2003; Guo et al. 1998) that also affects fruit length, ovule number per fruit, and the percentage of unfertilized ovules (Guo et al. 1998).

The time to flower affects soybean yield. As a short-day plant, soybean flowers in response to day length, which makes it an important model plant for photoperiod research (Zhang et al. 2008). We cloned and analyzed several MADS box genes from soybean by Rapid Amplification of cDNA Ends (RACE). One of these soybean MADS box genes, GmGAL2, is a homolog of AGL11/STK. We found that soybean GAL2 affects flowering time in Arabidopsis when overexpressed.

RNA Preparation and Gene Cloning

For gene cloning, RNA was prepared from pooled shoot apical meristems of soybean at different stages (unifoliate, first trifoliate, second trifoliate, and third trifoliate) harvested under short-day conditions (8 h/16 h, light/dark). For GAL2 expression analysis, different soybean organs were sampled at different stages under short-day conditions (8 h/16 h, light/dark), for example when each new leaf (unifoliate or trifoliate) had expanded fully. Ten-day-old Arabidopsis seedlings grown in long-day conditions (16 h/8 h, light/dark; light provided from 8:00 am to 24:00 every day) were harvested at 11:00 am. Soybean flowers and immature pods were sampled under long-day conditions (16 h/8 h, light/dark; light provided from 8:00 am to 24:00 every day) and short-day conditions (8 h/16 h, light/dark; light provided from 8:00 am to 16:00 every day). RNA was prepared with Trizol (Invitrogen) and reverse transcribed to cDNA with M-MLV RT (Fermentas).

We detected a set of ESTs that showed high sequence similarity to AGL11 (The Gene Index Project, http://compbio.dfci.harvard.edu/tgi/). Among them, AW705451 covered the 5′ terminal sequence of a candidate gene. Based on the sequence of AW705451, specific primers for RACE of the full-length candidate gene were designed: forward primer GAL2-F1 and reverse primer GAL2-R1 (Table 1). The reverse primer contained part of the adapter primer that was employed as the primer for reverse transcription, leading to higher efficiency. The resultant clones were sequenced to confirm their sequences.

RT-PCR and Quantitative PCR

RT-PCR was performed with the primers shown in Table 1. Quantitative PCR was carried out on an ABI StepOne instrument according to the manufacturer's instructions (ABI). Total cDNA (100 ng per reaction) was used as the template for RT-PCR. PCR products were analyzed by 1.2% agarose gel electrophoresis. All experiments were performed in triplicate. The expression level of the GmUBQ gene was used as an internal control to normalize and calculate the relative expression levels of the genes tested, using ImageJ software (http://rsb.info.nih.gov/ij/).
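As a rough illustration of this normalization step, the sketch below computes relative expression from ImageJ-style band intensities; the sample names and all numbers are hypothetical placeholders rather than measured values, and the triplicate averaging simply mirrors the replication described above.

```python
# Minimal sketch of GmUBQ-based normalization of RT-PCR band intensities.
# All intensity values are hypothetical placeholders for ImageJ readouts.
from statistics import mean

# Triplicate band intensities (arbitrary ImageJ units) per sample.
intensities = {
    "wild_type": {"GmGAL2": [12.0, 11.5, 12.4], "GmUBQ": [98.0, 101.2, 99.5]},
    "line_1":    {"GmGAL2": [55.3, 57.1, 54.8], "GmUBQ": [97.6, 100.4, 98.9]},
}

def relative_expression(sample: dict, target: str, reference: str = "GmUBQ") -> float:
    """Average target intensity divided by average internal-control intensity."""
    return mean(sample[target]) / mean(sample[reference])

for name, sample in intensities.items():
    print(f"{name}: GmGAL2/GmUBQ = {relative_expression(sample, 'GmGAL2'):.3f}")
```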
Constructing Expression Vectors and Plant Transformation

Construction of the expression vector was based on Gateway technology. The ORF of the GAL2 gene was cloned into the entry clone pDONR201 with BP clonase (Invitrogen) and transferred to the destination vector with LR clonase (Invitrogen). The resultant vector was a binary vector in which GAL2 was driven by the CaMV 35S promoter. It was transferred into Arabidopsis (Ler) plants by the floral dip approach mediated by Agrobacterium strain GV3101 90RK.

Phylogenetic Analysis

A phylogenetic analysis of protein sequences was carried out using the amino acid sequence alignment generated by CLUSTAL-W. A neighbor-joining tree was built using MEGA version 3.1. Support for the tree was assessed using the bootstrap method with 1,000 bootstrap replicates. The numbers at each node represent the bootstrap support (percentage).
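For readers who prefer a scriptable route, the sketch below outlines a comparable neighbor-joining analysis with 1,000 bootstrap replicates using Biopython rather than MEGA 3.1; the alignment file name is hypothetical, and the support values it produces are analogous to, not identical with, MEGA's.

```python
# Minimal sketch: neighbor-joining tree with bootstrap support, analogous
# to the MEGA 3.1 analysis described above (illustrative only).
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Phylo.Consensus import bootstrap_consensus, majority_consensus

# 'agl11_homologs.aln' is a hypothetical CLUSTAL-W protein alignment of
# GAL2 and its AGL11 homologs.
alignment = AlignIO.read("agl11_homologs.aln", "clustal")

calculator = DistanceCalculator("identity")              # pairwise identity distances
constructor = DistanceTreeConstructor(calculator, "nj")  # neighbor-joining method

# 1,000 bootstrap replicates; node confidences then correspond to the
# bootstrap support percentages reported at each node of the tree.
tree = bootstrap_consensus(alignment, 1000, constructor, majority_consensus)
Phylo.draw_ascii(tree)
```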
Cloning the GAL2 Gene

We used the sequence of AGL11 (At4g09960) as a query in a tBLASTn sequence search (http://www.ncbi.nlm.nih.gov/) and detected a set of ESTs with high sequence similarity to AGL11 (The Gene Index Project, http://compbio.dfci.harvard.edu/tgi/). Among them, AW705451 covered the 5′ terminal sequence of a candidate gene and was used to design the RACE primers described above (Table 1). Using the RACE approach, we cloned a MADS box gene, GAL2 (Glycine max Agamous Like 2). A BLAST analysis against the TAIR8 database (http://www.arabidopsis.org/Blast) indicated that GAL2 is homologous to Arabidopsis AGL11 (Fig. 1a). Phylogenetic analysis (www.ncbi.nlm.nih.gov/blast) also suggested that GAL2 belongs to the dicot family of AGL11 (Fig. 1b). The GAL2 protein sequence had the highest similarity (95% identity) to LjAGL11 and 75.2% identity with AtAGL11; it has relatively lower sequence similarity with the AGL11 homologs of monocots. Like other MADS box proteins, GAL2 has a MADS box (residues 3-57) at the N-terminus and a K box (residues 74-177), which are key to MADS box function. A bipartite nuclear localization signal (residues 9-26) was found within the MADS box region (Fig. 1a; http://myhits.isb-sib.ch/cgi-bin/motif_scan).

Fig. 1 Alignment of the protein sequences of soybean GAL2 and its homologs in Arabidopsis thaliana and Lotus japonicus. The red box denotes the MADS box, the pink box highlights the nuclear localization signal in the MADS box, and the green box shows the F box. Species abbreviations: At (Arabidopsis thaliana); CUM10 (Cucumis sativus); EgMADS1 (Elaeis guineensis); EgMADS1 (Eustoma grandiflorum); FBP7 (Petunia x hybrida); FBP11 (Petunia x hybrida); GhMADS-2 (Gossypium hirsutum); HoMADS1 (Hyacinthus orientalis); LjAGL11 (Lotus japonicus); LlMADS1 (Lilium longiflorum); LlMADS2 (Lilium longiflorum); LcMADS1 (Litchi chinensis); OsMADS13 (Oryza sativa); SlAGL11 (Solanum lycopersicum); VvMADS5 (Vitis vinifera); ZAG2 (Zea mays); ZMM1 (Zea mays).

GAL2 Promotes Flowering in Arabidopsis

To study the function of GAL2 in plant development, we overexpressed GAL2 via the CaMV 35S promoter in Arabidopsis plants. The transgenic plants were analyzed by RT-PCR (Fig. 2e and f) and quantitative PCR (Fig. 2g) to confirm the presence and expression level of the transferred gene. Three transgenic lines showed GAL2 expression, each at a different level. Transgenic plants expressing GAL2 flowered earlier than the wild-type control, and the level of GAL2 expression appears to correlate with flowering time (Fig. 2d) and the total number of leaves (Fig. 2c). The acceleration of flowering in the transgenic plants expressing GAL2 was more pronounced in short-day conditions than in long-day conditions (Fig. 2a and b). These results suggest that the activity of GAL2 is partially dependent on photoperiod in Arabidopsis.

We next examined the expression levels of different flowering time genes in GAL2-overexpressing plants, including FLOWERING LOCUS T (FT), SUPPRESSOR OF OVEREXPRESSION OF CONSTANS (SOC1), CONSTANS (CO), APETALA1 (AP1), and LEAFY (LFY). Figure 3a shows that expression of FT, SOC1, CO, AP1, and LFY increased in the transgenic lines, while expression of the flowering suppressor FLC was repressed (Fig. 3). GmGAL2 overexpression in Arabidopsis thus induces flowering and leads to elevated levels of CO and FT mRNA. Expression of SOC1, AP1, and LFY is also increased, presumably as a consequence of higher FT transcription, although it is also possible that overexpression of STK or GmGAL2 in Arabidopsis leads to artificial effects.

GAL2 Affects the Development of Different Organs in Arabidopsis

GAL2 also affected the development of different organs, as OsMADS13 did (Favaro et al. 2002). In transgenic plants, all the leaves are small and curl upwards, and this phenotype is much stronger in long-day than in short-day conditions (Figs. 2a and b, 4b and c). Flowers are smaller and shorter than those of the wild type (Fig. 4d and e). In particular, the petals and sepals are shorter and do not cover the pistil completely, so that the filaments are exposed, while the style is longer than in the wild type (Fig. 4d). The number of seeds per silique is reduced because the mature silique is shorter (Fig. 4g) than that of wild-type plants (Fig. 4f). Moreover, overexpression of GAL2 has some additional effects on plant development. As shown in Fig. 4g, the siliques of plants overexpressing GAL2 have persistent petals. However, carpel- and stamen-like organs, and ovule-like structures, were not found on any floral organs of the transgenic flowers. It is conceivable that GAL2 cloned from soybean might play additional roles in Arabidopsis development other than the regulation of flowering time.

GAL2 is Mainly Expressed in Flowers and Pods in Soybean

Analysis of the expression pattern of GAL2 in soybean showed that GAL2 is constitutively expressed in all organs and at all developmental stages, but higher levels of expression were found in flowers and pods than in vegetative organs (Fig. 5). This observation is consistent with its hypothesized role in organ development in soybean.

GmGAL2 is Expressed at Higher Levels in SD than in LD in Soybean

Analysis of the expression pattern of GAL2 in soybean KN18 under short-day and long-day conditions showed that GmGAL2 is expressed at higher levels in SD than in LD (Fig. 3b).

Discussion

We cloned a STK/AGL11 homolog from soybean, GAL2 (Glycine max Agamous Like 2). Its sequence shows high similarity to AGL11 genes in many plants, and it can be classed into the AGL11 group of dicots.
The sequence of GAL2 is most similar to an AGL11 protein of a legume (Lotus japonicus, with 95% identity), while it has only 75.2% similarity to AGL11 of a crucifer (A. thaliana). These data suggest that AGL11 genes might originate from a common ancestor but have evolved independently in dicots, monocots, long-day plants, and short-day plants. Thus, AGL11 might have different functions in long-day and short-day plants, although they share some conserved functions.

The sequence of GAL2 shows the typical characteristics of MADS box proteins, such as a highly conserved MADS box at the N-terminus, a nuclear localization signal within the MADS box, and a K box in the middle of the protein. These data suggest that GAL2 is a MADS box gene and a putative STK/AGL11 homolog in soybean.

Like its homologs in other plants, ectopic expression of GAL2 affects Arabidopsis development, but its behavior differs from that of any other AGL11 homolog. As with its homolog in lily, overexpression of GAL2 promotes flowering in Arabidopsis under both long-day and short-day conditions; however, GAL2 might not be involved in regulation of the circadian clock, because ox-GmGAL2 does not seem to affect the circadian clock in Arabidopsis (data not shown). GAL2 does not affect the morphology of flower organs, including ovules and sepals, although it can cause smaller flower organs.

Soybean GAL2 has additional functions in the development of flower organs. Flowers and siliques are smaller than those of wild-type plants, and persistent petals are obvious on the siliques (Fig. 4). In soybean, GAL2 is expressed at a higher level in flower organs and pods than in vegetative organs (Fig. 5). This suggests that GAL2 is mainly involved in the regulation of flowers and pods, in addition to its possible function at all stages of soybean development. Judging from the expression patterns of flowering genes, GAL2 participates in multiple pathways to enhance Arabidopsis flowering. In GAL2 transgenic Arabidopsis, flowering activators such as CO, FT, SOC1, LFY, and AP1 are upregulated, while the flowering repressor FLC is downregulated.

Despite its low level of expression at vegetative stages, GAL2 might also function in the development of vegetative organs. The curly-leaf phenotype of transgenic Arabidopsis supports that hypothesis (Fig. 4b, c). The effects of AGL11 and its homologs on leaf development are species-dependent. The phenotype of curly leaves was observed in the bract leaves around flowers in Arabidopsis plants ectopically expressing rice OsMADS13 (Favaro et al. 2002). Expression of AGL11 from Arabidopsis resulted in curved rosette leaves and bracts in transgenic Arabidopsis plants (Favaro et al. 2002). All the leaves of plants overexpressing GAL2 have a curly phenotype (Fig. 2a and b). Therefore, GAL2's function in leaf development is not the same as that of OsMADS13, even though both rice and soybean are short-day plants. The effect of GAL2 on leaf development might be photoperiod-dependent, because long-day conditions enhance GAL2 function (Fig. 2a and b).

In summary, GAL2 is a soybean homolog of A. thaliana AGL11. It has similar, but not identical, functions in plant development compared with its homologs in other plants.
Theorizing Institutional Entrepreneuring: Arborescent and rhizomatic assembling

A growing body of research has cataloged the myriad actors involved in tackling persistent institutional problems. Yet we lack a theoretical toolkit for explicitly conceptualizing and comparing diverse modes of institutional entrepreneuring (the processes whereby actors are created and equipped for institutional action) capable of ameliorating grand challenges. Drawing on assemblage theory, we articulate two ideal-typical modes of assembling actorhood: arborescent and rhizomatic. We differentiate each mode along four principles: association, combination, division, and population. Building on our theorization, we propound an arborescent-rhizomatic space comprising clusters of arborescent, rhizomatic, and hybrid actorhood. To explore the generativity of our framework, we revisit selected research at the intersection of institutional entrepreneurship and grand challenges. We close by articulating how our concept of assembling actorhood reorients research toward institutional entrepreneuring and contributes to the application of assemblage theory within organization studies.

We identify four principles constitutive of each mode: association, combination, division, and population. Whereas arborescent assembling relies on hierarchy and homogeneous multiplicities and is composed of dualisms, segmented according to taken-for-granted breaks, and propagated through genealogy and imitation, rhizomatic assembling entails centerless multiplicities comprising disparate connections between heterogeneous elements, which may break off and start up from any point, and populate through contagion and epidemics. Building on these two ideal-type modes of assembling and their distinguishing principles, we reconsider previous research at the intersection of institutional entrepreneurship and grand challenges. Given their systemic and interdependent nature, grand challenges might evince actorhood that reflects either extreme of arborescent or rhizomatic modes of assembling, or some mixture of the two. Reading broadly but selectively, we problematize rather than merely summarize prior findings (e.g., Alvesson & Sandberg, 2020), with the ultimate goal of opening up new understandings of processes of institutional entrepreneuring. We close by discussing how our concept of assembling reorients research in terms of institutional entrepreneuring specifically, and facilitates the application of assemblage theory within organization studies more generally.

Notably, our assemblage theoretic model enables scholarship on institutional entrepreneurship to move past current dichotomies grounded in seemingly irreconcilable onto-epistemological assumptions (Garud et al., 2007; Hardy & Maguire, 2017) and reorients it toward institutional entrepreneuring. Whereas scholars have depicted these competing understandings as fundamentally different explanations, we posit that they are merely descriptions of different phenomena to be explained. Given its focus on modes of assembling actorhood, our approach provides an inherently processual foundation from which to theorize institutional entrepreneuring, without proscribing the configurations actorhood might take. Whether such assembling processes tend toward more arborescent or rhizomatic modes is an open question, one that can be both imagined and investigated using our conceptual framework.
Assemblage Theory and Actorhood

Over the past three decades, scholars have investigated myriad forms of institutional entrepreneurship (for recent reviews, see Hardy & Maguire, 2017; Micelotta et al., 2017). However, this work increasingly has been bifurcated into seemingly incommensurable perspectives, such as between actor-centric and process-centric accounts (Hardy & Maguire, 2008) and intentional versus emergent understandings of agency (Granqvist & Gustafsson, 2016). More recently, a new stream of research on grand challenges has emerged (Ferraro et al., 2015; George, Howard-Grenville, Joshi, & Tihanyi, 2016). As with research on institutional entrepreneurship, actors are a central focus. However, rather than balkanizing into one camp or another, grand challenges researchers appear to have elided these problems by taking a more ad hoc approach to theorizing actorhood (e.g., see the diversity of approaches in George et al., 2016). One downside is an impaired ability to integrate and generalize findings from this growing body of research.

In both literatures, what has so far eluded scholars is a conceptual approach for theorizing institutional entrepreneuring that is flexible enough to accommodate different configurations of actorhood, expansive enough to address the complexities, uncertainties, and evaluativities endemic to grand challenges, yet incisive enough to enable comparison and generalization across studies. To address this onto-epistemological shortcoming (Alvesson & Sandberg, 2011), we turn to assemblage theory (Deleuze & Guattari, 1975, 1976, 1980, 1986, 1987) to theorize the assembling processes whereby actorhood is configured within particular historical contexts (Nail, 2017). Although assemblage theory is increasingly drawn upon in organization and management research, scholars have overlooked its potential contributions to the conceptualization of actorhood. For instance, some scholars have built on conceptualizations of assemblage offered by Callon or DeLanda, which diverge from Deleuze and Guattari's and ignore the rhizome-tree distinction that runs through A Thousand Plateaus (Buchanan, 2015). Hence, we opted to read Deleuze and Guattari directly, distilling from them two ideal-type modes of assembling actorhood: arborescent and rhizomatic.

Deleuze and Guattari's concept of assemblage

The concept of assemblage is the "general logic" running through A Thousand Plateaus (Nail, 2017). Unfortunately, the English word assemblage does not adequately capture the meaning of the original term agencement, which refers to the action of matching or fitting together a set of components (agencer). Thus, contrary to what is connoted by the English term, Deleuze and Guattari's original terminology denotes a "process of arranging, organizing, fitting together," not the static arrangement resulting from such a process (Wise, 2011, p. 91). In other words, assemblage theory might be more aptly called assembling theory, in which attention is squarely on the processes of "assembling agency" (Bowden, 2020).

Critical to Deleuze and Guattari's concept of assemblage is the notion of multiplicity. "The book's fundamental claim is that things in general are, at bottom, assembled multiplicities as opposed to substances. Indeed, the particular elements of assembled multiplicities are also more or less loosely assembled multiplicities, all the way down, as it were" (Bowden, 2020, p. 386).
As one consequence, assemblages are not unities; instead, they comprise many distinct interconnected components, including both content (i.e., pragmatic components such as bodies, tools, actions, etc.) and expression (i.e., semiotic components such as signs, utterances, etc.). Because these components retain their autonomy, they can "take flight" from one assemblage and join another. For example, human, horse, and bow comprise the mounted archer assemblage (Deleuze & Guattari, 1987), but each of these components can be found in other assemblages. In a more complex example, the electric power grid assemblage includes not only human bodies, machines, wires, coal, fire, electrons, and electromagnetic fields, but computer programs, legislation, and human desires (Bennett, 2005). What defines an assemblage are the relationships between components that shape what it is and what it is capable of doing (Wise, 2011), and these are always a matter of processes of assembling. Given these dynamics, the possibility of change is present in any assemblage; no assemblage is permanent (Wise, 2011).

Each assemblage is unique, with its own history of formation (Nail, 2017). In Deleuze and Guattari's (1987) language, an assemblage's identity is defined by its territory, which it marks, claims, carves out, or seizes from the strata, that is, the physicochemical, organic, and alloplastic layers or milieus on which it sits. Put differently, every assemblage is composed of decoded fragments borrowed from its milieus. Thus, "the first concrete rule for assemblages is to discover what territoriality they envelop" (p. 503). Yet every assemblage contains within it processes of change, "deterritorialization," that destabilize and transform it. These open an assemblage up, allowing it to connect to other assemblages and expand its territory, but also expose it to loss, contraction, decay, and even death.

From an assemblage perspective, actorhood is far from an exclusively human endeavor. Instead, actorhood emerges from interactions between many interconnected components, which include non-human actors of all sorts, from the laws of physical matter and the biological processes of organisms to bureaucracies, theories, and machines. Thus, in a very real sense, for Deleuze and Guattari, there is no actorhood apart from an assemblage, and conversely there is no assemblage without actors that bring it about (Nail, 2017). Assemblages are defined by their capacities to affect and be affected, and the compositions they can or cannot make.

Finally, we see compatibilities between assemblage theory and institutional theory, particularly the latter's more phenomenological and constructivist variants (Jepperson, 1991; Meyer & Vaara, 2020). Deleuze and Guattari's ontology is one of immanence (Bowden, 2020; Nail, 2017) in which experience is seen not "as a relation between a subject who senses and an object that is sensed," but rather "as being prior to subjects and objects . . . a subjectless and objectless field of experience" (Lawlor, 2017, p. 62). This is a position that comports well with recent developments in institutional theory (Friedland, 2013; Mutch, 2018), and its early foundations (Gehman, 2021; Meyer, 2008).
Assemblage ontology: The foundation of arborescent and rhizomatic assembling

We have on numerous occasions encountered all kinds of differences between two types of multiplicities: metric and nonmetric; extensive and qualitative; centered and acentered; arborescent and rhizomatic; numerical and flat; dimensional and directional; of masses and of packs; of magnitude and of distance; of breaks and of frequency; striated and smooth. (Deleuze & Guattari, 1987, p. 484)

In addition to developing the concept of assemblage, in A Thousand Plateaus Deleuze and Guattari introduced two ideal types: the rhizome and the tree. In botany, a rhizome is a type of plant stem that grows horizontally underground and is composed of many nodes, each of which can sprout a shoot. The shoots grow out of the ground, becoming the visible part of the plant, while the rhizome itself is often hidden from human sight. Examples include ferns, spider plants, ginger, bamboo, and lotus flowers, as well as many invasive plants. Building on this imagery, Deleuze and Guattari appropriated the term to designate a certain type of assemblage that functions as a centerless, ever-changing, and unpredictable network composed of interconnected, heterogeneous elements (Chia, 1999; Linstead & Thanem, 2007). Throughout A Thousand Plateaus, Deleuze and Guattari contrasted the rhizome with a tree: "a rhizome as subterranean stem is absolutely different from roots and radicles" (Deleuze & Guattari, 1987, p. 6). Whereas the rhizome structure often remains invisible, the tree structure, or arborescence, is central to how we think about and describe the world (Adkins, 2015). "The tree imposes the verb 'to be' but the fabric of the rhizome is the conjunction, 'and ... and ... and ...'" (Deleuze & Guattari, 1987, p. 25). Indeed, the distinction between these two ideal types is a recurring theme throughout A Thousand Plateaus.

Building on this fundamental distinction, in the remainder of this section we differentiate between the tree and the rhizome and show how they constitute two ideal-typical modes of assembling actorhood: arborescent and rhizomatic. Through a close reading of Deleuze and Guattari's (1987, p. 7) attempt to "enumerate certain approximate characteristics of the rhizome," we identify four principles along which to analytically distinguish arborescent and rhizomatic assembling: association, combination, division, and population (see Table 1).

Principle of association. First, arborescent and rhizomatic assembling can be distinguished according to the principle of association. A tree embodies stable and hierarchical organization; it "plots a point, fixes an order" (Deleuze & Guattari, 1987, p. 7). Thus, arborescent assembling is built upon hierarchical structures that connect similar elements (Adkins, 2015). Logic, biology, and linguistics all rely on trees. Taxonomic systems are arborescent. For example, in biology, all organisms are organized into a "tree of life" according to their species. More complex species branch out from simpler ones, their common ancestors. Likewise, in linguistics, languages branch out from common ancestors through processes of mutation. More generally, arborescent assembling reduces complexities through codification and grammatical operations. The stability and order of arborescent assembling is achieved through a homogenizing process that elevates one particularity while subordinating other differences.
In biology, for instance, a large number of species can be categorized under the root concept "mammal" or "vertebrate" despite significant differences between them. More generally, arborescent assembling "simplif[ies], or at the very least tame[s] hugely complex or proliferating systems" (Glezos, 2012, p. 164). Importantly, in arborescent assembling, hierarchy pre-exists the individual, and transmission proceeds through preestablished channels: "An element only receives information from a higher unit, and only receives a subjective affection along preestablished paths" (Deleuze & Guattari, 1987, p. 16). For instance, in biology, genetic code is passed down from ancestor to descendant.

By comparison, "the rhizome connects any point to any other point, and its traits are not necessarily linked to traits of the same nature" (Deleuze & Guattari, 1987, p. 21). For instance, while fabric is arborescent, composed of warp and woof, felt is rhizomatic, comprising multiple and random connections between fibers. Moreover, rhizomatic assembling creates relations between heterogeneous elements with no fixed structure and no single center. The Internet is an example of rhizomatic assembling. More generally, Glezos (2012) proposed that rhizomes subvert traditional hierarchies. Transversal lines cut across the normal order from local to national to international; top-down initiatives co-exist alongside allegiances between disparate groups. Compared with arborescent assembling, rhizomatic assembling is not a reduction to linguistic universals, but instead involves a throng of dialects. Finally, the internal organization of rhizomatic assembling is in constant flux because it ceaselessly establishes new connections between elements.

Beyond connecting heterogeneous entities, rhizomatic assembling enables novel transmissions and translations of information. Viruses follow this rhizomatic pattern. Deleuze and Guattari (1987, p. 10) noted that some viruses "can take flight, move into the cells of an entirely different species," thereby transporting genetic material, influencing evolution, and creating transversal genetic links between species not otherwise connected; thus, "our viruses cause us to form a rhizome with other animals." Viewed in light of the Covid-19 pandemic, their comment first published in 1980 is eerily prescient:

The difference is that contagion, epidemic, involves terms that are entirely heterogeneous: for example, a human being, an animal, and a bacterium, a virus, a molecule, a microorganism... These combinations are neither genetic nor structural; they are interkingdoms, unnatural participations. (Deleuze & Guattari, 1987, pp. 241-242)

Principle of combination. Second, arborescent and rhizomatic assembling produce multiplicities that combine many elements, but these multiplicities are inherently different. Arborescent assembling creates multiplicities that subsume the many under the one through "coding" moves such as labeling, framing, and categorizing (Adkins, 2015). Categories and labels homogenize and simplify, but also dichotomize, creating distinctions which must in turn be dialectically brought together or unified according to some higher principle (Holland, 2013). "The notion of unity appears only when there is a power takeover in the multiplicity by the signifier or a corresponding subjectification proceeding" (Deleuze & Guattari, 1987, p. 8).
"Coding" is essential to arborescent assembling in the sense that it depends on the ascription of identities to create multiplicities that will "hold their shape." Consequently, the assemblages that arborescent assembling produces are often "overcoded": their components have fixed and specific meanings or statuses, and are subject to rigid rules (Deleuze & Guattari, 1987). By comparison, rhizomatic assembling builds "true" multiplicities having "neither subject nor object, only determinations, magnitudes, and dimensions that cannot increase in number without the multiplicity changing in nature" (Deleuze & Guattari, 1987, p. 8). Differences are allowed to persist and co-exist; they might be dynamically negotiated, but they are never erased. In this way, rhizomatic assembling yields "not a discrete, static unity, but something constantly entering into and breaking off combinations with other multiplicities" (Adkins, 2015, p. 26). The assemblages resulting from rhizomatic assembling amount to nothing more than their elements and the connections between them. No "supplementary dimensions" hold them together. For instance, Glezos (2012, p. 174) described transnational activism as rhizomatic; it is "not just a process of different agents coming together in recognition of an implicit unity which precedes them . . . or a coded negotiation between pre-existing local unities with established and essential identities and interests." Rather, it is a process of becoming: "Transnational activism does not just build new movements, it also builds new actors" (Glezos, 2012, p. 174), and such ongoing interactions change identities and evoke new perceptions. Principle of division. Third, arborescent and rhizomatic assembling can be distinguished according to the principle of division. Arborescent assembling strives for neat divisions along "natural" boundaries and separations between their constituent parts or segments, which are ordered in nesting hierarchies of importance. In this way, arborescent assembling works with "discrete and atomistic units" which "are in principle separable" (Adkins, 2015, p. 27). Deleuze and Guattari (1987, pp. 9, 198) referred to these as "oversignifying breaks," in which power is a function of taken-forgranted roles, scripts, and categorizations. When unexpected disruptions occur (i.e., breaks appear at "unnatural" locations), arborescent assembling requires that these disruptions be resolved at a higher level in the hierarchy for the assembling process to recover and continue. Mimicry and imitation are strategies commonly employed in arborescent assembling to bridge seemingly irreconcilable differences between assemblages. Deleuze and Guattari used the unusual relationship between a particular type of orchid and a wasp as an example. Certain orchids can emit the same pheromones as females of a particular wasp species in addition to having external parts that resemble those females in appearance. Consequently, males of the wasp species attempt to mate with those parts, thereby pollinating the orchid. One interpretation is that this relationship constitutes arborescent assembling: "it could be said that the orchid imitates the wasp, reproducing its image in a signifying fashion" (Deleuze & Guattari, 1987, p. 10). However, Deleuze and Guattari (1987, p. 10) rejected this interpretation, averring instead that "something else entirely is going on: not imitation at all but a capture of code." 
Rhizomatic assembling, on the other hand, operates with neither natural separations nor clear boundaries; it refuses to carve nature at its joints (Adkins, 2015). Instead, rhizomatic assembling segments by "asignifying ruptures." Thus, a rhizome "may be broken, shattered at a given spot, but it will start up again on one of its old lines, or on new lines" (Deleuze & Guattari, 1987, p. 9). Whereas arborescent assembling tends toward territorialization (i.e., processes producing defined identities, routinized behaviors, and fixed boundaries), rhizomatic assembling tends toward deterritorialization (i.e., processes that result in more ambiguous identities, non-routinized behavior, and porous boundaries). Deterritorialization can be seen as threatening, because it is destabilizing, but it is also generative, since it offers an escape from rigid repetition, as exemplified by improvisation in jazz (Holland, 2013). Such "lines of flight," Deleuze and Guattari's term for escapes from fixed routines, open new possibilities, allowing rhizomatic assembling to expand, create new connections, and develop new repertoires, thereby fostering innovation and resilience. According to Taguchi (2016, p. 45): "This means actively engaging in a practice of estrangement to get away from taken-for-granted and common sense significations." Thus, Deleuze and Guattari concluded that the relationship between orchid and wasp is actually an example of rhizomatic assembling. Each is deterritorialized by the interaction as it becomes part of the other. And yet, the orchid is also reterritorialized; its reproduction is ensured.

Principle of population. Fourth, arborescent and rhizomatic assembling differ with regard to the principle of population. Arborescent assembling is founded on a genetic axis and grows and expands by following a plan or blueprint, that is, by applying or reproducing something readymade. Describing this phenomenon, Deleuze and Guattari used the term "decalcomania," as in the act of applying a decal or sticker (Adkins, 2015). Arborescent assembling populates by means of transferring existing pictures to other surfaces, or by making a tracing, reflecting a fundamentally self-referential or representationalist approach (Watson, 2013). To illustrate, Deleuze and Guattari (1987) referred to the work of psychoanalysts such as Klein and Freud, who took the rich psychological experiences of their patients and reduced them to familiar categories or "molar unities": "the father, the penis, the vagina, Castration with a capital C..." (p. 27, emphasis in the original). Here, as with arborescent assembling more generally, the psychoanalyst relies on alleged "competence." Deleuze and Guattari critiqued this "decalcomania" as limiting: "Klein and Freud only have three stickers, . . . mother, father, and child. Whatever map the patients draw the analysts insist that these stickers be placed over the top so the pictures always come out the same, as Oedipus" (Adkins, 2015, pp. 29-30). In other words, tracings always follow pregiven structures and predefined pathways (Bowden, 2020).

Rhizomatic assembling, on the other hand, "make[s] a map, not a tracing" (Deleuze & Guattari, 1987, p. 12). "A map is always contingent and partial, always drawn for some purpose, and omitting that which is, at that point, considered irrelevant" (Glezos, 2012, p. 177). Mapping is a performance, a singularity, and yet alternative mappings of the same territory are possible, any of which may prove useful.
There is no single entry or exit to the map, no one structure or generative order: "one of the most important characteristics of the rhizome is that it always has multiple entryways" (Deleuze & Guattari, 1987, p. 12). Whereas tracings are sedentary, maps are nomadic. In place of heredity or mimesis, rhizomes propagate by epidemic and contagion. Such rhizomatic population depends on experimentation (Watson, 2013); the ever-changing outcome is emergent, and cannot be known a priori (Bowden, 2020). This also means that a rhizome "has no beginning or end; it is always in the middle, between things, interbeing, intermezzo" (Deleuze & Guattari, 1987, p. 25). A rhizome is always becoming.

Assembling actorhood: Between the tree and the rhizome

No sooner do we note a simple opposition between the two kinds of space than we must indicate a much more complex difference by virtue of which the successive terms of the oppositions fail to coincide entirely. And no sooner have we done that than we must remind ourselves that the two spaces in fact exist only in mixture: smooth space is constantly being translated, transversed into a striated space; striated space is constantly being reversed, returned to a smooth space. (Deleuze & Guattari, 1987, p. 474)

Although Deleuze and Guattari (1987, p. 13) acknowledged that they had "reverted to a simple dualism" by theorizing binary oppositions, they also went out of their way to periodically break them down, subvert them, or dance away from them. As is evident above, dualisms are associated with arborescence: "binary logic is the spiritual reality of the root-tree." Yet they were clearly dissatisfied with arborescence: "We're tired of trees. We should stop believing in trees, roots, and radicles. They've made us suffer too much" (Deleuze & Guattari, 1987, p. 15).

Whereas contrasting ideal types is clearly useful for theorizing, doing so also poses the danger of oversimplification. Deleuze and Guattari (1987) identified three problems of dualism. First, differences are too complex to fit neatly into a binary opposition, and therefore their theoretical ideal types are not in fact mutually exclusive opposites. Second, when looking for ideal types in real phenomena, one often finds mixtures, or "rhizome-root assemblages" (p. 15). Finally, phenomena more closely associated with one ideal type can transition (or be translated) to more closely resemble another: "There exist tree or root structures in rhizomes; conversely, a tree branch or root division may begin to burgeon into a rhizome" (p. 15). We agree with Deleuze and Guattari on these points. Although we have theorized two opposing ideal-type modes of assembling, these modes need not result in the assembly of pure types of actorhood; they can also produce a variety of combinations, or hybrids. Furthermore, actorhood is dynamic and can transition from being more rhizome-like to being more tree-like. Critically, an assemblage ontology provides a means of theorizing both modes of assembling actorhood.

Actorhood in the Arborescent-Rhizomatic Space

In this section, we explore the potential utility of our conceptual framework by examining the extent to which prior research at the intersection of institutional entrepreneurship and grand challenges manifests these four principles. Our primary goal in doing so is to explore the relevance of rhizomatic and arborescent assembling to scholarly understandings of institutional entrepreneuring.
In particular, we theorize an arborescent-rhizomatic space wherein actorhood may be assembled in arborescent, rhizomatic, or hybrid configurations (see Figure 1). We then apply the ideal-type principles to selected articles at the intersection of institutional entrepreneurship and grand challenges. We categorize studies manifesting at least three rhizomatic principles as illustrating rhizomatic actorhood. Similarly, we categorize studies manifesting at least three arborescent principles as illustrations of arborescent actorhood. Studies that manifest a mix of principles are categorized as hybrid actorhood. Our evaluation of prior work brings into sharp relief the capacity for an assemblage ontology to explain different types of actorhood (e.g., centralized versus distributed). In doing so, we expand the interpretive space for understanding institutional entrepreneuring by showing that it is not restricted to either rhizomatic or arborescent assembling, but may manifest in different admixtures of the two. This expanded interpretive space affords researchers a toolkit for grappling with institutional entrepreneuring in a flexible and unified way.

Arborescent actorhood

In a number of prior studies, scholars have theorized institutional entrepreneuring in terms of arborescent assembling. For example, Battilana and Dorado's (2010) study of two Bolivian microfinance organizations, BancoSol and Los Andes, clearly evidences all four arborescent principles. Through their activities, these two organizations were exposed to a community and a financial logic, each with distinct goals, management principles, and target populations. Members experienced a tension between the two logics, which they felt compelled to resolve, illustrating arborescent dualism. Instead of connecting and sustaining differences, interactions within the assembling processes were aimed at synthesizing the two logics into a higher-order microfinance logic by selecting, balancing, or combining elements from each. Central to this approach was the creation of a common organizational identity based on operational excellence, which discouraged the formation of subgroup identities aligned with either the community or the finance logic.

Furthermore, oversignifying breaks, another feature of arborescent assembling, were evident in the way the two logics were embedded in the education and professional experiences of employees, producing the threat of fracturing. Battilana and Dorado (2010) argued that these logics are always on the verge of breaking each other down when combined, because employees who exclusively align with either logic struggle to overcome their oversignified understandings. The solution devised by one organization was to transcend both logics at the higher level of human resource practices and staffing policies by hiring employees with little experience, who are therefore not deeply embedded in either logic (what they call tabula rasa), and "stamping" the organization's identity on them through socialization practices. This reflects the arborescent principle of decalcomania (or tracing). More generally, among the studies we reviewed that evidenced arborescent assembling, oversignifying breaks were particularly prominent. Whether it was sustainable forestry (Zietsma & Lawrence, 2010) or business response to climate change (Wright & Nyberg, 2017), the idea of "breaks" from the status quo appeared feasible only when signified in terms of familiar categories.
Interestingly, in prior studies manifesting one rhizomatic and three arborescent principles, the rhizomatic principle of connection and heterogeneity was evident. This is likely because addressing grand challenges, by its very nature, requires combining heterogeneous elements, such that even within arborescent assembling, heterogeneous connections become necessary.

Rhizomatic actorhood

Mair and Hehenberger's (2014) study of the emergence of venture philanthropy as a new institutional model within the established field of traditional philanthropy provides a vivid illustration of all four principles of rhizomatic assembling. Whereas traditional philanthropy involves making gifts or grants to organizations that address social problems, venture philanthropy emphasizes holding recipient organizations accountable by establishing specific terms of reciprocity between the giving and receiving entities. Mair and Hehenberger's (2014) core thesis is that the emergence of venture philanthropy can be explained by assembling that connects a variety of people and organizations with disparate traits through "frontstage" and "backstage" events, which accords with the rhizomatic principle of connection. In their case, frontstage events such as conferences brought together organizations such as "foundations, private equity firms, private banks, and universities" that "cut across silos" of traditional and venture philanthropy (p. 1186). In contrast with arborescent hierarchy, these organizations had equal status as members of the European Venture Philanthropy Association (EVPA), a "broad church" that encompassed many entities. In turn, this diversity facilitated widespread adoption of venture philanthropy models.

Furthermore, Mair and Hehenberger's (2014) study illustrates rhizomatic multiplicity by showing that contradictory institutional models of venture and traditional philanthropy can co-exist in a state other than competition or a fragile truce (e.g., Battilana & Dorado, 2010; Marquis & Lounsbury, 2007). They demonstrated how institutional entrepreneurs created safe "backstage" spaces for deconstructing the venture philanthropy model, allowing the discussion to move from ideology (why) to practices (how). Participants debated the adoption of venture philanthropy practices by sharing their experiences, "driving the transition toward mutualistic relationship" (Mair & Hehenberger, 2014, p. 1188).

Mair and Hehenberger's (2014) study also captures rhizomatic assembling via asignifying rupture. As venture philanthropy began to gain traction in Europe, breakdowns occurred between the two institutional models. Amid conflict and confrontation, the EVPA was established in 2004 to organize annual conferences in open spaces accessible to all. Participants soon realized that venture philanthropy practices needed refinement. Akin to a rhizome starting on new lines after a rupture, these workshops helped resolve conflicts over contested practices, bolstering the establishment of the venture philanthropy model. Each new shoot (i.e., venture philanthropy practice), while challenging the traditional philanthropy model, injected new energy into the process and contributed to the institutionalization of the venture philanthropy model.

Finally, Mair and Hehenberger's (2014) study illuminates rhizomatic cartography by showing how events can become relational spaces that lack a generative order and are instead emergent in response to ongoing challenges in the field.
For example, once the frontstage space of conferences no longer enabled actors to understand and adopt specific venture philanthropy practices, smaller "backstage" workshops emerged.

More generally, across studies, rhizomatic assembling was not a feature of a single actor or a monolithic group of actors; rather, it emerged from the assembling of individuals, organizations, events, and ideologies from divergent fields that came together to address grand challenges. These entities retained their autonomy and distinctiveness in the process of rhizomatic assembling. Furthermore, grand challenges contexts such as market-building, governing institutions such as professional associations, and fields such as venture philanthropy may lend themselves to asignifying rupture such that conflicts do not endure along taken-for-granted boundaries. In such circumstances, actorhood need not dualistically pit one logic against another or old against new. Even when studies depict arborescent dualism, such as between local and national logics (Marquis & Lounsbury, 2007) or between stakeholders and an organization (Ferraro & Beunza, 2018), they evoke rhizomatic features with regard to the other three principles. Connection in rhizomatic assembling, through, for example, dialogue (Ferraro & Beunza, 2018) or envisioning a common fate (Ansari, Wijen, & Gray, 2013), can be employed to address such dualisms. Other studies instead highlight arborescent decalcomania, such as when neoliberal principles undergird initiatives such as forest certifications (Bartley, 2007), or when responsibility frames are invoked to address the problem of conflict minerals (Reinecke & Ansari, 2016).

Hybrid actorhood

To illustrate the qualities of hybrid actorhood, we turn to Etzion and Ferraro's (2010) study of the Global Reporting Initiative (GRI) from the mid-1990s to 2010. Actorhood in this study embodies two arborescent principles (dualism and decalcomania) and two rhizomatic ones (connection and asignifying rupture). The rhizomatic principle of connection is evident in the GRI's early history. Assembling took place as the GRI pushed to establish sustainability reporting standards in concert with other organizations such as NGOs, chambers of commerce, investors, labor organizations, research institutes, etc., thereby "embodying the emergent field that it was attempting to steer" (p. 1096). The GRI achieved its goal of engaging in participative decision making to facilitate agreement among this diverse set of stakeholders. This agreement was important for GRI reporting to be accepted by actors in the field.

The formation of the GRI also reflects the rhizomatic principle of asignifying rupture. The Coalition for Environmentally Responsible Economies (now Ceres), a multisector NGO, played a central role in institutionalizing non-financial reporting. Ceres initially advocated for social and environmental reporting in the late 1990s, noting the reporting disparities across organizations. Seeing this heterogeneity as an opportunity, Ceres launched the GRI as a standard for reporting on nonfinancial metrics, thereby transferring its mandate to the GRI as an offshoot and creating a new line.

Conversely, arborescent dualism is reflected in how actors involved in assembling processes conceived of the relationship between financial reporting and GRI reporting. Etzion and Ferraro (2010) described the institutionalization of GRI reporting as relying on analogy.
For GRI reporting to take hold, it had to be made analogous to financial reporting, and later distinguished as a separate concept. Two separate, dualistic entities, financial and non-financial, had to be framed as analogous to each other. Finally, Etzion and Ferraro's (2010) study illuminates arborescent decalcomania in the evolution of the materiality, transparency, and completeness of reporting guidelines from 1999 to 2006. A clear structure that could be traced and re-traced across organizations was important in establishing GRI reporting. For example, the authors described how BP created a materiality matrix to ensure that reported data had material meaning for stakeholders. BP's materiality matrix identified materiality based on the "level of external concern" and "potential impact on BP's ability to deliver strategy" (p. 1103). The structure and order afforded by clear guidelines was important for institutionalizing GRI reporting.

A different kind of hybridity is evident in Dentoni, Pascucci, and Gartner's (2018) comparison of identity-persisting and identity-shifting community-based enterprises (CBEs). Hybrid assembling in this case included the CBEs, their stakeholders, and the routines developed to engage in distributed experimentation and make sense of emerging epiphanies. One set of CBEs, which largely reflected arborescent principles, engaged solely with positive epiphanies or exhibited only limited engagement with negative epiphanies, leading to identity persistence as they reorganized around old identities. Another set of CBEs, which largely adhered to rhizomatic principles, engaged with both positive and negative epiphanies, thereby facilitating identity shifts that led them to reorganize their practices around new identities. Importantly, hybridity in the actorhood of all CBEs stemmed from actions aimed at distributed experimentation, which fostered two identity pathways: one that transformed identities and one that sustained them. In other words, both types of CBEs exhibited hybrid actorhood.

Hybrid actorhood is also manifested when assembling processes exhibit both sides of the same principle. For instance, Dorado (2013) showed that the entry of many organizations with different preferences for hierarchy and connection into the microfinance field made institutional entrepreneurship at the field level possible. Whereas BancoSol was launched by a cross-section of tightknit elites from the community and the finance and banking industries, Los Andes was established by individuals from diverse backgrounds who drew on personal social networks to support the new venture. At the field level, microfinance assembling emerged through a combination of rhizomatic (i.e., connection) and arborescent (i.e., hierarchy) principles.

Discussion

Understanding actorhood has been a perennial concern of organization and management scholars (Hwang & Colyvas, 2020; Meyer, 2010). The need to better theorize types of actorhood is especially evident in the prior literature on institutional entrepreneurship (Battilana et al., 2009; Hardy & Maguire, 2008), and more recently, in the burgeoning literature on grand challenges (Ferraro et al., 2015; George et al., 2016). To address this need, we have built on Deleuze and Guattari's (1987) landmark work to articulate two ideal-typical modes of assembling actorhood: arborescent and rhizomatic.
Applying this framework to research at the intersection of institutional entrepreneurship and grand challenges enabled us to explore both the applicability and the generativity of our conceptual approach. Specifically, we have identified an arborescent-rhizomatic space by decomposing the actorhood evident in prior work according to the four principles in our framework. Below, we discuss how our work contributes to the revitalization of research on institutional entrepreneuring. We close by examining how our work contributes to the application of assemblage theory within organization studies.

Revitalizing institutional entrepreneuring

Our first contribution relates to longstanding interest in institutional entrepreneurship (Garud et al., 2007; Hardy & Maguire, 2017; Pacheco et al., 2010), and more recent suggestions to reorient this research in the direction of institutional entrepreneuring (Hjorth & Reay, 2018). Some 25 years ago, Holm (1995, p. 398) asked: "How can actors change institutions if their actions, intentions, and rationality are all conditioned by the very institution they wish to change?" Such provocations, together with DiMaggio's (1988, p. 14) concept of the institutional entrepreneur, sparked a string of widely cited studies (e.g., Garud, Jain, & Kumaraswamy, 2002; Maguire, Hardy, & Lawrence, 2004), as well as a highly influential special issue (Garud et al., 2007). However, the once vibrant work in this area seems to have plateaued. For instance, few notable studies were published between Hardy and Maguire's reviews in 2008 and 2017. Recent studies have arguably revealed more incremental insights by examining previously overlooked settings or particular stages of institutional change processes (Canales, 2016; Qureshi, Kistruck, & Bhatt, 2016). In our view, this outcome is at least partly due to the growing reification of apparently incommensurable dualisms (Hardy & Maguire, 2008; Granqvist & Gustafsson, 2016; Pacheco et al., 2010). Work on institutional entrepreneurship thus has ossified into different ontological camps, and in the process, become somewhat stifled.

In an effort to revitalize this important scholarly conversation, we have proposed an approach to theorizing actorhood that emphasizes the processes whereby actors are created and equipped for institutional action. Building on a set of principles derived from Deleuze and Guattari's work, we have identified how arborescent and rhizomatic modes of assembling make possible diverse configurations of actorhood. In a "differentiating move that clears a new direction" (Steyaert & Hjorth, 2003, p. 7), we have endeavored to open an arborescent-rhizomatic space that enables new understandings of the notion of institutional entrepreneuring (Hermes & Mainela, 2015; Hjorth & Reay, 2018).

In formulating the distinction between arborescent and rhizomatic modes of assembling, our point is not to simply replace prior dualisms with a new one. Rather, our aim has been to "employ a dualism of models only in order to arrive at a process that challenges all models . . . We arrive at the magic formula we seek: pluralism = monism" (Deleuze & Guattari, 1987, p. 20). In this regard, our assemblage theoretic model of actorhood problematizes the onto-epistemological assumptions (Alvesson & Sandberg, 2011) sustaining prior dualisms. Whereas scholars have depicted these competing understandings as ontological differences, we posit them as merely phenomenal descriptions on which researchers have based their analyses.
Rather than pre-committing analysts to a single type of actorhood, our assemblage perspective leaves open the question of an actor's ontology. Consequently, seemingly fundamental distinctions (e.g., actor-process, rational-nonrational) are reformulated as merely phenomenal and lose their edge. This is a "mode of individuation very different from that of a person, subject, thing, or substance. . . . Climate, wind, season, hour are not of another nature than the things, animals, or people that populate them, sleep and awaken within them" (Deleuze & Guattari, 1987, pp. 261, 263). Concomitantly, a single ontology provides a foundation from which to explain diverse types or configurations of actorhood. Key here is an understanding of what an assemblage can do, its active and passive effects, and the compositions it can or cannot make. By revisiting prior work through the lens of assemblage theory, we have provided an expanded interpretive space for institutional scholars to fully embrace the processes of institutional entrepreneuring. Doing so is different from process-centric accounts, which are posited as the opposite of actor-centric accounts (Hardy & Maguire, 2017). Assemblage theory understands actorhood as at once immanent and the result of ongoing assembling processes. It thus allows investigation of both processes and the configurations that define actorhood's essential features. Extending this insight, it becomes clear that institutional entrepreneuring itself depends on actorhood; actorhood is not only an outcome of assembling processes, but also its milieu or medium. For example, viewing institutional entrepreneuring "as an emergent outcome of activities of diverse, spatially dispersed actors" (Hardy & Maguire, 2017, p. 274) might be explained in arborescent terms as a matter of hierarchy and homogeneity, or in rhizomatic terms as entailing lines of flight between heterogeneous elements. At the same time, focusing on configurations of actorhood, their capacity for reflexivity or their cognitive and emotional compositions could be explained by arborescent principles such as oversignification. Conversely, rhizomatic principles of heterogeneity and connection are evident in accounts of institutional entrepreneuring that emphasize building connections to alternate institutions within the milieus they inhabit. Reformulating the ontological into the phenomenal has important implications for institutional theory broadly, and institutional entrepreneuring more specifically. For instance, consider the so-called paradox of embedded agency that has animated considerable scholarly debate (Garud et al., 2007;Holm, 1995). Some scholars have insisted that any study of institutional change "must give full explanatory weight to agency and conceive the full gamut of human action" (Delbridge & Edwards, 2013, p. 928). According to this view, institutional entrepreneurs are defined by interests, goals, reflexivity, and coercive capacity (Mutch, 2007). This perspective contrasts starkly with those who hold that "social realities are hierarchically structured and at least partly independent of individual actors at any given time" (Modell, Vinnari, & Lukka, 2017, p. 64). Assemblage theory informs this debate in two ways. First, it allows researchers to theorize actorhood in ways that are unique to each phenomenon and research problem (e.g., Hwang & Colyvas, 2020). Embeddedness can be seen as a feature of actorhood resulting from its mode of assembling.
For example, more arborescent institutional entrepreneuring, characterized by imitation, oversignification, and hierarchy, could produce highly embedded actors. On the other hand, less embedded actorhood may be the result of rhizomatic assembling, marked by contagion, lines of flight, and heterogeneity. In this regard, our assemblage theoretic typology can help researchers "lengthen, prolong, and relay the line of flight" (Deleuze & Guattari, 1987, p. 11), thereby moving to new territories in theorizing beyond the paradox of embedded agency. Second, assemblage theory's highly relational and material underpinnings decenter the interests, goals, and reflexivity of human actors. Such a corrective harkens back to Friedland and Alford's (1991, p. 240) original admonition to bring society back in: "One cannot derive a theory of society from the historic individuality that those institutional transformations created. The transhistorical individual cannot have ontological priority." Instead, it is necessary to specify "the institutional bases of individual and organizational identities, interests and actions." An assemblage perspective enables such theorization, one capable of encompassing not only human actors and the institutions they inhabit, but also myriad other elements. It is through the totality of these interactions and interrelations that actorhood emerges. Assemblage theory in organization studies Our research contributes to the growing interest in assemblage theory within organization studies (Carton, 2020;D'Adderio & Pollock, 2014;Glaser, 2017;Glaser et al., 2021;Orlikowski & Scott, 2008). Despite increasing awareness and selected use of the assemblage concept, few scholars have engaged with what is arguably Deleuze and Guattari's more profound contribution: the distinction between arborescent and rhizomatic assembling. For instance, although the term rhizome has been mentioned a handful of times in Organization Studies, it has always been in passing (e.g., Linstead & Thanem, 2007;Parker, 2017;Välikangas & Carlsen, 2020). Looking more broadly, the few scholars who have engaged with these concepts typically have done so metaphorically. For instance, Chia (1999, p. 210) proposed a "rhizomic" model of organizational change, contrasting it "against the dominant evolutionary, contextualist, and punctuated equilibrium models of change." Steyaert (2007) proposed taking a radical processual approach to studying entrepreneurship, and suggested doing so based on a Deleuze-inspired "rhizomatic logic." Kornberger, Rhodes, and ten Bos (2006, p. 66) contrasted arborescent and rhizomatic approaches to organizing, describing the arborescent approach as "one where 'all roads lead back to Rome' and where Rome is inevitably the 'top' of the organization." Offering the most sustained engagement we could find among organization scholars, Wood and Ferlie (2003) described "the organization of health care knowledge as non-linear, rhizomic communication" (p. 47) as opposed to "more widely assumed mechanisms of linear connectionism" (p. 58). Finally, stepping a bit further afield, others have built on Deleuze and Guattari's ideas to explore phenomena such as the Occupy Wall Street movement (Barthold, Dunne, & Harvie, 2018) and the rise of subversive forms of strategy (Munro & Thanem, 2018). While appreciating these prior attempts, we contribute by going beyond the metaphorical use of the rhizome. 
Instead, we take seriously Deleuze and Guattari's formulation of rhizomatic assemblages, which occupy a central place in their philosophical project, and which they repeatedly contrasted with arborescent assemblages. Our close reading of their work has enabled us to delineate two ideal-typical assembling processes-arborescent and rhizomatic-thereby offering an entirely novel theoretical typology for conceptualizing actorhood. Scholars have highlighted the importance of developing such typologies, which can foster rich theorizing by providing ways to organize and distinguish between complex phenomena (Doty & Glick, 1994). They also challenge simple cause-and-effect relations and instead allow for clusters and configurations which are flexible and potentially equifinal (Delbridge & Fiss, 2013). In line with Cornelissen's (2017) insights, the typology we have developed is deeply embedded in assemblage theory, and this strong philosophical embedding is central to our theoretical contribution. As anyone who has ever read A Thousand Plateaus can attest, it is a dense text. In this regard, a key contribution of our typology is its synthesis of four overarching principles-association, combination, division, and population-along which assembling may be conceptualized and analyzed. Although it may be tempting to view rhizomatic assembling in overwhelmingly positive terms, we have endeavored to treat both modes symmetrically. Especially notable here are Deleuze and Guattari's (1987) insights on rhizomatic assembling, compactly summarized in the quip: "the vampire does not filiate, it infects" (pp. 241-242). Whereas arborescent assembling entails propagation by means of filiation, heredity, and sexual reproduction, rhizomatic assembling occurs via epidemics, contagion, famine, and catastrophes. This is not a process of translation between codes, but "side-communication," a surplus value of code (p. 53). As the Covid-19 pandemic has shown, rhizomatic assembling has its downsides, too (see also Kuronen & Huhtinen, 2017 on "the rhizome of jihad"). We believe the framework we have developed provides a rich theoretical toolkit for investigating such issues, and perhaps not a moment too soon. For starters, Deleuze and Guattari (1987) described alliances and pacts as mirroring the form epidemics take in society. Hunting, war, occupation, and crime are all examples of rhizomatic assembling. So too are "minoritarian" groups, as well as groups that are oppressed, prohibited, or "on the fringe of recognized institutions" (p. 247). Such "monsters" constitute a rupture with central institutions. Rather than evolution, Deleuze and Guattari emphasized involution as the mechanism whereby "form is constantly being dissolved, freeing times and speeds" (p. 267). Involution is not regressive, but creative: "To involve is to form a block that runs its own line 'between' the term in play and beneath assignable relations" (p. 239). Although we lack the space to fully explore these observations, an assemblage ontology has much to contribute to academic conversations in areas such as criminal organizations (Cederström & Fleming, 2016;Vaccaro & Palazzo, 2015), stigmatization (Lashley & Pollock, 2019;Zhang, Wang, Toubiana, & Greenwood, 2020), and categories (Bowker & Star, 1999;Garud, Gehman, & Karnøe, 2010). Conclusion In Instincts and Institutions, first published in 1955, Deleuze (2004, pp. 19-20) observed: Every individual experience presupposes . . . 
the existence of a milieu in which that experience is conducted, a species-specific milieu or an institutional milieu. . . . Institution sends us back to a social activity that is constitutive of models of which we are not conscious, and which are not explained either by tendencies or by utility. In other words, institution precedes actors, and actors are always within institutions. Decades later, Deleuze and Guattari expounded these ideas and grounded them within an original social ontology. Drawing on this work, we have distinguished two ideal-typical modes of assembling actorhood and differentiated them in terms of four principles: association, combination, division, and population. Applying this framework to prior research at the intersection of grand challenges and institutional entrepreneurship, we have identified an arborescent-rhizomatic space in which clusters of arborescent, rhizomatic, and hybrid actorhood exist. As prior work makes evident, grand challenges manifest within an institutional matrix (Gehman, Lounsbury, & Greenwood, 2016). And yet, despite a central focus on actors, more integrated and programmatic insights have proved elusive thus far. Thus, grand challenges research constitutes a revelatory context for bringing to life our conceptualization of rhizomatic and arborescent assemblages. Our theoretical toolkit provides new possibilities for conceptualizing and comparing these modes of actorhood and their assembling processes. Looking ahead, we see opportunities for further applications of assemblage theory in organization studies and the potential to stimulate new insights on institutional entrepreneuring.
2021-09-25T16:06:58.129Z
2021-08-24T00:00:00.000
{ "year": 2021, "sha1": "f12a17010c06b29384b7b810c92f22328c9474df", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/01708406211044893", "oa_status": "HYBRID", "pdf_src": "Sage", "pdf_hash": "0486eb6775d2621e8b9ad91e293238a1ffe3130e", "s2fieldsofstudy": [ "Sociology", "Business" ], "extfieldsofstudy": [ "Sociology" ] }
249476460
pes2o/s2orc
v3-fos-license
A Glitch in the Matrix: The Role of Extracellular Matrix Remodeling in Opioid Use Disorder
Opioid use disorder (OUD) and deaths from drug overdoses have reached unprecedented levels. Given the enormous impact of the opioid crisis on public health, a more thorough, in-depth understanding of the consequences of opioids on the brain is required to develop novel interventions and pharmacological therapeutics. In the brain, the effects of opioids are far reaching, from genes to cells, synapses, circuits, and ultimately behavior. Accumulating evidence implicates a primary role for the extracellular matrix (ECM) in opioid-induced plasticity of synapses and circuits, and the development of dependence and addiction to opioids. As a network of proteins and polysaccharides, including cell adhesion molecules, proteases, and perineuronal nets, the ECM is intimately involved in both the formation and structural support of synapses. In the human brain, recent findings support an association between altered ECM signaling and OUD, particularly within the cortical and striatal circuits involved in cognition, reward, and craving. Furthermore, the ECM signaling proteins, including matrix metalloproteinases and proteoglycans, are directly involved in opioid seeking, craving, and relapse behaviors in rodent opioid models. Both the impact of opioids on the ECM and the role of ECM signaling proteins in opioid use disorder may, in part, depend on biological sex. Here, we highlight the current evidence supporting sex-specific roles for ECM signaling proteins in the brain and their associations with OUD. We emphasize knowledge gaps and future directions to further investigate the potential of the ECM as a therapeutic target for the treatment of OUD. INTRODUCTION In the United States, rates of opioid use disorder (OUD) and deaths from overdose have continued to climb over recent years, particularly in adolescents and young adults. Accompanying a rise in deaths from drug overdose has been a steady increase in the number of people diagnosed with OUD. Current estimates indicate that more than 3 million people have OUD, with an estimated 200,000 new diagnoses annually. Despite the enormous public health impact of OUD, we lack a basic understanding of the neurobiological mechanisms that contribute to OUD and the associated health consequences. OUD is a chronic, relapsing brain disease that can be managed by long-term medical interventions and maintenance therapies such as methadone or buprenorphine. Yet, ∼90-95% of people with OUD relapse despite treatment, as cravings and other challenges, such as protracted withdrawal, persist for weeks, months, and years (Smyth et al., 2010;Kadam et al., 2017;Montiel Ishino et al., 2020). Discovering new interventions and therapeutics for the treatment of OUD will require massive, parallel efforts across multiple clinical and basic research domains. A critical effort will be necessary to further define the diverse array of consequences of chronic opioid use on the brain and body, along with an in-depth investigation into the cellular and molecular mechanisms in the brain involved in opioid reward, craving, and relapse.
Opioids lead to long-lasting changes in gene transcription, protein signaling, receptor activity, synaptic morphology and plasticity, as well as neural circuit function that contribute to the development of addiction (Hearing, 2019;Li et al., 2019;Madayag et al., 2019;Song et al., 2019;Valentinova et al., 2019;Koob, 2020;Jiang et al., 2021;Seney et al., 2021;Tavares et al., 2021;Trieu et al., 2022;Xue et al., 2022). A major class of signaling proteins involved in opioid-induced neural plasticity includes cell adhesion molecules (CAMs), matrix metalloproteinases (MMPs), and proteoglycans, and these proteins provide structural support to neurons, astrocytes, and microglia in the formation of the extracellular matrix (ECM) and perineuronal nets (PNNs). ECM signaling proteins are involved in neurotransmission, synaptic plasticity, and vascular integrity in the brain. Over the past decade, the ECM has become a focus as a major contributor to long-lasting neuroadaptations accompanying various processes, including learning, stress, and opioid use disorder. ROLE FOR EXTRACELLULAR MATRIX SIGNALING PROTEINS IN OPIOID USE DISORDER In the brain, the ECM is critical in the regulation of synaptic function, blood-brain barrier integrity, and cell-to-cell communication. The scaffolding of the ECM comprises polysaccharides and glycoproteins that provide the necessary structure to support communication between neurons, astrocytes, and microglia, and helps facilitate both the formation of new synapses and tuning of synaptic functions (Dityatev et al., 2010;Ferrer-Ferrer and Dityatev, 2018). In particular, the ECM signaling proteins, MMPs, are implicated in opioid reward and addiction (Ishiguro et al., 2006). MMPs are multifunctional proteases involved in a variety of cellular pathways and processes including inflammation, cell migration, and angiogenesis (Visse and Nagase, 2003). Opioids likely augment the activity of MMPs in the brain, substantially remodeling the ECM, potentially leading to opioid-induced changes in astrocyte-neuronal communication, synaptic plasticity, and trafficking of excitatory receptors (Figure 1; Michaluk et al., 2009, 2011;Huntley, 2012). For example, opioids lead to increased expression of both MMP-2 and MMP-9 in cell lines (Gach et al., 2011), and notably, in the rodent (Chioma et al., 2021) and human (Kovatsi et al., 2013;Seney et al., 2021) brain. Both MMP-2 and MMP-9-dependent signaling may be important for opioid-induced degradation in the integrity of the blood-brain barrier and an increase in neuroinflammation associated with OUD in the human brain (Huntley, 2012;Dal-Pizzol et al., 2013;Song et al., 2015;Rempe et al., 2016;Hannocks et al., 2019;Seney et al., 2021;Akol et al., 2022). Indeed, OUD is associated with alterations in ECM signaling and dopaminergic, GABAergic, and opioidergic neurotransmission, along with increased neuroinflammation in the human dorsolateral prefrontal cortex and nucleus accumbens (Seney et al., 2021), major neural substrates for cognition, impulsivity, and reward. Consistent with this, intravenous self-administration of heroin leads to elevated activity of MMP-2 and MMP-9 in the nucleus accumbens of both male and female rats (Chioma et al., 2021). Notably, MMP activity returns to below baseline levels following extinction of heroin self-administration behavior (Chioma et al., 2021).
Opioid-induced increases in MMP activity are preferential to dendritic spines of dopamine receptor 1-expressing (D1+) medium spiny neurons (Chioma et al., 2021). In D1+ medium spiny neurons, MMP-9 activity seems to be acutely upregulated by heroin, returning to control levels after the removal of the drug and/or drug-cue (Chioma et al., 2021). As one of the major cell types in the nucleus accumbens that regulates drug reward-related behaviors, D1+ medium spiny neurons and associated MMP activity may serve as a key mechanism in the response to both opioid-induced and context-dependent neural plasticity (Smith et al., 2014). In mice, opioid administration also increases MMP-9 activity to modulate dopaminergic neurotransmission from the ventral tegmental area to nucleus accumbens (Nakamoto et al., 2012). Changes in MMP-2 and MMP-9 have been found in the blood from people being treated for morphine dependency (Najafi et al., 2018). While MMP-2 activity is increased in the serum of morphine-dependent patients, MMP-9 activity is decreased (Najafi et al., 2018). Other studies report elevated MMP-9 in blood from patients with OUD during opioid withdrawal (Salarian et al., 2018). Interestingly, both studies suggest MMP-9 reflects a possible treatment response, as the expression and activity of MMP-9 are reduced by methadone therapy (Salarian et al., 2018) and other treatments (Najafi et al., 2018). Changes in MMP expression in the blood of patients being treated for opioid dependency and addiction may reflect functional alterations in the central nervous system that are critical in the development of tolerance and physical dependence. For example, MMP-9 is increased in the brain and spinal cord of mice administered morphine across multiple days and contributes to the development of morphine tolerance for nociception (Nakamoto et al., 2012) and physical dependence (Liu et al., 2010). Pharmacological blockade of MMP activity or knockout of MMP-9 prevents the development of morphine tolerance for nociception (Nakamoto et al., 2012). Morphine-induced upregulation of MMP-2 and MMP-9 production has been implicated in ECM maintenance, particularly as it pertains to type IV collagen degradation and recycling (Gach et al., 2012).

FIGURE 1 | Synaptic morphology and function are regulated by ECM signaling proteins and microglia. The components of the ECM lie proximal to brain capillaries and vessels, condensed as PNNs around cell bodies, including neurons, astrocytes, and microglia, along with synapses and dendrites of neurons. ECM components are also distributed amongst cells of the brain within the parenchyma. Hyaluronan is primarily located in the neural interstitial matrix of the parenchyma. Hyaluronan is involved in the regulation of inflammation and myelination in the brain, including remyelination after insult or injury. Opioids lead to an increase in neuroimmune activation by microglia and other immune cell types in the brain. An induction of immune activation in the brain can lead to increased expression and activity of tPAs, MMPs, CAMs, and Collagen IV (Webersinke et al., 1992;Roberts et al., 2018). Augmented activity of these ECM signaling proteins remodels the ECM, with consequences for dendritic spine morphology, including the reduction of spine number in key regions associated with OUD (e.g., prefrontal cortex and nucleus accumbens). CAMs, cell adhesion molecules; CS-GAGs, chondroitin sulfate glycosaminoglycans; ECM, extracellular matrix; MMPs, matrix metalloproteinases; NF-κB, nuclear factor kappa B; OUD, opioid use disorder; PNNs, perineuronal nets; TIMP, tissue inhibitor of metalloprotease; TIMP1, TIMP metallopeptidase inhibitor 1; TLR2, toll-like receptor 2; tPA, tissue-type plasminogen activator. Figure created using BioRender.

Specifically, opioid-induced alterations in MMP-2 activity are driven by the nitric oxide/nitric oxide synthase (NO/NOS) system, which in turn is regulated by receptor families independent of the µ-opioid receptor, thereby indicating a need for further research in opioid receptor crosstalk and subsequent downstream signaling cascades. Of note, NO/NOS-related mechanisms are involved in opioid-induced inhibition of MMP-9 activity in an opioid-receptor-dependent manner (Gach et al., 2012). Therefore, ECM protein levels in the context of opioid use are tightly regulated by mechanisms dependent and independent of opioid receptor activity and are intertwined with the NO/NOS system. Taken together, these findings suggest that increases in MMP-2 and MMP-9 expression following opioid administration may be critical for behavioral tolerance and dependence as well as drug- and context-induced neural plasticity. Future studies should examine MMP-2 and MMP-9 in preclinical addiction model behaviors to examine their validity as potential therapeutic targets. A subset of MMPs, including MMP-2 and MMP-9, are activated by the serine protease tissue-type plasminogen activator (tPA), a key regulator of drug-induced synaptic plasticity and remodeling in major reward pathways of the brain (Calabresi et al., 2000;Sternlicht and Werb, 2001;Samson and Medcalf, 2006). Opioid administration leads to increases in tPA levels in the prefrontal cortex, hippocampus, and nucleus accumbens (Nagai et al., 2004). Importantly, the increases in tPA and MMPs during opioid administration are critical for the development of opioid tolerance (Yan et al., 2007;Nakamoto et al., 2012). tPA is also involved in locomotor sensitization to morphine (Bahi and Dreyer, 2008) and regulates the acquisition and maintenance of morphine self-administration behaviors (Yan et al., 2007), presumably via the modulation of dopamine neurotransmission in the striatum (Nagai et al., 2004;Yan et al., 2007). While increases in tPA and MMP are consistently found following opioid administration (Figure 1), the specific roles of tPA and MMP in opioid seeking, craving, and relapse behaviors, as related to OUD, are unknown, requiring more studies into the potential crosstalk between tPA and MMP pathways in brain and behavioral plasticity associated with chronic opioid use. Another class of ECM proteins called cell adhesion molecules (CAMs) may be involved in opioid reward-related behaviors and OUD. CAMs facilitate interactions between the ECM and various cell types in the brain. CAMs bind to other cell adhesion proteins and neighboring neurons to regulate neuronal growth, synaptic plasticity, and function. In the brain, some of the more common CAMs include neural CAM (NCAM) and the Cadherin family, including cadherin-2 (CDH-2) (Polanco et al., 2021). In the hippocampus, knockdown of neural CAMs (NCAMs) decreases the formation of conditioned place preference to morphine (Ishiguro et al., 2006).
Following a lethal dose of heroin, levels of NCAMs are increased in the hippocampus of postmortem brains from people with heroin addiction, which positively correlate with blood levels of heroin at the time of death (Weber et al., 2006). Levels of CDH-2 in peripheral plasma have been shown to be a potential biomarker for methadone treatment outcome, correlating with treatment success (Liu et al., 2020), while hippocampal RNA expression of CDH-2 is increased following oxycodone self-administration. However, this effect is specific to adult, but not adolescent mice, suggesting developmental stage may moderate the role of CAMs in opioid self-administration. Thus, opioids may lead to rapid increases in CAMs in a dose- and age-dependent manner in the brain, although whether CAMs directly contribute to neuroadaptations associated with OUD is still unknown, as these changes could be due to the acute effects of opioids. Future studies should examine the specific nature of CAM interactions concerning opioid use and relapse, with a specific focus on NCAMs and CDH-2 as potential biomarkers of opioid use. AN INTERPLAY BETWEEN THE EXTRACELLULAR MATRIX, MICROGLIA, AND NEUROINFLAMMATION IN OPIOID USE DISORDER The ECM, in conjunction with microglia and astrocytes, is integral in both pro- and anti-inflammatory responses in the brain. Several lines of evidence link pro-inflammatory cytokine signaling and microglial activity to susceptibility to opioid craving and reward processing (Bland et al., 2009;Hofford et al., 2019). Consistent with this, a recent study from our research group identified significant alterations in transcripts enriched for neuroinflammatory and ECM signaling in the dorsolateral prefrontal cortex and nucleus accumbens of people with OUD (Seney et al., 2021). For example, transcripts that are upregulated in both brain regions of people with OUD are enriched for tumor necrosis factor alpha (TNF-α) signaling via positive regulation of nuclear factor kappa B (NF-κB) (Seney et al., 2021). This finding further supports NF-κB-dependent activation of pro-inflammatory TNF-α signaling associated with OUD. While neuroinflammation may play a distinct role in OUD, of particular importance is the impact of neuroinflammatory cytokine signaling on ECM remodeling activity. In human and rodent brain, chondroitin sulfate glycosaminoglycans (CS-GAGs) accumulate around the synapse in response to inflammation (Li et al., 2013) and may be increased following chronic opioid use in the human brain (Seney et al., 2021). Indeed, the CS-GAG pathway is enriched in both the dorsolateral prefrontal cortex and nucleus accumbens of people with OUD (Seney et al., 2021). This raises the possibility that opioids and/or withdrawal from opioids leads to the aggregation of CS-GAGs at the synapses of neurons in regions involved in cognition and reward processing in response to alterations in the homeostatic regulation of inflammatory activity (Figure 1). Other factors involved in ECM signaling may also contribute to opioid reward-related behaviors and could be associated with OUD. For example, both TIMP metallopeptidase inhibitor 1 (TIMP1) and toll-like receptor 2 (TLR2) are involved in the remodeling of the ECM via inhibition of MMPs (Visse and Nagase, 2003;Ries, 2014) and were recently identified as hub genes (i.e., highly connected genes) within gene networks in the nucleus accumbens that were specifically associated with OUD (Seney et al., 2021).
Possibly, chronic opioid use accompanied by periods of withdrawal induces the release of pro-inflammatory cytokines, in turn activating TLR2 and TIMP1, leading to remodeling of the ECM and altering synaptic plasticity and function. Activation of pro-inflammatory cascades by opioids is likely regulated by microglia, as cell-type-specific enrichment of markers demonstrates a potential primary role for microglia associated with OUD in the dorsolateral prefrontal cortex and nucleus accumbens (Seney et al., 2021). Notably, the same study found enrichment of integrin signaling pathways in OUD, suggesting integrins could be involved in the migration of microglia and/or the adherence of the ECM to microglia and neurons. Collectively, these findings provide strong support for the involvement of the ECM and microglia-dependent neuroinflammation (Shen et al., 2022) in OUD. Future studies combining new single nuclei sequencing technologies with histochemical approaches will be critical for further investigating the potential role of microglia and other cell types in inflammation and ECM remodeling related to OUD in the human brain. Other studies provide additional support for an important, functional role of microglia in OUD. Pharmacological inhibition of microglia (e.g., via AV411 compound, minocycline, or ibudilast) in rodent models reduces opioid seeking and reward-related behaviors and attenuates the subjective measures of opioid withdrawal in humans (Hutchinson et al., 2008;Bland et al., 2009;Schwarz et al., 2011;Arezoomandan et al., 2016;Cooper et al., 2016;Pan et al., 2016). Reactivity of microglia to opioids may depend on "off-target" binding of opioid metabolites (e.g., morphine-3-glucuronide) to the toll-like receptor 4 (TLR4), initiating intracellular cascades involved in pro-inflammatory cytokine release and activation of the canonical NF-κB pathway that regulate opioid reward-related and analgesia behaviors (Zhang et al., 2017, 2020;Green et al., 2022). For example, opioid-induced hyperalgesia and the development of tolerance depend on the release of the cytokine interleukin-33 (IL-33) by astrocytes in the brain and spinal cord (Hu et al., 2021). The release of IL-33 into the extracellular space activates astrocytes and microglia through NF-κB-dependent signaling (Molofsky et al., 2015), and IL-33 was recently identified as a modulator of microglia-dependent degradation of the ECM (Nguyen et al., 2020). Therefore, IL-33 is one molecular intermediary by which microglia induce ECM remodeling and promote synaptic plasticity in an experience-dependent manner. Other pro-inflammatory cytokines may also be involved in opioid-induced synaptic and behavioral plasticity, including TNF and interferon alpha (Wang et al., 2018;Hofford et al., 2019;Seney et al., 2021). In addition to cytokines, specific substrates of the ECM are involved in microglial activation and neuronal functions related to opioid actions. The formation and integrity of perisynaptic ECM scaffolds and PNNs are regulated by microglia (Crapser et al., 2020a,b, 2021;Strackeljan et al., 2021). PNNs are located proximal to both neurons and glial cells, and in some cortical regions of the brain form dense nets that surround GABAergic interneurons (Kosaka and Heizmann, 1989;Brückner et al., 1993;Sorg et al., 2016;Jorgensen, 2021).
In rats, PNNs are significantly reduced in the medial prefrontal cortex and nucleus accumbens following extinction from heroin operant self-administration behavior (Van den Oever et al., 2010). Specifically, the ECM proteins tenascin-R (TNR) and brevican (BCAN) are downregulated during heroin abstinence, yet are upregulated in the medial prefrontal cortex and nucleus accumbens in response to cue-induced reinstatement (Van den Oever et al., 2010). TNR and BCAN are preferentially expressed in PNNs that surround GABAergic interneurons in the medial prefrontal cortex and nucleus accumbens. Following reinstatement of heroin-seeking behavior, these GABAergic interneurons displayed elevated spiking activity and enhanced inhibition of pyramidal neurons in the medial prefrontal cortex (Van den Oever et al., 2010). Therefore, TNR and BCAN may be key proteins of the ECM signaling pathways involved in the function of PNNs that modulate GABAergic cell activity during opioid reward-related behaviors and long-term abstinence from opioids, and are potentially involved in opioid craving and relapse (Van den Oever et al., 2010;Xue et al., 2014;Favuzzi et al., 2017;Roura-Martínez et al., 2020). Taken together, there is a complex interplay between microglia, ECM, and neuroinflammation, and further studies examining these interactions related to OUD could be valuable for identifying new approaches for developing effective therapeutics. FUTURE INVESTIGATIONS INTO BIOLOGICAL SEX AS A POTENTIAL MEDIATOR OF EXTRACELLULAR MATRIX REMODELING AND SYNAPTIC PLASTICITY IN OPIOID USE DISORDER Susceptibility to OUD and the severity of the related symptoms are the result of a complex interplay of biological and psychosocial factors. Earlier studies describe sex-specific differences in the frequency of use of opioids and the prevalence of clinical diagnoses of OUD. For example, higher rates of opioid use and OUD have been reported in men compared to women (Lee and Ho, 2013), although women may have accelerated progression from initial use to dependence (Kosten et al., 1993;Brady and Randall, 1999). Additionally, a higher risk and frequency of opioid overdose and a greater propensity to use heroin have been described in men compared to women, while women may be more likely to misuse prescription opioids (Parlier-Ahmad et al., 2021). Comorbid psychiatric disorders, such as major depression, are also more prevalent in women compared to men diagnosed with OUD (Parlier-Ahmad et al., 2021). Preclinical rodent models of opioid-related behaviors support sex-specific effects in opioid seeking, craving, and relapse, along with opioid withdrawal. Indeed, female rats tend to acquire morphine or heroin self-administration behavior more quickly and display higher motivation to self-administer opioids compared to males (Lynch and Carroll, 1999;Cicero et al., 2003;Roth et al., 2004). Oxycodone self-administration is also significantly greater in female than male rats for both oral (Sharp et al., 2021) and intravenous self-administration (Kimbrough et al., 2020). Further, female rats exhibit higher sensitivity to the rewarding effects of morphine at far lower doses (Karami and Zarrindast, 2008). During withdrawal from opioids, female and male rats exhibit similarly elevated somatic symptoms (i.e., foot licks, grooming, and writhing) for nearly 48 h following opioid cessation (Gipson et al., 2021). Only increased body temperature was specific to female rats during withdrawal relative to males (Gipson et al., 2021).
However, other studies report somatic opioid withdrawal symptoms of greater severity and duration in male rodents (Cicero et al., 2002;Diaz et al., 2005). Activity of mu and/or kappa opioid receptors may also be involved in sex-specific effects of opioids (Barrett et al., 2002;Negus et al., 2002). Given this evidence highlighting sex as a critical factor in OUD and opioid actions, more studies are required to investigate the sex-specific cellular and molecular mechanisms involved in opioid reward, treatment response to opioids, and the development of dependence and tolerance to opioids. Few studies have directly examined the role of sex in ECM signaling in response to opioids or in association with OUD. Sexual dimorphism in ECM signaling pathways that regulate synaptic plasticity and neuroinflammation has been found in fish, birds, mice, and humans. In zebrafish, gene expression patterns for genes associated with the production of ECM signaling proteins are overrepresented in males (Wong et al., 2014), while sex differences in the number and formation of PNNs are observed in zebra finches (Cornez et al., 2015). Sex-specific transcriptomic differences are found in mouse sensory neurons, specifically in genes related to neurotransmission, inflammation, and ECM reorganization, suggesting potential sex differences in susceptibility to neuroinflammation and ECM remodeling (Mecklenburg et al., 2020;Batzdorf et al., 2022), as well as OUD (Cahill and Taylor, 2017;Jang et al., 2020). In humans, differences in ECM signaling markers and remodeling are also found in blood serum depending on age and sex, irrespective of disease-related factors (Kehlet et al., 2018). While studies of sex-specific effects in the ECM are sparse, they are particularly relevant considering the sexual heterogeneity in OUD. CONCLUSION In this review, we highlight the current understanding of the interactions between ECM signaling, neuroinflammation, and synaptic plasticity, as they contribute to opioid seeking, craving, and relapse behaviors. Overall, there is a need for additional research investigating the potential role of biological sex at the intersection of ECM signaling and remodeling, synaptic plasticity, neuroinflammation, and opioids. Targeting specific ECM signaling proteins (e.g., MMPs and CAMs) during opioid administration and/or withdrawal could be a viable therapeutic approach. Preclinical models of opioid self-administration, opioid tolerance, and withdrawal, as well as pain and analgesia, provide tractable approaches that can give depth to the potential roles of the ECM in opioid-related neurobiology and behavior. The inclusion of sex as a biological variable in these studies should aid in the discovery of novel therapeutic targets for the treatment of opioid dependence and OUD, while also supporting more inclusive options for interventions and therapeutics. AUTHOR CONTRIBUTIONS MHR, BRW, MKK, CDB, and RWL conducted a review of the literature and wrote the manuscript. All authors contributed to the article and approved the submitted version. FUNDING This work was supported by the NHLBI T32HL007224 to MHR, and NIDA R01DA051390 and NHLBI R01HL150423 to RWL.
2022-06-09T13:30:05.159Z
2022-06-09T00:00:00.000
{ "year": 2022, "sha1": "53d03fadbddfcac3dff20981d5d98cfd88070322", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "53d03fadbddfcac3dff20981d5d98cfd88070322", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
238770799
pes2o/s2orc
v3-fos-license
Global Dynamics and Optimal Design of a New Highly Efficient Nonlinear Energy Sink Based on Magnetic-Elastic Impacts with Negative Stiffness
A new highly efficient elastic-impact bistable nonlinear energy sink (EI-BNES) based on magnetic-elastic impacts with negative stiffness and bistability is proposed and optimized through global dynamical analysis. The EI-BNES has better robustness and higher energy dissipation rates, above 96.5%, under broadband impulsive excitations than traditional cubic NESs and single bistable NESs. The structure of negative stiffness impacts is realized by a suitable layout of permanent ring magnets and springs. A two-degree-of-freedom (two-DOF) elastic-impact system is established to describe the coupled nonlinear interaction between the main structure and the attached EI-BNES. A global Melnikov reduction analysis (GMRA) is proposed to study the global dynamics and homoclinic bifurcations of the reduced two-dimensional subsystem; it is used to explain the mechanism of nonlinear targeted energy transfer (TET) and to detect the threshold of impulsive amplitudes of the EI-BNES separating in-well motions from compound motions between in-well and cross-well resonance responses. A special type of saddle-center equilibrium point is also found in the non-smooth system of the EI-BNES and can be used to effectively increase the energy dissipation rates. The optimal design criterion of the tuned EI-BNES for better dissipation performance is also discussed for the first time, based on the GMRA and numerical techniques for calculating the Melnikov function of non-smooth systems. The effectiveness of the analytical GMRA is verified by numerical simulations. Introduction Structural vibration problems are common in mechanics, aerospace, civil engineering, and other fields. They generally have negative effects on mechanical operations and structural responses. For example, chatter instability, which may occur during tool turning, often degrades machining quality [1]; the engines fixed on the wings of an aircraft are prone to produce unwanted vibrations and noise during operation, which reduces flight comfort [2]; and excessive vibrations of building structures threaten life and property under wind, earthquake, or other dynamic loads [3]. Hence, how to absorb vibrations efficiently and dissipate vibration energy quickly, so as to reduce the harmful effects of vibrations, is a central issue in the field of vibration control. In order to solve structural vibration problems effectively, Frahm [4] first proposed the concept of the dynamic vibration absorber, also known as the tuned vibration absorber (TVA). The dynamic vibration absorber connects a small mass to a main structure by a linear spring with appropriate stiffness. Other related linear dissipation devices have also been studied in [5-8]. Linear vibration absorbers are effective for vibration reduction only at a single, a priori known natural frequency of the main structure. Hence, how to employ nonlinearity in dissipation devices to expand the frequency bandwidth of vibration suppression and improve robustness has long attracted the attention of researchers.
In the mid-20th century, Roberson [9] connected a small mass by springs with linear and cubic nonlinear stiffness to a single-degree-of-freedom (SDOF) main structure under sinusoidal excitations and found that the nonlinear vibration absorber has a wider vibration reduction bandwidth than the TVA. Furthermore, Gendelman et al. [10] and Vakakis et al. [11] proposed a class of passive dissipation structures with cubic nonlinear stiffness and linear damping and found that the impulsive excitation energy of the main structure can be transferred to the additional lightweight structure in one direction and locally dissipated by the damping. This phenomenon is called TET, and these nonlinear attachments are named NESs. Since then, more and more scholars have carried out studies on the design, dynamical analysis, experiments, and applications of NESs; one can refer to the monograph [12] and the recent comprehensive reviews [13,14] for the latest research progress. It is well known that cubic NESs have low energy dissipation rates under ultra-low and ultra-high input impulsive excitations [15,16]. In order to further improve the vibration reduction performance of NESs and broaden the effective range of impulse magnitudes, many scholars have studied NESs with bistable characteristics since 2014. By compressing linear springs attached to the main structure to realize negative linear and nonlinear stiffness, AL-Shudeifat [17] proposed a bistable NES and showed that it has more efficient vibration absorption performance than cubic NESs under impulsive loads. In addition, the detailed dynamics of bistable NESs with negative linear stiffness and cubic stiffness under impulse excitations have been studied analytically in [18], and numerical techniques were employed to study the strongly modulated and chaotic cross-well oscillations in [19]. On this basis, Romeo et al. [20] further studied the transient and chaotic low-energy transfers in a bistable NES coupled to a linear oscillator. Habib and Romeo [21] established a tuned bistable NES (TBNES) connected to one of two coupled symmetric linear oscillators and showed that the absorption capability of the TBNES is higher than that of the TMD and the purely cubic NES for a wide range of impulsive loads. Qiu et al. [22] employed the complexification-averaging method to study the TET and optimal design of a bistable NES with negative linear stiffness and cubic stiffness under periodic excitations. Yang et al. [23] studied enhanced TET for adaptive vibration suppression of a fluid-conveying pipeline. In addition, Yao et al. [24,25] studied NESs with bistable and multistable characteristics for rotor systems, respectively. Many scholars have also exploited the advantage of fast energy dissipation under impacts, leading to many studies on NESs with impact characteristics. In the study of structural protection against earthquakes, a vibro-impact nonlinear energy sink (VI-NES) was proposed in [26] and shown to passively transfer the seismic energy of the main structure to the additional nonlinear attachment. In addition, Lee et al. [27] studied the periodic orbits and frequency-energy orbits of elastic and inelastic VI-NESs and showed that VI-NESs have the advantage of being able to absorb and dissipate most of the energy in the main structure on a sufficiently short time scale.
The dynamics of VI-NESs have also been studied analytically by means of the multi-scale method and power series expansion in [28]. AL-Shudeifat et al. [29] proposed an asymmetric non-smooth NES and verified the high-efficiency performance of a unilateral VI-NES under impulse excitations. Furthermore, Gendelman et al. [30] studied the dynamics and energy transfer of a periodically forced VI-NES and showed that the strongly modulated response contains a random distribution of resonant periodic and non-resonant motions. Gourc et al. [31] built a new type of VI-NES and showed that the periodically forced VI-NES is effective for TET. A two-DOF parallel VI-NES under periodic and transient excitations was studied in [32] to show that vibration control is more effective when the VI-NES is activated with two impacts per cycle. A new VI-NES combining cubic stiffness and bilateral barriers was analyzed by the harmonic balance method in [33]. Qiu et al. [34] presented an optimal design criterion for VI-NESs for vibration control under periodic and transient excitations and verified that a parallel VI-NES designed by this standard can achieve the best energy transfer in a certain frequency range. Considering the current developments of NESs, it is possible to combine the double advantages of bistability and impact to design a new highly efficient NES. Furthermore, how to develop analytical methods for the optimal design of NESs and the realization of highly efficient TET also has important value for engineering applications. Therefore, a new, more efficient EI-BNES combining the characteristics of impacts and bistability is first designed in this paper by means of magnetic-elastic impacts with negative stiffness. Based on the geometric explanation of the Hamiltonian dynamics in terms of the slow manifolds of the dynamics, a new analytical GMRA to enhance TET in the damped system is further presented. The highlights are listed as follows. Firstly, the EI-BNES has better robustness and higher energy dissipation rates than traditional cubic NESs and single bistable NESs. Furthermore, the structure of negative stiffness impacts is realized by a suitable layout of permanent ring magnets and springs, and the main process and results are shown in detail in Theorem 1. In addition, the GMRA is proposed to study the global dynamics and homoclinic bifurcations of the reduced two-dimensional subsystem. Special saddle-center equilibrium points, which can be used to effectively increase the energy dissipation rates, are also found in the non-smooth impact system of the EI-BNES. Finally, the Melnikov function is calculated by a numerical method and used for the first time to discuss the optimization of the EI-BNES system. The effectiveness of the analytical GMRA is also verified by numerical simulations. The structure of this paper is as follows. In the second section, the new highly efficient EI-BNES is proposed; furthermore, the energy dissipation rates and dynamical analysis of the EI-BNES and the realization of the negative stiffness impacts are discussed. In the third section, the GMRA is proposed to study the mechanism of TET and homoclinic bifurcations of the reduced two-dimensional subsystem. In the final section, the Melnikov function is solved by a combination of implicit functions and numerical methods and used to detect the chaotic thresholds of the homoclinic bifurcations of the system.
Furthermore, the Melnikov function and the chaotic threshold curve are used to optimize the EI-BNES model, and the effectiveness of the GMRA for optimal design is verified by numerical simulations. Model of the EI-BNES In this section, the new highly efficient EI-BNES, inspired by a bilateral elastic-impact oscillator studied in [35], is presented. The model under investigation consists of a linear oscillator (LO) with mass m_1, a viscous damper c_1, a spring k_p, and a mass m_2 attached to the mass m_1 as the EI-BNES. The mass m_2 is vertically connected with a pair of symmetric linear k_1-springs and slides on a thin rod which is fixed on the main structure m_1, with viscous damping of coefficient c_2. When the mass m_2 is at the center of the rod, the k_1-springs are compressed and their length is l. Furthermore, at the instant of the elastic impacts between m_2 and the k_2-springs, the k_1-springs are in their natural state, neither compressed nor stretched; their natural length is denoted by L. Linear symmetric k_2-springs are wound around the two ends of the thin rod. The model is shown in Figure 1. When an impulsive excitation X is applied to the main oscillator m_1, the mass m_2 will move from the left stable state. If the relative displacement between the main oscillator m_1 and the mass m_2 reaches the value a or −a, an elastic impact happens between the mass m_2 and the corresponding k_2-spring. When the mass m_2 moves to the center of the rod, it is in an unstable state because the k_1-springs are compressed. In addition, bistability appears because of the combined influence of the springs with stiffness coefficients k_1 and k_2. In this process, it is assumed that the mass m_2 will not impact the leftmost or rightmost ends of the thin rod; namely, there are no rigid impacts. The viscous damping with coefficient c_2 also acts while the mass m_2 slides on the thin rod. The coordinate origin is selected at the center of the EI-BNES frame shown in Fig. 1. According to Newton's second law and the geometric configuration of the k_1-springs, the dynamical equations of the mass m_1 and the lightweight mass m_2 with a piecewise-smooth restoring force are described as follows:

m_1 ẍ_1 + c_1 ẋ_1 + k_p x_1 − c_2 (ẋ_2 − ẋ_1) − F(x_2 − x_1) = 0,
m_2 ẍ_2 + c_2 (ẋ_2 − ẋ_1) + F(x_2 − x_1) = 0,    (1)

where x_1 and x_2 are the displacements of the main oscillator m_1 and the mass m_2, a = √(L² − l²), the dots denote derivatives with respect to the time t, and the geometry of the k_1-springs together with the clearance a of the k_2-springs gives the piecewise-smooth restoring force

F(z) = 2 k_1 z (1 − L/√(l² + z²)) + k_2 (z − a sgn(z)) H(|z| − a),    (2)

with z = x_2 − x_1 and H the Heaviside step function. Energy dissipation rates of the EI-BNES For impulsive excitations applied to the LO with different magnitudes X, namely, with the initial velocity of the LO being ẋ_1(0) = X, the energy dissipation rate E_NES(t) of the EI-BNES on the time interval [0, t] can be analytically obtained in the following form:

E_NES(t) = (∫₀ᵗ c_2 (ẋ_1(s) − ẋ_2(s))² ds / (½ m_1 X²)) × 100%.    (3)

It is well known that achieving high dissipation rates in a short time under broadband impulsive excitations is an important index for evaluating whether NESs have good vibration reduction effects. Therefore, based on energy absorption efficiency, energy absorption speed, and the range of impulsive excitations, the vibration reduction effect of the EI-BNES proposed in this paper is evaluated, and the parameters k_p = 1, c_1 = 0.001, c_2 = 0.01, k_1 = 0.2, L = 1, l = 0.9, m_1 = 1, m_2 = 0.05 are chosen in the following numerical simulations. The energy dissipation rates E_NES(t) of the EI-BNES under different impulsive inputs X applied to the LO for t = 100 s and t = 200 s are first studied.
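To make the model and the dissipation measure concrete, the following is a minimal numerical sketch, not the authors' code: it integrates the two-DOF system (1) with the piecewise-smooth restoring force (2), as reconstructed above from the described spring geometry, and evaluates E_NES(t) from equation (3). The integrator, tolerances, grid sizes, and function names are illustrative assumptions, and the initial state places m_2 in the left stable well at z = −a, as described in the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameter values quoted in Section 2.2.
m1, m2 = 1.0, 0.05
kp, c1, c2, k1 = 1.0, 0.001, 0.01, 0.2
L, l = 1.0, 0.9
k2 = -0.05                      # negative-stiffness impact springs
a = np.sqrt(L**2 - l**2)        # impact clearance, a = sqrt(L^2 - l^2)

def F(z):
    """Piecewise-smooth restoring force (2) for relative displacement z = x2 - x1.
    The pair of transverse k1-springs gives the geometrically nonlinear
    (bistable) term; the k2-springs engage only beyond the clearance |z| >= a."""
    f = 2.0 * k1 * z * (1.0 - L / np.sqrt(l**2 + z**2))
    if abs(z) >= a:
        f += k2 * (z - np.sign(z) * a)
    return f

def rhs(t, y):
    """Equations of motion (1) in first-order form: y = [x1, x1', x2, x2']."""
    x1, v1, x2, v2 = y
    fz = F(x2 - x1)
    a1 = (-c1 * v1 - kp * x1 + c2 * (v2 - v1) + fz) / m1
    a2 = (-c2 * (v2 - v1) - fz) / m2
    return [v1, a1, v2, a2]

def E_NES(X, t_end=100.0, n=20001):
    """Percentage of the input energy 0.5*m1*X^2 dissipated by the NES damper c2,
    i.e. equation (3) evaluated by quadrature along the simulated trajectory."""
    t = np.linspace(0.0, t_end, n)
    y0 = [0.0, X, -a, 0.0]      # LO kicked with velocity X; m2 in the left well
    sol = solve_ivp(rhs, (0.0, t_end), y0, t_eval=t, rtol=1e-8, atol=1e-10)
    v_rel = sol.y[1] - sol.y[3]
    return 100.0 * np.trapz(c2 * v_rel**2, t) / (0.5 * m1 * X**2)

print(f"E_NES(t = 100 s) for X = 0.5: {E_NES(0.5):.1f} %")
```

Because F is only piecewise smooth at |z| = a, a production implementation would normally pass event functions to solve_ivp to locate the impact surfaces accurately; the plain integration above is adequate for a qualitative check.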
The energy dissipation rates E_NES(t) of the EI-BNES are discussed when the stiffness coefficient k_2 is selected as positive, zero, and negative, respectively. Specifically, k_2 = −0.05, k_2 = 0, and k_2 = 0.05 are chosen to study the energy dissipation rates E_NES(t) within 100 s and 200 s under different impulsive excitations X; the results are shown in Figure 2(a) and Figure 2(b), respectively. When k_2 = −0.05, the energy dissipation rates of the EI-BNES are significantly higher than in the cases k_2 = 0 and k_2 = 0.05 under the different impulsive excitations X, as seen in Fig. 2. When the impulsive excitations X of the LO are low-level inputs, namely X = 0 ∼ 0.4, the negative k_2 maintains higher dissipation rates than the positive and zero stiffness. The maximum difference of the dissipation rates among negative, positive, and zero stiffness is even as high as about 40%. When the impulsive excitations X of the LO are medium inputs, namely X = 0.4 ∼ 0.7, the dissipation rate curves are almost the same under the three different stiffness conditions; however, the curve under negative k_2 is still slightly better than the other two cases. When the impulsive excitations X of the LO are at a high energy level, namely X = 0.7 ∼ 1, the dissipation rate curves under the three conditions all begin to decrease, but the drops of the curves in the cases of positive and zero stiffness are obviously larger than in the case of negative stiffness. For the range of low impulsive inputs, we can also clearly observe large-amplitude, rapidly varying oscillations of the dissipation rate curves of the EI-BNES under positive and zero stiffness, whereas the curve under negative stiffness changes only slightly, which shows the great robustness of the EI-BNES for different levels of impulsive inputs. Comparing Fig. 2(a) with Fig. 2(b), when the stiffness coefficient k_2 is chosen as positive or zero and the impulsive excitations are X = 0 ∼ 0.4, the energy dissipation rates E_NES(t) for t = 200 s are significantly higher than those for t = 100 s; the maximal difference of the dissipation rates between the two time instants can even reach about 15%. It can be seen that when the time t is slightly larger than 100 s, the value of E_NES(t) is still increasing; that is to say, dissipation is still taking place in the EI-BNES when the coefficient k_2 is positive or zero. However, the dissipation rates of the EI-BNES change only slightly between t = 100 s and t = 200 s when k_2 is negative, which means both the EI-BNES and the LO have almost reached their stable states at t = 100 s, and the presented EI-BNES with negative stiffness can even complete the vibration reduction in a shorter time. In order to observe clearly the level of dissipation rates in the presented EI-BNES with negative stiffness, the energy dissipation rates for k_2 = −0.05 within 100 s under different impulsive excitations X are displayed in Figure 3, which shows that more than 96.5% of the initial input energy is dissipated by the damped EI-BNES. How to realize the negative stiffness impacts in a physical model will be discussed in the next subsection; in fact, it is easy to explain the reason for the higher energy dissipation within shorter times in a physical sense.
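Reusing the rhs and E_NES helpers from the sketch above, a short sweep illustrates the three-way comparison of Figure 2, i.e., dissipation rate versus impulse magnitude X for negative, zero, and positive k_2; the grid resolution and the summary statistics printed here are arbitrary choices, not values from the paper.

```python
# Sweep over impulse magnitudes for the three k2 cases compared in Figure 2.
X_grid = np.linspace(0.05, 1.0, 20)
for k2_val in (-0.05, 0.0, 0.05):
    k2 = k2_val                 # rebinds the module-level k2 read by F()
    rates = [E_NES(X) for X in X_grid]
    print(f"k2 = {k2_val:+.2f}: worst {min(rates):5.1f} %, mean {np.mean(rates):5.1f} %")
k2 = -0.05                      # restore the baseline value used elsewhere
```

The expectation, per the discussion above, is that the negative-stiffness case stays uniformly high across the grid, while the zero and positive cases oscillate strongly at low X and fall off at high X.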
How the negative stiffness impacts can be realized in a physical model will be discussed in the next subsection; in fact, the reason for the higher energy dissipation within shorter times is easy to explain in a physical sense. When the mass m2 and the k2-spring impact instantaneously, a k2-spring with negative stiffness gives the mass m2 a force in the same direction as its own motion and causes a larger velocity difference between the LO and the EI-BNES. Therefore, according to equation (3), a higher dissipation rate is indeed obtained in the EI-BNES in this case. Next, the changes of the energy dissipation rates E_NES(t) with time t are analysed by taking X = 0.2, X = 0.5 and X = 0.8 as representatives of the low, medium and high impulsive regions, respectively. The stiffness coefficient is k2 = −0.05 and the other parameters are the same as above. The changes of the dissipation rates E_NES(t) with time t are presented in Figure 4. For X = 0.2, X = 0.5 and X = 0.8, the energy dissipation rates of the EI-BNES tend to become stable at about t = 90s, t = 50s and t = 60s, respectively, as shown in Fig. 4, and in all three cases E_NES(t) is nearly 100%. That is to say, most of the energy of the model has been dissipated by the damped EI-BNES, which directly shows one-way TET from the LO to the EI-BNES. Moreover, when the impulsive excitation X is at the medium level, the energy dissipation process is completed faster, in less than one minute.

Next, we discuss which value of k2 gives the EI-BNES the best overall energy dissipation effect over the impulsive interval X = 0 ∼ 1. The variation range is set to k2 = −0.08 ∼ −0.01, with the other parameters the same as above. The relationships among the different impulsive excitations X, the energy dissipation rates E_NES(t) and the different stiffness coefficients k2 for t = 100s are shown in Figure 5(a) and Figure 5(b). It is found that the energy dissipation effect of the EI-BNES is better at k2 = −0.05, shown by the red line in Fig. 5(a), for the different impulsive excitations X applied to the LO. Therefore, how to develop analytical methods that reveal the TET mechanism and optimize the tuned EI-BNES to achieve the highest dissipation rates and the fastest dissipation process is a challenging topic. These questions are directly related to the presented GMRA and to the model optimization, which are discussed in detail in part 4.2.

Design of the model with negative stiffness impacts

According to the EI-BNES of section 2.1 and the analysis of the energy dissipation rates in section 2.2, the dissipation rates of the EI-BNES can all reach 96.5% and the EI-BNES completes the vibration reduction process within 1 minute when the coefficient is k2 = −0.05 N/m and the impulsive inputs are X = 0 ∼ 1 m/s. How to realize negative stiffness impacts in a real physical model is a new problem. In this paper, the realization of the negative stiffness impacts is discussed in detail; it is inspired by the design of a magnetic vibration absorber with tunable stiffness presented in [36]. The specific realization model is shown in Figure 6. The mass m2 is vertically connected with a pair of symmetric linear k1-springs and slides on the thin rod; viscous c2-damping is also considered in the sliding process. At the left (or right) side of the mass m2, a pair of symmetric linear k3-springs is connected to a wood block of negligible size and mass. The wood block passes through the thin rod and can also slide on it. When the mass m2 is at the center of the rod, the k1-springs are compressed to length l.
Furthermore, when the elastic impacts between m2 and the wood block happen instantaneously, the k1-springs and the k3-springs are both in their original state, neither compressed nor stretched; the original lengths of the k1-springs and the k3-springs are denoted as L and l, respectively. At each of the four corners of the frame, three permanent magnets with paramagnetic poles are embedded. A permanent magnet is inlaid at each of the leftmost and rightmost sides of the rod. In addition, the two sides of the mass m2 are each inlaid with a permanent magnet. The magnetic poles of the permanent ring magnets at all positions are shown in Fig. 6 (black and white show different magnetic poles). The width of each permanent magnet is L* and the horizontal distance from the center of the three permanent magnets to the center of the mass m2 is R. The distance between the center of the permanent magnet at the leftmost (rightmost) side of the bar and the center of the three permanent magnets embedded at the upper or lower corner of the leftmost (rightmost) end of the EI-BNES frame is d. The height of the permanent magnets is D1, the width of the rod is D2 and the distance between the moving mass m2 and its origin is x, where r = R + L*. The mass m2 will move from the left stable state under a certain impulsive excitation. If the displacement x of the mass m2 reaches a certain constant a or −a, an elastic impact happens between the mass m2 and the wood block. When the mass m2 moves to the center of the rod, it is in an unstable state because the k1-springs are in compression. In addition, bistability also appears owing to the influence of the springs with stiffness coefficients k1 and k3 and of the magnetic force. When the mass m2 compresses the wood block to the left (right), the permanent magnet embedded in the leftmost (rightmost) side of the EI-BNES frame acts on the permanent magnet embedded on the left (right) side of the mass m2. In this process, it is assumed that the wood block does not impact the permanent magnets at the leftmost and rightmost sides of the thin rod. The coordinate origin is selected at the center of the frame of this model shown in Fig. 6. According to Newton's second law and geometric analysis, the restoring force induced by the permanent magnets and the k3-springs is obtained as in equation (4), where K1 and K3 are the stiffness coefficients of the permanent-magnet forces presented in [36].

The derivation of the negative stiffness coefficients

Theorem 1. The aforementioned restoring force f(x) can be written in the stated simplified form if the stiffness coefficient of the k3-spring is taken as k3 = −K3 l² for K3 < 0.

Proof: Only the case x ≥ a is discussed; equation (4) then becomes equation (6). The Taylor expansion of the third term of equation (6) is carried out at x = a. If K3 < 0 and the stated relation for k3 is considered, then f(x) can be approximately rewritten in the simplified form, and the case x ≤ a can be discussed similarly, which gives the final restoring force.

Remark 1. Theorem 1 has shown that if the stiffness coefficients K1 and K3 are both negative, that is, K1 < 0 and K3 < 0, then the negative stiffness impacts can be realized. Next, we discuss whether the negative stiffness coefficients K1 and K3 can be obtained by adjusting the parameters of the model.
According to the above analysis, it is easy to find that the repulsive force on the left-side permanent magnet of the mass m2 can be adjusted through the length r. Similarly, the attractive force on it can be adjusted through the lengths R and d. Because r = R + L* and L* is a fixed value, only modifications of r and d need to be considered in order to adjust the coefficients K1 and K3 such that K1 < 0 and K3 < 0. Next, numerical simulations are used to search for the magnetic force coefficients K1 and K3 under different parameters r and d. The parameters L*, D1, D2, mr, mc, u0 and m* are listed in Table 1: u0 is the vacuum permeability, mr is the magnetic moment of the repulsive magnet, mc is the magnetic moment of the attracting magnet, and m* is the magnetic moment of the permanent magnet inlaid on the mass m2. The variations of the magnetic coefficients K1 and K3 with r and d are shown in Figure 7(a) and Figure 7(b), respectively. As Fig. 7 shows, the coefficients K1 and K3 can each be either positive or negative. Furthermore, the parameters K1 and K3 can both take relatively large negative values; that is to say, a relatively small positive k3 can be obtained from the relation k3 = −K3 l². In this case, the value of k3 is also in line with the other spring stiffness coefficients. Even more fortunately, K1 = −0.05 N/m can be obtained, which agrees with the discussion of the negative stiffness impacts in section 2.2. The concrete parameters K1 and K3 under the two conditions are shown in Table 2. According to the above analysis, the cubic term generated by the magnetic force and the cubic term produced by the k3-springs cancel each other, so only the linear term generated by the magnetic force remains. When r = 0.171 m, d = 0.049 m and K1 = −0.05 N/m, combined with the discussion of the motion equations in section 2.3, the model can realize negative stiffness impacts. Therefore, the feasibility of the negative stiffness impacts of the EI-BNES is fully verified.

Dynamic analysis of the EI-BNES

Next, the time histories of the LO and the EI-BNES, the phase portraits of the EI-BNES and the corresponding frequency spectra are presented for changing input impulsive loads, to show the internal resonant capture of the nonlinear TET from the LO to the EI-BNES. The parameters L = 1, l = 0.9, m1 = 1, m2 = 0.05, k2 = −0.05, kp = 1, c1 = 0.001, c2 = 0.01, k1 = 0.2 are selected. The impulsive excitations X are chosen as X = 0.05, X = 0.2, X = 0.5 and X = 0.8, respectively. Numerical simulations for 200s with the aforementioned four initial impulsive conditions are shown in Figure 8 to Figure 10. In Fig. 8(a), the LO and the EI-BNES almost stop vibrating at about 60s and 100s, respectively, and both finally tend to their equilibrium positions. It is also found, in Fig. 8(a) and Fig. 8(b), that the EI-BNES produces alternating in-well and cross-well oscillations in 0 ∼ 40s and vibrates around one of the two equilibrium positions in the following stage. Moreover, the EI-BNES can generally complete the energy dissipation process in no more than 60s, which shows that the vibration reduction process of the EI-BNES is fast. According to Fig. 8(c) and Fig. 8(d), the wavelet spectra show that the 1:1 resonance and the low-frequency resonance of the EI-BNES are excited, which increases the dissipation rates of the EI-BNES.
In Fig. 9(a), the LO and the EI-BNES almost stop vibrating at about 40s and 60s, respectively, and both tend to their respective stable positions. In addition, it is seen in Fig. 9(a) and Fig. 9(b) that the EI-BNES generates large-amplitude cross-well oscillations within about 0 ∼ 30s. According to the corresponding spectra in Fig. 9(c) and Fig. 9(d), the time-frequency analysis indicates that the LO and the EI-BNES have the same resonance frequency (1:1 resonance capture occurs between the LO and the EI-BNES). In Fig. 10(a) and Fig. 10(b), the LO and the EI-BNES almost stop vibrating at about 60s and 90s, respectively, and they also both tend to their respective stable positions. In addition, it is seen that the EI-BNES generates large-amplitude cross-well oscillations within about 0 ∼ 60s. The corresponding spectra in Fig. 10(c) and Fig. 10(d) also show that the LO and the EI-BNES have the same resonance frequency (1:1 resonance capture occurs between the LO and the EI-BNES).

Dimension reduction of the motion equations

The Melnikov method for non-smooth systems is an important analytical method [38][39][40][41][42][43][44] used to analyze the global chaos of piecewise-smooth systems. Hence, based on this mechanism, it is necessary to reduce the dimension of the EI-BNES from four to two and to analyse the sufficient conditions that induce transient chaos by means of the Melnikov function of non-smooth systems. Firstly, a change of variables is introduced (equations (11) and (12)); substituting these into equations (1) and (2) yields equations (13) and (14) for the cases |μ| ≤ a and |μ| > a, where the dots now denote derivatives with respect to τ. Next, the terms with ε, regarded as the perturbation parts of equations (13) and (14), are discarded, giving equations (15) and (16). The analytic solution of ẍ1 + x1 = 0 is easily obtained as equation (17). Substituting equation (17) into the second equation of (15) and of (16) gives equations (18) and (19) for |μ| ≤ a and |μ| > a, which, after a further substitution, become equations (21) and (22) and finally equations (24) and (25). The corresponding Hamiltonian functions of equations (24) and (25) are then obtained. Setting H(μ, ν) = 0 gives the corresponding orbits; furthermore, considering H(μ0, 0) = 0 leads to equation (28), where μ0 > 0 is found as the root of equation (28), and (μ0, 0) is the intersection point of the right half of the homoclinic orbit with the μ axis. Next, the potential energy function and the homoclinic orbits are simulated numerically to observe the dynamics of the reduced system. In Figure 11(a), the potential energy has a local maximum at μ = 0, so a global homoclinic bifurcation is produced, which implies a transition from in-well oscillation to cross-well oscillation; further, the potential energy increases as k2 decreases.

Numerical solutions of the Melnikov function

The four-dimensional system (1)–(2) has thus been reduced to the two-dimensional system (24)–(25). According to the analysis of the Melnikov function [42], the result in equation (29) is obtained; considering the parity of the trigonometric functions, equation (29) reduces to equation (30), whose detailed derivation is shown in Appendix A.
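The geometry of the reduced system described above can be visualized numerically. Since the explicit forms of equations (24)–(28) are not reproduced in this text, the sketch below assumes reduced dynamics of the form μ̈ + F(μ) = 0 with the same piecewise restoring force used in the earlier simulation sketch; under that assumption it builds the potential V(μ), locates the crossing μ0 of the right half homoclinic orbit with the μ axis (the root described by equation (28)), and traces the isoenergetic level set H = 0.

import numpy as np
from scipy.optimize import brentq

k1, k2, L, l = 0.2, -0.05, 1.0, 0.9
a = np.sqrt(L**2 - l**2)

def F(mu):
    """Assumed piecewise-smooth restoring force of the reduced system."""
    geo = 2.0 * k1 * mu * (1.0 - L / np.sqrt(l**2 + mu**2))
    imp = k2 * (mu - a * np.sign(mu)) if abs(mu) > a else 0.0
    return geo + imp

def V(mu):
    """Potential V(mu) = integral of F from 0 to mu (closed form)."""
    v = 2.0 * k1 * (mu**2 / 2.0 - L * (np.sqrt(l**2 + mu**2) - l))
    if abs(mu) > a:
        v += 0.5 * k2 * (abs(mu) - a)**2
    return v

# mu0 > 0 with V(mu0) = 0: crossing of the right half homoclinic orbit
# with the mu axis (H(mu0, 0) = 0, in the spirit of equation (28)).
mu0 = brentq(V, a + 1e-6, 10.0)
print(f"a = {a:.4f}, mu0 = {mu0:.4f}")

# H = 0 level set: nu = +/- sqrt(-2 V(mu)) for 0 <= mu <= mu0
mus = np.linspace(0.0, mu0, 400)
nus = np.sqrt(np.maximum(0.0, -2.0 * np.array([V(m) for m in mus])))
print(f"max |nu| on the homoclinic loop: {nus.max():.4f}")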
According to Melnikov's theorem, equations (31) and (32) are obtained; hence, equation (33) follows from equation (32) and, in view of equation (34), equation (33) becomes equation (35).

Optimization of the EI-BNES dynamical model

The EI-BNES is designed mainly for the characteristic of high dissipation rates. It is noticed that the dissipation rates E_NES(t) are related to whether the EI-BNES undergoes transient chaotic cross-well motion near the homoclinic orbit when the impulsive excitations X are at a low level. Furthermore, there is a definite relationship between the chaotic thresholds I and A in equation (35). It is easy to see that A corresponds to the impulsive excitation X of the LO and is given by equation (17). Once the parameters of the EI-BNES are selected, I is fixed according to equation (35); if A is then larger than I, chaotic motions occur. That is to say, the EI-BNES will exhibit chaotic motions under any impulsive excitation X > I. It has been found that I1 and I2 in equation (35) mainly relate to k1 and k2, while U mainly relates to k1. This means that I is mainly determined by k1 and k2; hence, the relationship between k1 and k2 must first be found in order to optimize the EI-BNES better and improve its vibration absorption effect.

Analysis of the relationship between k1 and k2

Next, we consider the EI-BNES as a single nonlinear oscillator to detect the relationship between k1 and k2 for bifurcations. The non-smooth conservative system describing the motion of the EI-BNES is given by equation (36), where the piecewise-smooth restoring force f(x) is given by equation (37). For convenience of discussion, the values k2 = −0.05, L = 1, l = 0.9 are fixed in this section. The restoring force f(x) for different k1 is illustrated in Figure 12 in order to detect the bifurcation values of k1. It can be seen from Fig. 12(a) and Fig. 12(b) that k1 = 0.025 (the yellow line) and k1 ≈ 0.13 (the blue line) are the bifurcation values at which the number of zero points of the restoring force f(x) changes from three to five and from five to three, respectively, as k1 varies. It is noted that the zero points x = 0, x = a and x = −a exist for any k1, and that there are two thresholds of k1 at which the other two zero points emerge or disappear; the accurate boundaries of k1 are detected by the following analytical technique. By observing the behavior of the critical yellow curve in Fig. 12(a), the critical value of k1 at which the number of zero points changes from three to five can be obtained by studying the limit of the restoring force f(x) as x → ±∞. When the case x > a is considered (x < a can be discussed in a similar way), the restoring force f(x) can be written out explicitly, and its limit lim x→+∞ f(x) = Ã exists if and only if k1 = −0.5k2, which is the bifurcation value at which the number of zero points changes from three to five. This bifurcation value coincides with the numerical simulation in Fig. 12(a) for the assigned k1 = 0.025. By observing the behavior of the critical blue curve in Fig. 12(b), the critical value of k1 at which the number of zero points changes from five to three is obtained by setting the right derivative f′+(a) = 0 or the left derivative f′−(−a) = 0, which yields the bifurcation value at which the number of zero points of the restoring force f(x) changes from five to three.
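Both thresholds can be checked symbolically under the same assumption on f(x) used in the earlier sketches (a geometric k1 term plus a k2 impact spring for x > a). The fact that this assumed form reproduces exactly the two values quoted here, k1 = −0.5k2 and k1* = −k2L²/(2(L² − l²)) ≈ 0.13 for k2 = −0.05, L = 1, l = 0.9, lends it some plausibility, but it remains an assumption rather than the paper's equation (37).

import sympy as sp

x, L, l = sp.symbols('x L l', positive=True)
k1, k2 = sp.symbols('k1 k2', real=True)
a = sp.sqrt(L**2 - l**2)

# Assumed restoring force for x > a (geometric k1 term + k2 impact spring)
f = 2*k1*x*(1 - L/sp.sqrt(l**2 + x**2)) + k2*(x - a)

# Threshold 1: f(x) stays bounded as x -> +oo iff the linear-in-x slope
# 2*k1 + k2 vanishes, i.e. k1 = -0.5*k2.
slope_at_inf = sp.limit(f/x, x, sp.oo)
print(sp.solve(sp.Eq(slope_at_inf, 0), k1))   # -> [-k2/2]

# Threshold 2: right derivative f'_+(a) = 0.
dfda = sp.simplify(sp.diff(f, x).subs(x, a))
k1_star = sp.solve(sp.Eq(dfda, 0), k1)[0]
print(sp.simplify(k1_star))                   # -> -k2*L**2/(2*(L**2 - l**2))
print(k1_star.subs({k2: sp.Rational(-5, 100), L: 1, l: sp.Rational(9, 10)}))
# -> 5/38, approximately 0.1316, matching k1* ~= 0.13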
The critical value k1* ≈ 0.13 likewise coincides with the numerical simulation in Fig. 12(b). Next, the changes of stability of these equilibrium points and the global dynamics of equation (36) are analysed and depicted in Figure 13. Before further analysis, we give a new definition of saddle-center equilibrium points and of the corresponding bifurcations, unique to non-smooth systems. In Fig. 13(a), the red solid lines represent the center equilibrium points, the black dotted lines represent the saddle-type equilibrium points, and the intersections of the red and black lines represent the saddle-center equilibrium points. Furthermore, the EI-BNES has bifurcations not only in the number of equilibrium points, but also in the stability and type of the equilibrium points. There are three equilibrium points when 0 ≤ k1 ≤ −0.5k2 (area A), five equilibrium points when −0.5k2 ≤ k1 ≤ k1* (area B) and three equilibrium points when k1 > k1* (area C). It is easy to observe that a saddle-center splitting bifurcation occurs at k1 = −0.5k2 and that a saddle-center and center convergence bifurcation occurs at k1 = k1*. Combining this with the actual physical model, the bifurcations of these equilibrium points and the corresponding dynamics are analysed in detail through phase portraits, by selecting specific values in the three areas; the portraits are displayed in Figure 13(b) to Figure 13(f). The concrete assignments for k1 are k1 = 0.02 in area A, k1 = 0.05 and k1 = 0.08 in area B, and k1 = 0.13 and k1 = 0.16 in area C. It is easy to find that (0, 0) always exists as a saddle-type equilibrium point and that (±a, 0) are saddle-center type equilibrium points for k1 < k1*. Two further center equilibrium points split off at the bifurcation value k1 = −0.5k2; they approach the saddle-center equilibria (±a, 0) as the parameter k1 increases and finally collide with (±a, 0), merging into center equilibria at (±a, 0) for k1 = k1*. Fig. 13(b) displays a representative phase portrait for k1 = 0.02, satisfying k1 < −0.5k2. There exist one saddle-type and two saddle-center equilibria, so the EI-BNES shows strong instability and sensitive dependence on the initial conditions. The main reason is the negative stiffness characteristic of the structure, due to the double role of impacts with negative stiffness and bistability. Once the displacement of the mass m2 satisfies |x| > a, the speed and displacement of the mass m2 diverge to infinity, so that the mass m2 risks hitting the leftmost or rightmost fixed boundary of the frame. This case must be avoided in our design of the EI-BNES; hence, we must ensure k1 > −0.5k2 in the following discussion. Fig. 13(c) and Fig. 13(d) respectively display two representative complicated phase portraits for k1 = 0.05 and k1 = 0.08, both satisfying −0.5k2 < k1 < k1*. There exist one saddle at (0, 0), two semi-stable saddle-center equilibria at (±a, 0) and two stable states as center-type equilibria located in the region |x| > a. A pair of homoclinic orbits, denoted type I, connecting the origin (0, 0) to itself, transversally crosses the two switching manifolds Σ± = {(±a, y) | y ∈ R} and acts as an isoenergetic separatrix, outside of which large-amplitude cross-well motions occur.
There also exists a pair of symmetric connecting orbits, denoted type II, in the region |x| > a, each homoclinic to one of the two semi-stable saddle-center equilibria (±a, 0). The region between the type-I and type-II homoclinic orbits is filled with periodic orbits traversing one of the switching manifolds Σ± with arbitrary period, and this area grows as k1 increases. The interior of the type-II homoclinic orbits is likewise filled with in-well periodic motions of arbitrary frequency. However, the area occupied by the periodic orbits satisfying |x| > a gradually shrinks towards (±a, 0) as k1 increases to k1*. These periodic orbits of arbitrary frequency provide the geometrical structure for multi-frequency internal resonance with the main structure, which has great potential to increase the energy dissipation rates through internal resonance capture and so to obtain a better energy dissipation effect in a passive EI-BNES. Furthermore, the instability of the saddle-center type equilibrium points at (±a, 0) causes a rapid increase in the velocity of the EI-BNES, which can be exploited to raise the energy dissipation rates of the EI-BNES. Fig. 13(e) and Fig. 13(f) respectively display two representative piecewise bistable phase portraits for k1 = 0.13 and k1 = 0.16, both satisfying k1 > k1*. There exist a saddle at (0, 0) and two center equilibria at (±a, 0), so that the EI-BNES is stable at x = ±a. There are two piecewise-smooth homoclinic orbits connecting (0, 0) to itself that are symmetric about the y-axis. Moreover, the interiors of the homoclinic orbits are filled with periodic orbits of arbitrary frequency circling the centers (±a, 0). In the same way, these periodic orbits can also increase the energy dissipation rates of the EI-BNES because of the potential multi-frequency internal resonance capture with the main structure LO. Considering the actual physical model, the stable states of the mass m2 occur at x = a and x = −a, which further illustrates that the effect of k1 plays the major role when the stiffness coefficient satisfies k1 > k1*, even though the negative stiffness k2 still acts.

Numerical simulations of the optimized EI-BNES

Next, according to the relationship k1 > −0.5k2 obtained from the above analysis, the values of k1 and k2 are modified to detect the conditions under which the EI-BNES achieves better energy dissipation effects. Since the energy dissipation in section 2 was already shown to be better at k1 = 0.2 with k2 = −0.05, this set of parameters is used to verify the effectiveness of the proposed GMRA. Furthermore, k2 = −0.15 and k2 = −0.23 are selected in turn to obtain the optimized values of k1 matching each k2, such that the EI-BNES has better energy dissipation effects. The thresholds of chaotic oscillation between A and k1 for the EI-BNES are shown in Figure 14 for k2 = −0.05, k2 = −0.15 and k2 = −0.23, respectively. Fig. 14 shows that the threshold function A of k1 attains a minimum for each assigned k2, implying that the Melnikov function itself is more sensitive to certain ranges of the spring coefficient than to others.
In particular, A reaches its minimum at k1 = 0.18, k1 = 0.31 and k1 = 0.41 when k2 = −0.05, k2 = −0.15 and k2 = −0.23 are assigned, respectively, implying that the Melnikov function is most influenced by these specific spring coefficients k1, so that the EI-BNES is most susceptible to homoclinic bifurcations and therefore more likely to exhibit cross-well oscillations, which is beneficial to the energy dissipation effect. For k2 = −0.05, the minimum of A is obtained at k1 = 0.18, which happens to be almost equal to the value k1 = 0.2 known to achieve the best energy dissipation effect. Therefore, the proposed GMRA and the Melnikov function may become an effective analytical method for the structural optimization of the EI-BNES in order to obtain higher energy dissipation effects. In addition, A attains its minimum at k1 = 0.31 and k1 = 0.41 when k2 = −0.15 and k2 = −0.23, respectively; increasing the magnitude of the negative stiffness k2 results in a decreased amplitude threshold for homoclinic bifurcation. Next, the energy dissipation effects for k2 = −0.15, k1 = 0.31 and for k2 = −0.23, k1 = 0.41 are compared with those for k2 = −0.05, k1 = 0.18, as simulated in Figure 15. It can be observed that the energy dissipation rates are high in all three cases. For medium impulsive excitations X, the energy dissipation effects are better when k1 = 0.41, k2 = −0.23; for low and high impulsive excitations X, they are better when k1 = 0.18, k2 = −0.05. Hence, once the value of k2 is selected, the GMRA can be used to detect the optimal range of k1. To achieve better vibration reduction effects, appropriate parameters k1 and k2 should also be chosen according to the actual impulsive excitation X of the environment.

Conclusions

In this paper, a new highly efficient EI-BNES based on magnetic-elastic impacts with negative stiffness and bistability is proposed and realized through a reasonable layout of permanent ring magnets and springs. The global dynamics and optimal design of the EI-BNES are studied in detail. A new GMRA is proposed to study the global dynamics and homoclinic bifurcations of the reduced two-dimensional subsystem, in order to explain the mechanism of TET and detect the chaos thresholds of the EI-BNES. A special type of saddle-center equilibrium point is also found in the non-smooth system of the EI-BNES and can be used to increase the energy dissipation rates effectively. Finally, the optimal design criterion of the EI-BNES for better dissipation performance is discussed and simulated for the first time, based on the GMRA and the Melnikov function of non-smooth systems. The effectiveness of the analytical GMRA is also verified by numerical simulations.
Blockchain in Healthcare: Insights on COVID-19

The SARS-CoV2 pandemic has impacted risk management globally. Blockchain has been increasingly applied to healthcare management, as a strategic tool to strengthen operative protocols and to create the proper basis for an efficient and effective evidence-based decisional process. We aim to validate blockchain in healthcare, and to suggest a trace-route for a COVID19-safe clinical practice. The use of blockchain in combination with artificial intelligence systems allows the creation of a generalizable predictive system that could contribute to the containment of pandemic risk on national territory. A SWOT analysis of the adoption of a blockchain-based prediction model in healthcare and SARS-CoV-2 infection has been carried out to underline the opportunities and limits of its adoption. Blockchain could play a strategic role in future digital healthcare: specifically, it may work to improve COVID19-safe clinical practice. The main concepts obtainable from different blockchain-based models, particularly those related to clinical workflow, are reported here and critically discussed.

Background

The World Health Organization (WHO) has recommended that countries worldwide draw up a "Pandemic Plan", due to the increased possibility of pandemic risk. A Pandemic Plan is typically developed according to the pandemic phases declared by the WHO and aims to achieve clear results in managing pandemics from the early stages [1]. In healthcare, we may recognize different approaches to preparing for an emergency; in fact, each emergency is characterized by different phases: mitigation, preparation, response, and recovery [2]. The "tabletop exercise" is a useful tool that simulates the emergence of a critical situation; it provides a scenario that benefits from both communication and cooperation between different sectors and areas, such as management, workers, logistics, communication, and finance. A proper approach can provide a general framework and a mental model reproducing the perfect environment for future decision making [2]. On 30 January 2020, the World Health Organization (WHO) announced a public health emergency due to the spreading of a new coronavirus called SARS-CoV2, associated with the COVID-19 disease, and on 11 March 2020 the epidemic became a pandemic [3]. The COVID-19 outbreak is demonstrating the vulnerability of people worldwide to novel and highly contagious biological agents. In this landscape, several countries have considered the reinforcement of "risk management" strategies as a priority. The main concern of timely risk management is related to data sharing among clinicians and mass media, as in most cases this may create panic in the general population. However, in some countries, healthcare authorities tend to block or delay the sharing of important data that would allow a preventive understanding of the risk to which people are exposed and a proper limitation of the diffusion of dangerous pathologies. In recent decades, the growth of diagnostic technologies and biomedical devices has aided the industrialized countries in strengthening and standardizing risk management in healthcare [4]. The SARS-CoV2 pandemic has involved all continents, testing risk management in all the main worldwide institutions [5]: in this context, Wendelboe et al.
have conceived and designed a specific tabletop exercise for universities and companies to suggest reliable objectives and detailed instructions for preventing and managing COVID-19 infection: (i) analysis of cases of COVID-19 with known travel-related exposure, (ii) analysis of cases of COVID-19 with no known (i.e., community) exposure, (iii) outbreak of COVID-19 in the local region, (iv) recovery from COVID-19 (phases 2 and 3) and "looking ahead" [6]. A standardized plan to guide and modulate the communication between institutions and medical staff is strategic for disseminating the information that the community needs, because people should be adequately prepared for the emergency and trained to improve their skills and preparation [7]. In Italy, the SARS-CoV2 pandemic started in February 2020, from "patient zero" living in Codogno, a small town in the north of Italy. The Lombardy region, involved from the start in the management of the epidemic, proved able to mount a rapid response to the outbreak in the north of Italy; in less than two weeks, a traditional infectious disease department was converted into a "COVID-19 department", doubling bed capacity and creating a sub-intensive ward and a highly-targeted care ward. This healthcare plan was further improved, thanks to the efforts of many clinicians, nurses, administrative staff, and hospital management, over a very short time. In fact, in just five days, the COVID-19 department was expanded, separating the first floor from the remaining floors to allow operators to move freely. In just ten days, the ground floors of the COVID-hospitals were converted into an emergency area, where patients with specific symptoms were evaluated and treated with a proper and safe protocol [8]. In February 2020, the Netherlands also became involved in the COVID-19 outbreak. The Dutch national epidemic management (DNEM) team met in March to discuss limitations and understand the spread across the entire country. The strategy was to prevent and manage a hypothetical community infection: sampling of different health workers from the main hospitals was used to allocate additional professionals in specific areas of the country (North Brabant and Limburg). The Netherlands carried out a rapid two-day study of nine hospitals to observe the health of professionals working in these areas of the country, alerting local authorities when they showed mild respiratory symptoms [9]. Hospitals were asked to provide the screening test to operators, and this process yielded a representative sample to investigate. Thanks to these data, the regional authorities decided to use restrictive measures to limit the spread of the infection to a large part of the population [10]. From these two experiences, it is clear how the risk management of pandemics has preserved European countries from a more severe diffusion of the infection. Adequate containment measures, proportionate to the evolution of the epidemiological situation and based on the pandemic plans drawn up according to WHO directives, have prevented unpredictable health risks and have coordinated the national responses to the medical emergency [11]. Healthcare management can use several strategic tools to be effective: data sharing and data mining, machine learning, artificial intelligence, and blockchain are among the most impactful strategies [12].
In recent years, blockchain technology has been increasingly applied to healthcare, to strengthen operative protocols and to create the proper basis for an efficient and effective evidence-based decisional process. Blockchain plays a strategic role in safely sharing data between groups of persons, independently of the reliability and the cross-checking of these groups. Blockchain usually works through collaborative tools and can be used in new workflows or in improved protocols, with particular attention to risk management. We aim to validate blockchain in healthcare and, in more detail, to suggest a trace-route for a COVID19-safe clinical practice.

Blockchain in Healthcare Management

Blockchain technology belongs to the wider category of Distributed Ledger technologies, whose functioning is based mainly on a register structured in blocks linked in a network; each transaction performed in a block of the network is validated through a process based on consensus distributed across all the nodes (that is, the devices/users connected to the network). The transactions represent the result of the operations that occur among the subjects within the network. Each block, through a cryptographic system, maintains a reference to the previous one, hence the concept of blockchain. A blockchain is not stored on a centralized server, as happens in traditional web applications, but is distributed on the devices (computers) of the network (called nodes), each containing a copy of the whole blockchain. Moreover, it is useful to highlight for our analysis two relevant aspects which characterize this kind of technology: (i) the decentralization of consensus and (ii) the decentralization of the ledgers. Due to the decentralization of consensus, the existence of trustworthiness among the subjects involved in any kind of transaction and of a central authority may no longer be necessary [5,7]. As for the second aspect, the replication and saving of copies of the blockchain across the nodes of the network guarantees greater security of the system and equity among the users, who can access the same information simultaneously, and therefore the traceability and immutability of the validated transactions contained in the blocks. Therefore, blockchain is a peer-to-peer network in which all the participants can trust the system without necessarily trusting each other. The reference literature highlights the application of this type of technology in different sectors, such as the financial, credit, insurance, commerce, and agri-food sectors, for the reorganization of specific processes [13][14][15]. Blockchain applied to the health sector can offer new and effective opportunities to improve several activities associated with the prevention and control of pathologies and, therefore, better clinical risk management in the context of a pandemic emergency such as the current one. The sudden appearance and the rapid, uncontrolled diffusion of Coronavirus around the world have shown not only the failure of existing healthcare surveillance systems in promptly managing the public health emergency, but also an evident lack of advanced predictive systems based on the sharing of clinical data on a large scale, able to prevent or at least lessen emergencies of such magnitude.
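As a minimal illustration of the block-linking mechanism just described, the following Python sketch (an illustrative toy, not a production ledger) chains records by storing in each block the hash of the previous one, so that altering any validated block invalidates every block after it.

import hashlib
import json
import time

def block_hash(block):
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def new_block(data, prev=None):
    """Create a block that references the hash of its predecessor."""
    return {
        "index": 0 if prev is None else prev["index"] + 1,
        "timestamp": time.time(),
        "data": data,  # e.g. an anonymized clinical record or model hash
        "prev_hash": block_hash(prev) if prev else "0" * 64,
    }

chain = [new_block("genesis")]
chain.append(new_block("lab result #1 (anonymized)", chain[-1]))
chain.append(new_block("lab result #2 (anonymized)", chain[-1]))

def verify(chain):
    """Every block must reference the current hash of its predecessor."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(verify(chain))           # True
chain[1]["data"] = "tampered"  # any modification breaks the chain
print(verify(chain))           # False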
Different studies suggest the application of blockchain in the health sector mainly for the sharing and better management of patients' data and electronic health records (EHR) and, albeit less frequently, for the supply chain management of medical devices and drugs, the management of drug prescriptions, the improvement of scientific research and the dissemination of scientific knowledge, and the development of precision medicine [16][17][18][19][20]. The development of new and smart approaches to medicine has opened new pathways for innovative procedures, which have been demonstrated to work well and safely [21,22]. The use of technology can allow the exchange of healthcare data: this is an important step towards effective interoperability among different Electronic Health Record (EHR) systems. The management of EHR with blockchain technology may reduce clinical bias, thus improving overall healthcare outcomes [23]. The issue of interoperability among different EHR systems may be overcome by using separate blockchain systems that work as a bridge to ensure cross-communication: in more detail, we may operate with two main blockchain users that communicate through a third, in the middle of the two cross-talking blockchains. Blockchain is an opportunity to ensure cryptographically secured data exchanges between two or more users: recently, this opportunity has created interest within the scientific community, which aims mainly to facilitate interaction between different secured networks; this will ensure a trustworthy decentralization of activities like asset and message exchange [21][22][23]. The acquisition, conservation, and sharing of clinical data would also foster the development of precision medicine and, therefore, the personalization of prevention, diagnosis, and treatment for the single patient (patient-focused care). Smart contracts based on blockchain technology can also be used to automatize auditing processes, improve the supply chain management of pharmaceutical products and verify their quality and compliance with current regulations [23]. Moreover, current IT infrastructures do not facilitate the constant sharing of the results of scientific research and clinical studies, and this does not foster the development and sharing of scientific research capital. Blockchain can be a valid instrument of knowledge management that promotes the diffusion of the best clinical practices and evidence-based medicine [24]. However, the decentralized and transparent nature of this technology raises, in some contexts of application, issues linked to the privacy protection of the patient and to network security (with many aspects still unsolved and subject to debate), with special focus on the sharing of sensitive data in public blockchains. The management of health records through the use of this technology is based on the possibility of sharing the data among the different parties involved in healthcare management, preserving at the same time patients' privacy and security and the immutability of the data and information contained in the blockchain workflow. In this specific case, healthcare providers and healthcare institutions aim at: (i) building a predictive model (machine learning) through the analysis of electronic health records or clinical data related to particular or rare pathological cases; (ii) using the data, conveniently reprocessed, to predict healthcare outcomes.
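One way to picture how clinical records can feed such a workflow without exposing patient identity is sketched below. The data model is purely illustrative (the field names and the coarsening rule are assumptions, not a prescribed standard): the clinical payload stays off-chain, and only a salted fingerprint plus non-identifying attributes would be written to the shared or bridging ledger.

import hashlib
import os

def anonymize_record(record, salt):
    """Strip identifiers; keep a salted fingerprint for later audit.

    The raw record stays in the hospital's off-chain storage; only the
    returned dictionary would be written to the shared ledger.
    """
    identity = f"{record['patient_id']}|{record['name']}".encode()
    fingerprint = hashlib.sha256(salt + identity).hexdigest()
    return {
        "record_hash": fingerprint,            # re-linkable only via the salt
        "diagnosis_code": record["diagnosis_code"],
        "age_band": record["age"] // 10 * 10,  # coarsened, not exact age
        "outcome": record["outcome"],
    }

salt = os.urandom(16)  # kept secret by the data owner
raw = {"patient_id": "P-001", "name": "Jane Doe",
       "diagnosis_code": "U07.1", "age": 57, "outcome": "recovered"}
print(anonymize_record(raw, salt))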
This kind of instrument contributes, therefore, to the wider process of clinical risk management, allowing healthcare organizations to prevent and contain the onset of adverse events [20,25]. It is believed that the combination of blockchain and machine learning systems will be able to generate data usable to create predictive models that are useful in risk management: blockchain is based on technologies that offer the strategic advantage of a distributed (peer-to-peer), immutable and safe ledger, and privacy protections for the patients. Recently, researchers have developed medical applications involving the use of the internet: such applications were based on artificial intelligence able to promote continuous machine learning to improve critical steps of the diagnosis and treatment of several diseases [24]. The data transmitted by the users/healthcare providers in the blockchain are not sensitive data of the patients, but anonymized data and anonymized information that remain usable on each server for healthcare providers. Furthermore, users may participate in research networks to extrapolate big data, aimed at creating, for example, predictive models of medical workflow or of pandemic onset and development. Such models may be processed by machine learning systems, updated through an iterative process of information exchange within the network. The model is updated and tested until it achieves the highest reliability; at this point, the machine-learning system stops, and the last updated model is identified as the consensus model [25,26] (Figure 1). The use of blockchain and its combination with artificial intelligence systems allows the creation of a generalizable predictive system that, included in the wider risk management process, could contribute decisively to the containment of pandemic risk on national territory. The results of a constantly updated predictive model, based on information on and clinical data of patients, can in particular influence not only clinical practice but, more generally, the programmatic policies of risk containment at regional and national levels.

SWOT Analysis of the Adoption of the Blockchain-Based Prediction Model in Healthcare

To better understand, examine and identify the main strengths and weaknesses of the represented model, together with the offered opportunities and threats, a SWOT analysis that underlines the opportunities and limits of adoption has been carried out (Figure 2).
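The iterative update loop of Figure 1 can be illustrated as a federated round: each node trains on its local anonymized data and shares only model parameters, whose fingerprints are appended to the ledger, and iteration stops when the aggregated model stabilizes (the consensus model). The sketch below is a deliberately simplified stand-in for that process, using simple averaging of linear-model weights; it is not the paper's algorithm.

import hashlib
import numpy as np

rng = np.random.default_rng(0)

def local_fit(X, y):
    """Least-squares weights on one node's private data."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Five healthcare providers, each holding local data for the same task
nodes = []
true_w = np.array([0.5, -1.2, 2.0])
for _ in range(5):
    X = rng.normal(size=(200, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    nodes.append((X, y))

ledger = []           # stores only fingerprints of the shared models
w = np.zeros(3)       # initial global model
for round_id in range(20):
    local = [local_fit(X, y) for X, y in nodes]
    new_w = np.mean(local, axis=0)          # aggregate the partial models
    ledger.append(hashlib.sha256(new_w.tobytes()).hexdigest())
    if np.linalg.norm(new_w - w) < 1e-6:    # consensus: model stabilized
        print(f"consensus model after round {round_id}: {new_w.round(3)}")
        break
    w = new_w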
The disintermediation, intended as the absence of a central authority that collects, processes and validates the data or the built and shared models, allows a reduction of time, errors and costs in the performance of the processes aimed at the construction and update of a predictive model supporting clinical practice and risk management. The blockchain is an integrated system, and the processes implied in it are automatized and standardized [27][28][29]. The transactions validated through the blockchain and the data they contain are immutable, in the sense that they cannot be modified or eliminated, and this guarantees their authenticity, increasing at the same time the safety of the environment in which the operations occur [27,[29][30][31]. Moreover, the cryptographic system, the immutability of the data distributed across the whole network and the absence of a centralized authority generate greater trust in the system, as the need to maintain trust among the parties involved in the process disappears [31,32]. The commitment of the parties in the chain to collaborate in the processing and update of the partial models is justified by the common interest in obtaining an increasingly accurate, functional and effective predictive model [29,30]. All participants can verify the operations that occur in the network, as they have a copy of the whole blockchain on their device, and this makes the process transparent [30].
The sharing of whole copies of the blockchain, in a model in which sensitive data on single patients are shared, would create many problems linked to compliance with privacy regulations, especially if organizations other than public healthcare companies participate in the network [33][34][35]. For example, each participant (healthcare provider, institution, etc.) would face the problem of identifying the subject responsible for any illegal activity committed in violation of privacy regulations. Considering, for example, the privacy rules disciplined by the General Data Protection Regulation (GDPR), the regulation on data protection and privacy in the European Union (EU), a very important aspect is how to identify the owners of, or those responsible for, the data processing while guaranteeing, at the same time, the protections provided by the same regulations for the parties involved in the network [27][28][29]. However, there are many other aspects linked to the theme of privacy to consider for the regular users of this kind of technology [29,32]. Thus, while decentralization and immutability, typical characteristics of the blockchain, allow on the one hand the actual transparency and safety of transactions, on the other hand they can create a point of conflict with the regulations in force [36][37][38]. To face these problems, the examined model does not include the entry of patients' direct and sensitive data, but specifically metadata (hashes, flags, errors of the models) and partial predictive models. Thus, the regulatory issues linked to privacy protection could be solved and become a strong point in the implementation of the examined model.

SWOT Analysis of the Adoption of the Blockchain-Based Prediction Model in SARS-CoV-2 Infection

From the earliest data on the clinical manifestations of SARS-CoV-2 infection provided by Chinese scholars, information was not homogeneous and was misleading in the very first stage of the contagion [38][39][40]. Initially, affected people were reported to have an average age of 49-56 years, with rare involvement of the pediatric population [39][40][41]. In a second stage, the spread of COVID-19 was controlled and monitored using swabs of respiratory fluids of nasopharyngeal origin, on which the presence of the virus was tested. This specific examination was administered by several hospitals and research centers in Asian countries, and it was often contested as not being efficient for proper and rapid detection of the virus. The SWOT applies to this specific condition, as the improvements to be applied are various and concern different parts of the medical process. In some cases, researchers have also used integrated SWOT-AHP (Analytic Hierarchy Process) analysis in other fields, to identify strengths, weaknesses, opportunities and threats (SWOT factors), and to weight the factors identified according to the AHP method [42,43]. On the other hand, SWOT analysis is commonly used to describe case studies, comparing them to the related literature and acting as a kind of decision-maker in order to go beyond a "best approach" [43]. The strengths of blockchain in such conditions are the disintermediation and automation of the information chain, the immutability of the information, and the reliability and transparency of the information obtained in all the interested countries with respect for people's privacy.
On the other hand, several opportunities may also develop within this outbreak: the first of these is the opportunity to reinforce teambuilding and international networking among regions of different countries. Given the low specificity of the swab test, the opportunity to increase the technological awareness of healthcare personnel may also develop new expertise with which to better approach the COVID-19 pandemic; for confirmation of the diagnosis of the new coronavirus infection, it has been necessary to carry out a laboratory test, Real-Time PCR (RT-PCR), on respiratory samples and serum. RT-PCR is the most reliable technique, even if there is a slight possibility of false-positive results. In agreement with the positive improvements indicated by the SWOT, the discovery and development of oligonucleotide primers and probes against the SARS-CoV-2 viral genome have allowed RT-PCR to be successful, although the coronavirus may undergo frequent mutations of its genome. The SARS-CoV-2 genome sequences were discovered and deposited in public databases to develop safe and universal molecular diagnostics in a short time. Scholars from the Berlin Institute of Virology have developed assays to distinguish SARS-CoV-2 from SARS-CoV infection, based on the nucleotide sequence of the RNA-dependent RNA polymerase gene (RdRp). Cross-reactivity was validated through the use of several known respiratory pathogens from infected clinical samples. This is now the most used and validated protocol to declare positivity, and it has been validated and confirmed by the World Health Organization [44,45]. This was possible thanks to the genetic relationship with the 2003 SARS-CoV and to modern synthetic nucleic acid technologies. After these early developed and tested protocols, several different protocols based on reverse transcription polymerase chain reaction (RT-PCR) have been used to confirm COVID-19 infection. Gene sequencing is strategic for validating any type of PCR test; Cepheid and Sherlock Biosciences have recently developed an alternative test based on Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) technology, used not only in genetic editing but also for its diagnostic potential, and already used for the diagnosis of Zika virus [46]. Furthermore, research into treatments that can combine early healing with lower biological and economic costs has pushed researchers towards experimentation with smart materials and nanotechnologies, even if the main issues regarding the safe application of such technologies to human patients remain [47][48][49]. To provide precise information on the epidemic trend in real time, an informatic system should be easy to use and quick to achieve results. Inevitably, an information flow without a standardized workflow, such as the blockchain-based protocols, may end up altered or misunderstood between two different users; blockchain-based protocols, by contrast, are able to ensure privacy and limited sharing of patients' information. Blockchain is structurally designed to work in environments where trust is placed in the data rather than in the physical subjects; in fact, data registered in a blockchain workflow cannot be changed or altered.
In this landscape, it is necessary to observe that in a permissioned (consortium) blockchain, the management of, and the authority to define, access, control and authorization, and especially the possibility of adding transactions to the distributed ledger, is attributed only to one specific group of operators (who act as validators) [50]. Thus, only a selected group of nodes can participate in the process of distributed consensus. The use of this type of governance would contribute not only to the resolution of the problems linked to privacy, but would be particularly suited to the objectives of the model. As is well known, each node (user) of the chain has on its device a copy of the whole blockchain; if on the one hand this creates undeniable advantages in terms of the certainty and safety of data and transactions, on the other hand it generates, especially in the case of big networks, greater costs for the management and storage of data [32]. Moreover, the creation of new blocks could generate transmission latencies in the network due to the bandwidth at the time of block validation (fork). Thus, new blocks could reach the nodes at different moments, generating temporary inconsistencies in the blockchain, which constitute a limitation [51]. The immutability of the blockchain represents one of the main advantages of the technology; however, this characteristic can become a limitation when the modification of a transaction is necessary [52]. The use of a predictive model based on innovative technologies such as blockchain and machine learning, and the awareness of the benefits that derive from their use, inevitably generate the development of new applications and competences [32]. However, blockchain technology is still evolving and thus must face important social challenges, such as cultural change. Accepting and adopting an innovative technology that implies a method of work completely different from the traditional one could generate, within different organizational contexts, strong resistance to change. Moreover, because of the low rate of adoption of similar models in the healthcare sector, expertise able to draw up and make operational models of this type in the short term has not yet developed in Italy. If on the one hand this could obstruct the implementation of such a system in the short term, on the other hand it could become food for thought for starting a process of adoption of this kind of tool, precisely because of the current emergency. Finally, it is worth highlighting that the implementation of a model that connects the largest possible number of healthcare providers and institutions (at national level and/or at the level of single regional territories) can undoubtedly be beneficial to the whole community, as it would increase the degree of integration of risk management policies among the operators of the whole healthcare system, notwithstanding the current regional autonomies in healthcare management.
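To illustrate the permissioned (consortium) governance described above, the toy sketch below extends the earlier hash-linked chain with a whitelist of validator identities: a block is accepted only if submitted by a known validator. Real key management and signature verification are omitted; the membership check stands in for the distributed consensus restricted to the consortium, and the node names are invented for illustration.

import hashlib
import json
import time

VALIDATORS = {"hospital-A", "region-lab-B", "ministry-node-C"}  # consortium

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, data, validator):
    """Only whitelisted consortium members may append to the ledger."""
    if validator not in VALIDATORS:
        raise PermissionError(f"{validator} is not an authorized validator")
    prev = chain[-1] if chain else None
    chain.append({
        "index": len(chain),
        "timestamp": time.time(),
        "data": data,
        "validator": validator,
        "prev_hash": block_hash(prev) if prev else "0" * 64,
    })

ledger = []
append_block(ledger, "model update #1 (hash only)", "hospital-A")
append_block(ledger, "model update #2 (hash only)", "region-lab-B")
try:
    append_block(ledger, "spoofed entry", "unknown-node")
except PermissionError as e:
    print(e)
print(len(ledger), "blocks accepted")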
All the data originating from healthcare providers (e.g., clinical laboratories, hospitals, primary care physicians and pediatricians) and other sources can be collected and shared, while respecting privacy and security, through blockchain, and later analyzed using solutions based on artificial intelligence. Such a system is a valid tool for risk management, useful in the phases of diagnosis and treatment of the patient affected by COVID-19, but also necessary for research into therapies better suited to the typology of the patient and the associated risk and health conditions (comorbidity, associated risk factors, etc.), as well as to foster the development of new drugs and increasingly adequate diagnostic and therapeutic protocols [53] (Figure 3).

Hence, the patient becomes the protagonist of an alternative path to the current one, in which the protection, the analysis and the reprocessing of data are not standardized and not predictable in large populations. In the path supported by the management model inspired by and based on blockchain, triage is entirely computerized and managed by self-implementing machine-learning systems, with a verification system through feedback of the diagnosis at admission; this allows a reduction of time-consuming procedures and a rationalization of the storage of sensitive information, which is managed only by those few authorized to perform data analysis. Data mining and data storage are finally transmitted through a certified data flow, free from methodological and interpretative errors (operator's bias), treated according to certification and standard modulations, and finally processed in a worldwide network which makes the single datum vital for the creation of databases useful for the future management of medical data via artificial intelligence that can support traditional medicine (Figure 4).
A comprehensive overview on the foundations of formal concept analysis

The accumulation of voluminous collections of data is inevitable almost everywhere. The invention of mathematical models to analyse the patterns and trends of the data is an emerging necessity for extracting and predicting useful information in any Knowledge Discovery from Data (KDD) process. Formal Concept Analysis (FCA) is an efficient mathematical model used in the process of KDD, specially designed to portray the structure of the data in a context and to depict the underlying patterns and hierarchies in it. Owing to the large increase in the application of FCA in various fields, the number of research and review articles on FCA has grown to a large extent. This review differs from existing ones in presenting a comprehensive survey of the fundamentals of FCA in a compact and crisp manner to benefit beginners, and in its focus on the scalability issues in FCA. Further, we present the generic anatomy of FCA apart from its origin and growth at a primary level.

Introduction

The development of information technologies and networks produces huge collections of data every year from different trades. The data flow from various fields such as information technology, agriculture, medicine, finance, markets, social science, demography, etc. These data carry no direct information; the information is concealed in the data. Extracting useful information from huge data is known as knowledge discovery and is an important task in any knowledge-based system. According to Han and Kamber (2006), knowledge discovery is to discover the rules and patterns that exist in the data, by which one can foretell the future trends of the system. So, the invention of methods and means to automatically analyse the patterns and trends of the data is an emerging necessity for extracting and predicting information useful to society (Malzahn, Ziebarth, & Hoppe, 2013; Mattingly, Rice, & Berge, 2012; Zushi, Miyazaki, & Norizuki, 2012). This is an important issue and apparently has high priority.

To this end, several researchers have proposed various models and techniques (Huang, Yang, Chen, & Wu, 2012). Among such models, mathematical models have contributed enormously to understanding KDD precisely. Some such models are set theory, rough set theory, fuzzy set theory, probabilistic set theory, intuitionistic set theory, soft set theory, etc. Alongside these mathematical models falls the lattice-theoretic notion of Formal Concept Analysis (FCA) (Wille, 1982). FCA concentrates mainly on the clustering of certain objects and attributes, the clusters being termed concepts, by which the functionality of cluster analysis from the knowledge discovery point of view is carried out. Under the poset relation, the concepts can be presented in the form of a lattice, through which functionalities such as the presentation and prediction of information can be carried out. The functionality of determining associations can be achieved by finding the implications of the given context using FCA. Thus, FCA-based techniques in the practice of the knowledge discovery process yield fruitful results to users.
The extraction of knowledge using FCA from any database has three dimensions, viz., conceptual clusters, lattices (graphical representation) and association rules. Concepts express the underlying relationships between objects and attributes in the context; a concept lattice portrays the context graphically; and the association rules discover the underlying associations within the attributes of the context.

Although FCA is an important formalism for knowledge representation, extraction and analysis, one of its major issues is the scalability issue arising from the size of the contexts, which yield larger concept lattices. As the size of the concept lattice increases, the visualisation of concepts along with their hierarchy becomes complex and impractical. This complexity issue arises due to the scalability of FCA and its extensions in various environments. According to Poelmans, Ignatov, Kuznetsov, and Dedene (2013), the scalability issue is the focus of 9% of the articles on FCA. We also point out the scalability issue in FCA and review it.

In view of the growing and applicative nature of FCA in various fields, we present its fundamental notions with examples in this article as follows. In section 2, the origin and growth of FCA are discussed. The terms and notions related to FCA are presented and illustrated in section 3. Section 4 deals with the scalability issues and the current trends concerning them. Finally, we conclude the article in section 5.

Origin and growth of FCA

The lattice-theory-based framework of Formal Concept Analysis (FCA) has emerged as a distinctive tool in the field of knowledge discovery. FCA has seen immense growth since its inception into the field of data analysis and knowledge representation a few decades ago. FCA is a mathematical theory for determining the concepts and their hierarchies that underlie any information system (Wille, 1982). The mathematical foundations of FCA were first laid by Birkhoff (1948), who bridged partial orders and lattices. He also proved that any binary relation between a set of objects and a set of attributes can be depicted by means of a unique lattice, which provides insight into the structure of the original relation. FCA emerged as a result of the attempts of a group of researchers to develop the applications of lattice theory at Darmstadt University of Technology in Germany. The research group was led by Professor Rudolf Wille, who became the founder of FCA by publishing his first article on FCA in 1982 (Wille, 1982), in which he discussed the approach of restructuring lattices using hierarchies of concepts.

The mathematical theory of FCA has been extended to various frontiers and combined with other knowledge representation schemes. Recently, Singh, Kumar, and Gani (2016) provided the necessary mathematical background for a few extensions of FCA in various environments, such as FCA with granular computing (rough set theory), fuzzy set theory, interval-valued set theory, possibility theory, triadic concepts, factor concepts and the handling of incomplete data. Yao (2016) interpreted the notion of rough-set (RS) definable concepts and thereafter derived the Boolean algebra of RS-definable concepts. An RS-definable concept is a pair of extension and intension, the extension being a set of objects and the intension a family of sets of attribute-value pairs.
Terms and notations in FCA

FCA is an art of describing the world in terms of objects and the attributes possessed by those objects. In FCA the adjective 'formal' is often used to emphasise the mathematisation of the notions as used by the human mind. The terms and notions used in this article follow the textbook of Ganter and Wille (1999) and are also consistent with the notions treated by Davey and Priestley (2002).

FCA is the theory of the formalisation of the idea of a concept. The notion of concept was already suggested in ancient times by eminent philosophers such as Plato, Francis Bacon and John Stuart Mill in order to characterise formal logic systems. The notion of a concept arising from a context was first studied in (Arnauld & Nicole, 1981), and the term has been recognised in the German standard (DIN 2330, 1993). The philosophical notion of a 'concept' can be described by its extension, that is, the set of all objects belonging to the concept, and its intension, the set of all attributes possessed by those objects in common.

For example, consider the object-attribute relation 'All living beings need water to live.' This relation obviously forms a concept, since it has an extent and a corresponding intent. The extent is the set of all living beings, including mankind, animals, birds, etc., and the intent is the attribute 'water'. The relationship covering a set of objects and a set of attributes is often represented by means of a formal context, which is formally defined as below.

Formal context

A formal context is a triplet K := (G, M, I), where G denotes a set of formal objects, M denotes a set of attributes, and I ⊆ G × M is the incidence relation between the objects G and the attributes M. The symbols G and M stand for the German words Gegenstände (objects) and Merkmale (attributes) respectively. For any two elements g ∈ G and m ∈ M, the binary relation (g, m) ∈ I is to be read as 'object g has the attribute m' and is usually written gIm.

A formal context is often represented using a cross-table in which the rows correspond to the set of objects G while the columns correspond to the attributes M. The presence/absence of the incidence relation between G and M is denoted by the presence/absence of crosses. Such contexts with yes-or-no attribute values are known as binary contexts or one-valued contexts (referring to the possible number of attribute values in the case of presence). In order to explain the further notions of FCA, it is convenient to consider a small formal context. To this end, we look at the context of Wolff (1994) on animals and their characteristics shown in Table 1. In this example, the object set G consists of the animals Lion, Finch, Eagle, Hare and Ostrich, while the attribute set M includes the characteristics preying, flying, bird and mammal. The symbol × at the intersection of an object row and an attribute column indicates that the object possesses that attribute. For example, the animal Lion has the attributes preying and mammal in the given context K. The terms context, formal context and cross-table are synonymous and are henceforth used interchangeably throughout the article. Furthermore, since we only deal with the mathematisation of the notions of context and concept throughout the article, we do not emphasise the prefix 'formal'. Before formally defining a formal concept in a context, we first need to know the concept-forming operators, the ↑ (up) and ↓ (down) operators, on any given context K.
For any A ⊆ G and B ⊆ M the concept-forming operators are defined by A↑ := {m ∈ M : gIm for all g ∈ A} and B↓ := {g ∈ G : gIm for all m ∈ B}. For example, in the cross-table shown in Table 1, {Lion}↑ = {preying, mammal} and {bird}↓ = {Finch, Eagle, Ostrich}. Clearly, every context induces the concept-forming operators. We notice that the operator ↑ assigns subsets of M to subsets of G, and dually the operator ↓ assigns subsets of G to subsets of M. For brevity, the concept-forming (up-down) operators A↑ and B↓ are also denoted A′ and B′.

Formal concepts

We next define the notion of a formal concept (Ganter & Wille, 1999). Formal concepts are the clusters of the given context formed as a result of attribute sharing. More formally, for any given context K := (G, M, I), a formal concept is a pair (A, B), where A ⊆ G and B ⊆ M, such that A↑ = B and B↓ = A. Plainly, if in a given context A is a maximal set of objects sharing a maximal set of attributes B, then the ordered pair (A, B) is called a formal concept. The sets A and B are respectively known as the extent and the intent of the concept (A, B). The following proposition on extents and intents follows directly.

Proposition 1 (Ganter & Wille, 1999): Let K := (G, M, I) be a context. For any subsets A ⊆ G and B ⊆ M the following are valid: A ⊆ A↑↓, B ⊆ B↓↑, A↑ = A↑↓↑ and B↓ = B↓↑↓. Consequently, (A↑↓, A↑) and (B↓, B↓↑) are valid concepts of the context K. Further, A is an extent if and only if A = A↑↓, and dually B is an intent if and only if B = B↓↑. Thus, combining the definition and properties of a concept, we can say that for any concept (A, B), (A, B) = (A↑↓, A↑) = (B↓, B↓↑).

Properties of concepts

It is noteworthy that a set S is said to be a maximal/minimal set with property P if there exists no proper superset/subset of S with property P. Furthermore, S is said to be a maximum/minimum set if its cardinality is maximum/minimum among such sets. A rectangle in a context K := (G, M, I) is a pair (A, B) such that A × B ⊆ I, i.e., (x, y) ∈ I for every x ∈ A and y ∈ B. For any two rectangles (A1, B1) and (A2, B2) we write (A1, B1) ⊆ (A2, B2) if and only if A1 ⊆ A2 and B1 ⊆ B2. Any formal concept (A, B) can be viewed as a maximal rectangle in the context. The formal concepts remain invariant under row or column permutations of the cross-table.

Computation of concepts

The formal concepts can be easily computed for any given context K := (G, M, I). Though there may be several techniques to compute the formal concepts, the easiest way is to start with an object g ∈ G and determine its attribute set B = {g}↑ ⊆ M, the intent of the concept. Next, determine the set A = B↓ ⊆ G of all objects which possess all the attributes in B; this forms the extent of the concept. The ordered pair (A, B) is then the required concept. The dual approach of starting with an attribute m ∈ M can also be adopted in the determination of concepts. More generally, the corresponding concept can be determined for any subset of objects or attributes of a context. A concept ({g}↑↓, {g}↑) obtained by starting with an object g ∈ G is called an object concept, denoted γ(g). Dually, a concept ({m}↓, {m}↓↑) obtained by starting with an attribute m ∈ M is called an attribute concept, denoted μ(m). Clearly, not all concepts of a context are object or attribute concepts; a concept may be an object concept, an attribute concept, both, or neither.
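These operators and the concept-determination procedure are easy to state in code. The following minimal sketch computes A↑ and B↓ and enumerates all concepts by closing every subset of objects; the incidence rows are reconstructed from the worked examples in the text, so the exact table is an assumption, and the brute-force closure is fine for a toy context but exponential in general.

```python
from itertools import combinations

# incidence relation of the animals context (Table 1), reconstructed from
# the examples in the text: object -> set of attributes it possesses
I = {
    "Lion":    {"preying", "mammal"},
    "Finch":   {"flying", "bird"},
    "Eagle":   {"preying", "flying", "bird"},
    "Hare":    {"mammal"},
    "Ostrich": {"bird"},
}
G = set(I)                       # objects
M = set().union(*I.values())     # attributes

def up(A):    # A↑: attributes shared by every object in A
    return set.intersection(*(I[g] for g in A)) if A else set(M)

def down(B):  # B↓: objects possessing every attribute in B
    return {g for g in G if B <= I[g]}

# (A, B) is a concept iff A↑ = B and B↓ = A; closing every subset of G
# yields every concept, since each extent is the closure of itself.
concepts = set()
for r in range(len(G) + 1):
    for combo in combinations(sorted(G), r):
        extent = down(up(set(combo)))
        concepts.add((frozenset(extent), frozenset(up(extent))))

print(len(concepts))  # 8, matching the count given below
```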
We shall illustrate the above concept determination process through the context given in the cross-table (Wolff, 1994). Let us start with the object Lion; its intent set is B = {preying, mammal}. The extent set corresponding to B is A = {Lion} only. So the pair ({Lion}, {preying, mammal}) is a concept of the given context. If we start with the objects Finch and Ostrich we obtain the intent set B = {bird}, whose extent set is A = {Finch, Eagle, Ostrich}. So the ordered pair ({Finch, Eagle, Ostrich}, {bird}) is another concept of the given context. Further exploration of the given context yields the following 8 concepts:

1. (∅, {preying, flying, bird, mammal})
2. ({Lion}, {preying, mammal})
3. ({Eagle}, {preying, flying, bird})
4. ({Lion, Hare}, {mammal})
5. ({Lion, Eagle}, {preying})
6. ({Finch, Eagle}, {flying, bird})
7. ({Finch, Eagle, Ostrich}, {bird})
8. ({Lion, Finch, Eagle, Hare, Ostrich}, ∅)

Concepts 2, 3, 4, 6, 7 are object concepts and concepts 4, 5, 6, 7 are attribute concepts. One can easily note that only certain, and not all, subsets of objects form extents of some concept, and the case of intents is similar, even though the up-down operations exist for all such subsets. For any given subset of objects/attributes the resulting concept is always unique. Moreover, if the extent A of a concept (A, B) is known, then its intent B can be uniquely determined, and vice versa.

There are several algorithms to generate the formal concepts of a context. Some of the well-known algorithms serving this purpose are Ganter's NextClosure, Bordat's algorithm, Next Neighbours, etc. (a sketch of NextClosure is given below, after the lattice-theoretic preliminaries). Kumar and Singh (2014) have studied the performance of various concept generation algorithms.

Hierarchy of concepts

In order to discuss the properties of the set of all concepts we need some fundamental notions associated with lattices from set theory, a branch of mathematics. We refer the reader to Davey and Priestley's Lattices and Order (Davey & Priestley, 2002) for an introduction to lattices and to George Grätzer's General Lattice Theory (Gratzer, 2003) for encyclopaedic knowledge on lattices. In order to make the article self-contained we recall some of the basics of lattice theory.

Let P be any set in which two elements x, y may be related by some relation R, written xRy. Then P is said to be a partially ordered set, or simply a poset, if the following properties hold:

i. Reflexivity: xRx for every x ∈ P.
ii. Anti-symmetry: if x, y ∈ P are such that xRy and yRx, then x = y.
iii. Transitivity: if x, y, z ∈ P are such that xRy and yRz, then xRz.

The relation R by which the set P is partially ordered resembles the usual relation ≤ (less than or equal to) in view of the three properties stated above. Hence, conventionally, the symbol R is replaced by the symbol ≤. A set P with a partial order ≤ is denoted (P, ≤).

Let (P, ≤) be a partially ordered set and let S be a subset of it (S ⊆ P). An upper bound of S is an element x ∈ P such that s ≤ x for all s ∈ S. Dually, a lower bound of S is an element y ∈ P such that y ≤ s for all s ∈ S. The smallest element among the upper bounds of S is called the supremum or least upper bound of S and is denoted ⋁S. Dually, the greatest element among the lower bounds of S is called the infimum or greatest lower bound of S and is denoted ⋀S. For S = {x, y} we write simply x ∨ y instead of ⋁S and x ∧ y instead of ⋀S. The terms supremum and infimum are also referred to as join and meet respectively. Consider a partially ordered set (P, ≤).

➢ If x ∨ y and x ∧ y exist for any two elements x, y ∈ P, then (P, ≤) is called a lattice.
➢ If ⋁S and ⋀S exist for any subset S ⊆ P, then (P, ≤) is called a complete lattice.
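As promised above, here is a sketch of Ganter's NextClosure algorithm, which enumerates all intents in lectic order; it reuses `up` and `down` from the previous snippet. This is a minimal rendition of the textbook algorithm, not an optimized implementation.

```python
def closure(B):          # B'' on the attribute side
    return up(down(B))

def next_closure(B, M_list):
    # M_list fixes a linear order m_1 < ... < m_n on the attributes
    B = set(B)
    for i in range(len(M_list) - 1, -1, -1):
        m = M_list[i]
        if m in B:
            B.discard(m)                      # strip trailing attributes
        else:
            C = closure(B | {m})
            # C is the lectically next intent iff it adds nothing before m
            if all(M_list.index(x) >= i for x in C - B):
                return C
    return None                               # past the last intent

M_list = sorted(M)
intents, B = [], closure(set())
while B is not None:
    intents.append(B)
    B = next_closure(B, M_list)
print(len(intents))  # 8 intents, one per concept of Table 1
```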
Turning back to the discussion of the concepts of a context, any two concepts (A1, B1) and (A2, B2) of a context can be ordered/related by means of the subconcept-superconcept ordering relation ≤, defined as follows: (A1, B1) ≤ (A2, B2) if and only if A1 ⊆ A2, which equivalently means that B2 ⊆ B1.

The ordering relation ≤ between concepts can be verified to be a partial order; in other words, the relation ≤ satisfies the three properties of reflexivity, anti-symmetry and transitivity. The partial order ≤ between the elements of a poset is also known as the hierarchical or lexicographical order. The definitions of the object and attribute concepts lead to the following straightforward result.

Proposition 2 (Davey & Priestley, 2002; Lambrechts, 2012): Let B(K) be the set of all concepts of a context K := (G, M, I). Then (B(K), ≤) is a partially ordered set. Moreover, for any subset of concepts in B(K) there always exist a supremum as well as an infimum, and hence the poset (B(K), ≤) forms a complete lattice. The complete lattice (B(K), ≤) is known as a concept lattice, for obvious reasons.

Concept lattices

The symbol B for concept lattices is attributed to the mathematician Birkhoff, who initiated the theory of formal concepts by proving the existence of lattices for the binary relations of any context in his Lattice Theory (Birkhoff, 1948). A detailed study of concept lattices and their theoretical aspects can be found in (Sarmah, Hazarika, & Sinha, 2015). One of the main reasons for considering FCA a powerful method in the analysis of data is the added feature of graphical visualisation of the context, which exposes the underlying implicit relationships in the given context.

Graphical representation of concept lattices

Any lattice can be graphically viewed using Hasse (line) diagrams (Davey & Priestley, 2002), and so also the concept lattices. The Hasse diagram of a lattice can be drawn as follows. Represent the elements of a lattice (P, ≤) by means of nodes/circles. Let x, y ∈ P be any two elements. Join the nodes corresponding to x and y if and only if x ≤ y and there exists no other element z ∈ P such that x ≤ z ≤ y. Simply put, if x, y ∈ P are immediate predecessor and successor (subconcept and superconcept) respectively, then join their nodes by a line. Another convention adopted in the drawing of Hasse diagrams is that if x, y ∈ P are such that x ≤ y, then the node corresponding to x is placed below that of y. It is interesting to note that the Hasse diagram of a lattice need not be unique, in the sense that there can be different drawings of the same lattice, since nodes can be placed as desired. However, any two Hasse diagrams of a lattice are always isomorphic graphs; isomorphic graphs are different drawings of the same graph. Having understood the Hasse diagrams of lattices, let us now illustrate the graphical representation of concept lattices; first, note that the meets and joins guaranteed by Proposition 2 can be computed directly, as the following sketch shows.
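The suprema and infima promised by Proposition 2 have an explicit form: the meet of two concepts intersects the extents and the join intersects the intents, each followed by one closure. A minimal sketch, reusing `up` and `down` from the earlier snippets:

```python
def meet(c1, c2):
    # infimum: (A1 ∩ A2, (A1 ∩ A2)↑); extents are closed under intersection
    A = c1[0] & c2[0]
    return (frozenset(A), frozenset(up(A)))

def join(c1, c2):
    # supremum: ((B1 ∩ B2)↓, B1 ∩ B2); intents are closed under intersection
    B = c1[1] & c2[1]
    return (frozenset(down(B)), frozenset(B))

# e.g. the meet of ({Finch, Eagle}, {flying, bird}) and
# ({Lion, Eagle}, {preying}) is ({Eagle}, {preying, flying, bird})
```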
Hasse diagrams enable us to view every concept lattice graphically, more easily than any other representation scheme. The only part that remains is the labelling of the concepts in a concept lattice. Obviously, every concept in a concept lattice is attributed to a node/circle in the Hasse diagram. Labelling each concept fully over the nodes would be overkill. As an alternative, the 'reduced labelling' scheme is available, by which the concepts of a concept lattice are labelled as follows:

• A node corresponding to an object concept γ(g) is labelled by the object g ∈ G.
• A node corresponding to an attribute concept μ(m) is labelled by the attribute m ∈ M.
• Object labels are written below the nodes, while attribute labels are written above the nodes.

The remaining concepts can be retrieved, using Proposition 2 stated earlier, by working out their extents and intents. In a concept lattice, one determines the extent of a concept node by collecting all the object labels of the nodes that can be reached from that node by a descending/downward path, including the object label of the starting node if it has one. Similarly, starting from a concept node, the collection of the attribute labels of the nodes which can be reached by an ascending/upward path, including the attribute label of the starting node if it is an attribute concept node, yields the intent of the starting concept node.

Since any lattice diagram is always a Hasse diagram, we need not emphasise the term 'Hasse' and henceforth omit it from the discussion. As we move through the nodes of a concept lattice from bottom/top to top/bottom, we find the object sets increasing/decreasing and the attribute sets decreasing/increasing respectively. Thus, the predecessor concepts inherit the objects of their successors, while the successors inherit the attributes of their predecessors. Briefly, as we traverse from bottom to top we reach more general concepts, and the reverse traversal reaches more specific concepts. The topmost concept, which contains all the objects, is called the unit concept, and dually the bottommost concept, which contains all the attributes, is called the empty concept. The set of concepts lying on a downward path is known as a down-set or order ideal, and dually that on an upward path is known as an up-set or order filter. The concept lattice reflects the relationship of generalization and specialization among concepts; it is thereby intuitive and effective for knowledge representation and knowledge discovery.

Using the principles of the partial order of concepts and the Hasse diagram, we are now able to draw the concept lattice of the context given in Table 1, as shown in Fig. 1 (Fig. 1: Concept lattice for the formal context of Table 1). The given context of Table 1, as explained earlier, has eight concepts, each of them represented by a node in the concept lattice. Any two immediate predecessors/successors are joined directly, which on the whole yields the desired concept lattice. The concepts corresponding to the nodes can be identified as interpreted earlier; for example, the node with object label Finch corresponds to the 6th concept, ({Finch, Eagle}, {flying, bird}). [Recall that the extent is the collection of object labels reachable along downward paths, while the intent is that of attribute labels along upward paths.]
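The edges of the line diagram are exactly the covering pairs of the order. A minimal sketch, assuming the `concepts` set from the enumeration snippet above, computes them directly:

```python
def leq(c1, c2):
    return c1[0] <= c2[0]   # extent inclusion; equivalently c2[1] <= c1[1]

covers = [(lo, hi) for lo in concepts for hi in concepts
          if lo != hi and leq(lo, hi)
          and not any(z != lo and z != hi and leq(lo, z) and leq(z, hi)
                      for z in concepts)]

# each (lo, hi) pair is one line of the diagram: the node of `lo` is drawn
# below the node of `hi` and the two are joined
print(len(covers))
```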
Many-valued contexts and their scaling processes

Having understood the fundamental notions of FCA, we now explain the FCA structures for various kinds of information contexts. In general, FCA is not directly compatible with all types of information contexts. In such circumstances, the information context is modified using appropriate principles so that it becomes compatible with FCA processing.

Usually, attributes are considered to be one-valued, viz., 'yes'. But in several contexts, attributes take many values. For example, attributes such as weight, colour, grade, etc., may be characterised as low, medium, high, etc. Such contexts are known as many-valued contexts. In such cases the usual context representation scheme is not suitable for analysis using FCA, and the context is modified into a one-valued context using methods of 'conceptual scaling' (Ganter & Wille, 1989; Davey & Priestley, 2002). The modified one-valued context is known as the derived context.

In the process of scaling, a many-valued context is first transformed into a one-valued, or binary, context using conceptual scaling techniques. However, this transformation is carried out by the users; hence the conceptual scaling of a many-valued (MV) context is not uniquely determined.

The literature lists several research articles centred on MV contexts. Messai, Devignes, Napoli, and Smaïl-Tabbone (2008) were among the first to study MV contexts. They observed that MV contexts yield multi-level concept lattices of higher precision; in the retrieval of valid information from complex queries, the use of MV context methods brings fruitful results. Before we proceed, let us formally define an MV context. A many-valued (MV) context (G, M, W, I) comprises a set of objects G, a set of attributes M and a set of attribute values W, together with a ternary relation I between G, M and W. Stated otherwise, I ⊆ G × M × W is such that (g, m, w) ∈ I and (g, m, v) ∈ I imply w = v. The notation (g, m, w) ∈ I means that 'for the object g, the attribute m possesses the value w'. If W contains n elements, then the quadruple (G, M, W, I) is called an n-valued context. Every MV attribute can be seen as a partial map m: G → W with m(g) = w. For any attribute m, its domain is dom(m) = {g ∈ G : (g, m, w) ∈ I for some w ∈ W}; if dom(m) = G, then the attribute m is said to be complete.

Concept lattices cannot be determined directly for many-valued contexts. In this case one has to convert the MV context into a binary-valued context, a step termed conceptual scaling according to (Ganter & Wille, 1989). The modified context is known as the derived context. Normally a conceptual scale is employed on a single attribute m, and in this case the scale forms a basis for the derived formal context. The standard scaling method, namely plain scaling, starts from a scaled MV context, an ordered pair consisting of a many-valued context (G, M, W, I) together with a scale for each of its attributes, and creates the derived one-valued context from it. We now require an example of an MV context to interpret the forthcoming notions clearly. Let us consider the simple context of platonian bodies given by (Hitzler & Scharfe, 2016) shown in Table 2 (Table 2: Many-valued context of platonian bodies).

Conceptual scaling

We now discuss the process of conceptual scaling. Every attribute of an MV context is first interpreted using a context; this context is known as a conceptual scale. Theoretically, a scale for an attribute can be defined as follows.
The 'scale' of an attribute m of a many-valued context is a one-valued context S_m := (G_m, M_m, I_m) with m(G) ⊆ G_m. The objects and attributes of a scale are respectively known as scale values and scale attributes. A scale for an attribute is a context which serves in the transformation of a many-valued context into a binary context. For example, in the given example of platonian bodies we can classify the attribute facets into simple, medium and complex using the scale shown in Table 3.

Conceptual scales interpret the columns of an MV context. Conventionally, contexts which are binary and clear in structure are called scales, even though every context can be regarded as a scale. The simplest of all conceptual scales are the nominal scales, in which every attribute is subdivided by each of its values. Using the nominal scale on the context given in Table 2, the attributes corners, edges and facets are respectively subdivided into 5, 3 and 5 columns of the derived context. The context derived by nominal scaling is shown in Table 4 (Table 4: Formal context derived from Table 2), and its concept lattice in Fig. 2 (Fig. 2: Concept lattice for the formal context of Table 4).

The other class of conceptual scales is the ordinal scales, and their variety is wide. To mention a few, we glance at some basic ordinal scales, viz., the one-dimensional ordinal scale, the interordinal scale, the biordinal scale and the dichotomic scale.

In a one-dimensional ordinal scale, the attribute values of every attribute are ordered such that some attribute values subsume others, the former being greater or smaller than the latter. As a result, the extents form a chain in the hierarchy. For example, the attribute values may be arranged in the order {good, better, best}. Table 5 (Table 5: Formal context derived from Table 2) is an example of one-dimensional ordinal scaling of the example context of Table 2, and its concept lattice is shown in Fig. 3 (Fig. 3: Concept lattice for the formal context of Table 5).

'Interordinal scales' are used in the representation of contexts with mixed attribute values. For example, contexts such as the answers to a questionnaire contain bipolar attributes which are mixed, and they can be efficiently scaled using interordinal scales; for instance, attribute values of the form {≤ 1, ≤ 2, ≤ 3, ≥ 1, ≥ 2, ≥ 3} yield extents which fall on intervals of attribute values. An example of the application of biordinal scales is a marking scheme with values {poor, middle class, rich, very rich}, in which the attribute 'rich' can belong to both the attributes 'middle class' and 'very rich'.

The 'dichotomic scale' context of binary attributes contains values of the kind {yes, no}, as shown in Table 6.

Having understood various real-life contexts and their scales, one is now able to construct concept lattices for any given context. Apart from the benefit of understanding contexts through concepts and their graphical line diagrams, FCA also empowers users to explore the hidden rule patterns present in a formal context, and we present some fundamental aspects of these subsequently; a code sketch of nominal scaling follows first.
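A minimal sketch of plain nominal scaling: every (attribute, value) pair of the many-valued context becomes one binary attribute of the derived context. The two rows below are illustrative stand-ins for Table 2 (the values follow the geometry of the solids; the exact layout of the original table is not reproduced here):

```python
mv = {  # object -> {many-valued attribute: value}
    "Tetrahedron": {"corners": 4, "edges": 6,  "facets": 4},
    "Cube":        {"corners": 8, "edges": 12, "facets": 6},
}

def nominal_scale(mv):
    # object g receives the binary attribute "m=w" exactly when m(g) = w
    return {g: {f"{m}={w}" for m, w in row.items()} for g, row in mv.items()}

derived = nominal_scale(mv)
print(sorted(derived["Cube"]))  # ['corners=8', 'edges=12', 'facets=6']
```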
Attribute implications

The quest to understand dependencies between attributes leads to the study of attribute exploration in contexts. Attribute logic comprises the underlying rules between the sets of attributes of a context, and attribute implications portray the data dependencies. For example, the following are attribute implications:

• Every number divisible by 2 and 5 is also divisible by 10.
• Every patient with the symptoms headache and fever also gets the symptom vomiting.

From the attribute hierarchy of concept lattices, we infer that in any intent the attributes always occur along with those above them. This mathematical property of lattices paves the way to another broad area of knowledge discovery in FCA, viz., 'attribute exploration'. Let us explore some of its basics.

According to mathematical logic, an implication X → Y is a logical statement relating a set of formulas X with another set of formulas Y such that Y is a logical consequence of X. The implication X → Y literally means 'if X then Y' and hence can be thought of as an 'if-then' dependency between attributes. From this angle, implication formulas are also known as 'functional/attribute dependency (AD) formulas' or association rules. In view of this, the study of rules or implications, viz., attribute exploration in FCA, is also referred to as 'association rule mining'.

In the treatment of formal contexts, for any two attribute subsets X, Y ⊆ M of a context (G, M, I), an implication of the form X → Y means that the set of objects possessing all attributes in X also possesses all attributes in Y. The attribute sets X and Y are respectively referred to as the 'premise'/'antecedent' and the 'conclusion'/'consequent'. Some contexts contain a huge set of objects against a relatively small set of attributes, and hence deriving all the concepts would be overkill; in such cases the concept lattices can be conveniently inferred from the attribute logic. Sometimes attribute exploration is the only alternative knowledge discovery technique, instead of concept exploration, for handling several complexities of FCA. For example, a context may be huge or even infinite in size; sometimes contexts come with 'unknown objects'. It may therefore not be possible to explore the entire set of formal concepts and thereby obtain the corresponding concept lattice in its entirety. In such cases the use of AD formulas helps us determine the 'typical' sets of objects or attributes (with common properties) of the context. Some authors have derived the typical set of objects from such contexts by the use of 'domain expert/background knowledge' (Belohlavek & Vychodil, 2009; Belohlavek & Macko, 2011; Dias & Vieira, 2010; Burmeister, 2003; Ganter, 1999; Groh & Eklund, 1999; Sumangali & Kumar, 2014). We next illustrate attribute exploration in FCA.

Let us consider the simple context K = (D60, D60, /) shown in Table 7, where D60 = {1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60} is the set of divisors of the number 60 and / is the divisibility relation; the sketch below builds this context programmatically.
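As noted above, the divisors context is small enough to build programmatically; a minimal sketch:

```python
# object n has attribute d exactly when d divides n
D60 = [d for d in range(1, 61) if 60 % d == 0]
I = {n: {d for d in D60 if n % d == 0} for n in D60}

print(sorted(I[12]))  # [1, 2, 3, 4, 6, 12]
```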
Table 7 (Formal context of the divisors of 60): both the objects and the attributes are D60, and the object n has the attribute d exactly when d divides n. By observing this context, one can easily infer the existence of the implication {2, 3} → {6}, since all the objects (numbers) having the attributes (divisors) 2 and 3 also have the attribute (divisor) 6. In this case the converse implication, viz., {6} → {2, 3}, also holds. Note that since the divisor 1 is present with all objects it is a redundant attribute and hence can be ignored. Not all converse implications are valid: for example, the converse of the implication {4} → {2}, namely {2} → {4}, does not hold, since the object 6 has the attribute 2 but not the attribute 4. Perhaps all the implications of the divisors context are easy to understand because the logical division relation is familiar to us, but in general it is not always possible to examine the validity of implication formulas directly by observing the context. To this end, the following proposition helps us verify the validity of implications.

An implication X → Y holds in a context (G, M, I) if and only if Y ⊆ X↓↑, which is equivalent to X↓ ⊆ Y↓. Furthermore, it is then valid in the set of all intents of the formal concepts of B(G, M, I).

In the context shown in Table 7, consider the possibility of the implication {2, 5} → {10}, and let us validate it using the above proposition. Here X = {2, 5}, so X↓ = {10, 20, 30, 60} and X↓↑ = {1, 2, 5, 10}, which contains Y = {10}; hence the proposition holds good. Similarly, one can verify the validity of further implications. The use of the DG basis (discussed subsequently) yields the set of implications of the divisors context of 60 shown in Table 8.

From the perspective of data mining, a formal context (G, M, I) is replaced by (T, I, R), whose symbols stand for transactions (objects), itemsets (attributes) and relations (the incidence relation) respectively. Any subset of k attributes is called a 'k-itemset', and an 'intent' is referred to as a 'closed itemset'. Detailed discussions of the discovery of association rules in data mining can be found in (Agrawal, Imielinski, & Swami, 1993; Agrawal & Srikant, 1994). The following measures are often used in the mining of association rules. The support of an itemset X is the fraction of objects (transactions) possessing all the attributes of X, i.e. supp(X) = |X↓| / |G|. An itemset is said to be a frequent itemset if its support is greater than or equal to some user-specified threshold value. For any implication/rule X → Y, its degree of association is measured using the support and confidence measures, defined as follows:

Support: supp(X → Y) = |(X ∪ Y)↓| / |G|;
Confidence: conf(X → Y) = |(X ∪ Y)↓| / |X↓|.

The implications holding with confidence 100% form the exact basis, while the rules with confidence below 100% form the approximate basis (Stumme, 2002; Zhang & Wu, 2011). Implications obey the Armstrong rules, namely reflexivity, augmentation and transitivity. A DG (Duquenne-Guigues) basis is a minimal subset of implications/rules from which all implications holding in the context can be derived using the Armstrong rules. The main advantage of the DG basis of attribute implications is that it contains the smallest possible number of implications among all bases of implications holding in the context. In this article we treat the implications derived from the DG basis.

Though the determination of all the implications of a context may seem an easy task, it is not so in general, owing to the huge size of the context and sometimes of the set of implications as well. For any formal context, its concept lattice and set of implications can be produced with software tools; one such tool, developed by Dr. Serhiy Yevtushenko, is given in (Yevtushenko, 2000).

In the next section we discuss the scalability issues in FCA and briefly review some of the articles with this interest.
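The validity test of the proposition and the support/confidence measures translate directly into code. A minimal sketch, using the divisors context `I` built in the previous snippet:

```python
def down(B):                     # B↓: objects having every attribute in B
    return {g for g, atts in I.items() if B <= atts}

def holds(X, Y):                 # X -> Y holds iff X↓ ⊆ Y↓
    return down(X) <= down(Y)

def support(X, Y):
    return len(down(X | Y)) / len(I)

def confidence(X, Y):
    d = down(X)
    return len(down(X | Y)) / len(d) if d else 1.0

print(holds({2, 3}, {6}), holds({6}, {2, 3}))  # True True
print(holds({2}, {4}))       # False: 6 has attribute 2 but not attribute 4
print(confidence({2}, {4}))  # 0.5: of the 8 multiples of 2, four are multiples of 4
```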
Scalability issue in FCA and its improvements

Though FCA is considered an important formalism for representing, extracting and analysing any information system, it faces a few problems which need to be addressed. Contexts are in general huge and complicated and contain much redundant knowledge. A main problem identified in practical applications of FCA is therefore that the computational cost of processing the information system with FCA is high and the visualisation of the lattice structure is difficult to perceive. This complexity issue arises from the scalability of FCA.

The number of formal concepts grows exponentially with the size of the context, and counting them is computationally #P-complete (Kuznetsov, 2001). In addition, the number of implications grows exponentially as the attribute size of a formal context increases, and this problem is computationally #P-hard (Kuznetsov, 2004). At ICFCA 2006 (the International Conference on FCA), the handling of large contexts was discussed as an open problem, and after this conference several researchers concentrated on the scalability issues of FCA.

The literature describes a variety of approaches to controlling the complexity and size of contexts, concepts, concept lattices and rules. Popular research methods for improving the scalability of FCA include: conceptual scaling for many-valued contexts, matrix decompositions, iceberg concept lattices, clustering approaches, computing granular concepts, concept similarity indices, objective functions, attribute reduction, other filtration strategies, etc.

Recently, Dias and Vieira (2015) classified concept lattice reduction techniques into three classes. In the first class of reduction techniques, redundant information is removed from the context, thereby obtaining a minimal concept lattice; this class is useful when the context contains much redundant knowledge. The second class of reduction methods is the simplification of contexts/concept lattices, which is useful for identifying the most important aspects of a context/concept lattice. The third class of reduction techniques is the selection of formal concepts or objects/attributes; when the context possesses some standard applicable principles, this class is the most useful for obtaining meaningful information.

We next summarize some of the improvements reported in the literature on scalability issues under the stated three categories, as in Table 9 (Table 9: Some important contributions on FCA scalability issues), describing the contribution of each work briefly.

Redundant information removal / context pre-processing

Ganter & Wille (1999): The authors obtained the clarified context by removing reducible objects and attributes; the resulting concept lattice is isomorphic to the original one.

Wu, Leung, & Mi (2009): The granular structure of concept lattices, with application to knowledge reduction in formal concept analysis, is examined. Information granules and their properties in a formal context are first discussed; the concepts of a granular consistent set and of granular reducts in the formal context are then introduced.

Wei & Qi (2010): The relation between the reduction methods using concept lattices and those using rough sets is discussed on the basis of the classical formal context. The method unravels the relationship between these two theories.
Pei & Mi (2011): The authors reduced the attributes in a decision formal context based on a homomorphism-consistent set from the concept lattice.

Medina (2012): Attribute reduction in three frameworks, namely formal, object-oriented and property-oriented concept lattices, is studied. Irrespective of the framework, it is found that the attributes can be classified into three levels of necessity, and at any level the attribute reducts are identical.

Li, Mei, Kumar, & Zhang (2013): The authors proposed a framework for knowledge reduction from a decision formal context, using the idea of rule acquisition to discover a new set of non-redundant decision rules.

Li, Mei, & Lv (2013): This article concentrates on some of the issues in incomplete decision contexts, such as approximate concept construction, rule acquisition and knowledge reduction. A method is proposed to build an approximate concept lattice from an incomplete context. The notion of an approximate decision rule is defined, and a method is developed to extract non-redundant approximate decision rules from an incomplete decision context. These rules are further reduced by constructing a discernibility matrix and its associated Boolean function.

Li & Wang (2016): This paper deals with knowledge discovery in incomplete contexts. It concentrates on two issues, namely concept determination with three-way decisions and attribute reduction in incomplete contexts. The notions of acceptance, rejection and non-commitment are used in the formulation of three-way decisions.

Xu & Li (2016): The important task of granular computing (GrC) is to represent, construct and process information granules. The authors propose a novel GrC method using an FCA description of information granules; the method organizes arbitrary fuzzy information granules into necessary and sufficient fuzzy information granules, and is presented along with an algorithm.

Qian, Wei, & Qi (2017): A three-way concept lattice of a given formal context is proposed. Type-I and Type-II combinatorial contexts are constructed from the original and complementary formal contexts; from these two contexts, three-way concepts are constructed by two-way operators, and the relationships between three-way concept lattices and classical concept lattices are then obtained.

Other authors have studied the behaviour of concept lattices reduced using SVD and NMF matrix-decomposition techniques, with a further focus on rule reduction after context compression.

Simplification of contexts / concept lattices

Dias & Vieira (2010): Junction based on objects similarity (JBOS) uses background knowledge to replace similar objects by representative elements according to a certain degree of similarity.

Kumar & Srinivas (2010); Kumar (2012): The size of the concept lattices is reduced using fuzzy k-means clustering (FKM). The context matrix is reduced, and quotient lattices are obtained using equivalence relations derived by means of FKM clustering, in which each record can belong to more than one cluster and a set of membership levels is associated with each element.
The same technique has been adopted in association rule mining over concept lattices in (Kumar, 2012) for a healthcare item set.

Kauer & Krupka (2014): Removing incidence relations from the formal context also controls the complexity of the concept lattice.

Kumar, Dias, & Vieira (2015): The original context is compressed based on non-negative matrix factorization (NMF). The context matrix is decomposed using NMF, and a formal context is obtained using a threshold value. The non-negativity constraint suits the context well, as attribute values are always non-negative; NMF permits only additive, not subtractive, combinations of the original vectors.

Li, Shao, & Wu (2017): The authors introduced three-way decision theory, viz. acceptance, rejection and non-commitment, into FCA. An axiomatic approach is proposed to generalize three-way concept learning through granular computing. The authors also studied concept lattices in fuzzy environments: the fuzziness of a many-valued context is analysed by transforming it into a fuzzy formal context with fuzzy formal concepts. They reduced the number of fuzzy formal concepts by simplifying the corresponding fuzzy concept lattice structure, presenting an algorithm for the method, and further introduced the notion of a bipolar fuzzy setting in FCA, devising a method for investigating bipolar fuzzy formal concepts and producing a lattice representation using bipolar fuzzy graphs.

Selection / concept filtration

Stumme (2002): Large databases can be analysed using the iceberg lattices introduced by Stumme (2002), which use the 'support' measure. The main drawback of this approach is that the iceberg concept lattice represents only the uppermost part of the concept lattice; as such, it may not extract all the concepts of a large context.

Belohlávek, Sklenar, & Zacpal (2004): The authors proposed a method that reduces the number of concepts using certain constraints derived from attribute dependency formulas (ADF), which are supplied as additional input along with the formal context. The set of concepts compatible with the given set of ADFs is retained as the important concepts.

Other authors have reduced the dimensionality of the concept lattice using equivalence classes of objects in the process of information retrieval, adopting a matrix reduction technique; in these works the selection of formal concepts is based on a notion of distance or similarity, with equivalence classes and the similarity of objects or attributes used to select the important concepts.

Belohlavek & Macko (2011): A weight is assigned to each attribute to express its relevance, and the formal concepts considered relevant are then selected. To facilitate the application of weights, equal weights are assigned to attributes derived from many-valued attributes. The importance of a formal concept is measured by the sum of the weights of the attributes of its intent divided by the cardinality of the intent.

Li, Li, & He (2014): A concept lattice arising from incomplete contexts is compressed using k-medoids clustering. In this process, accuracy and similarity measures of the approximate concepts are obtained, then k-medoids clustering is performed and the concept lattice is compressed.

Singh, Cherukuri, & Li (2015); Sumangali, Kumar, & Li (2017): A few studies have recently utilized the notion of entropy-based FCA.
Singh et al. (2015) concentrated on decreasing the number of formal concepts in FCA with fuzzy attributes using entropy; the number of fuzzy formal concepts is reduced at a chosen granulation of the entropy-based attribute intent weight.

Singh & Kumar (2016): Recently, the authors concentrated on reducing a concept lattice using different subsets of attributes as information granules.

Pernelle et al. (2002); Soldano et al. (2010): Alongside iceberg concept lattices, another type of lattice, viz. alpha concept lattices, was introduced by these authors. Some class constraints are constructed in a formal context with attributes, and the resulting concept lattice is known as an alpha concept lattice (Pernelle et al., 2002). Soldano et al. (2010) discussed the construction of alpha lattices: the extent of a term in an alpha lattice is restricted according to constraints based on an a priori categorization of instances into classes, and on a degree α, which results in a smaller lattice; an unrestricted lattice results in an iceberg concept lattice, i.e., one having only frequent formal concepts.

Forge (2010): The authors determine attribute dependency (AD) formulas from background knowledge; those concepts which do not obey these AD formulas are removed.

Conclusion

In this paper we have presented an overview of the foundations of FCA and its historical growth, to satisfy the beginner's thirst for FCA. The terms and notions relevant to FCA have been recalled and illustrated by means of examples. FCA extracts knowledge from any data par excellence in three dimensions, viz., conceptual clusters, lattices (graphical representation) and association rules. The main advantages of the use of FCA are its simplicity, its diagrammatic representation, and the hierarchical overview of the underlying patterns and rules of the formal context. The common issue arising in FCA is scalability, owing to huge-size contexts; we have reviewed some of the recent works on scalability.
Efficacy of Olanzapine Combined Therapy for Patients Receiving Highly Emetogenic Chemotherapy Resistant to Standard Antiemetic Therapy

Objective. Olanzapine has been proved effective for chemotherapy-induced nausea and vomiting (CINV), but its efficacy in combination with standard antiemetic therapy is unknown. The purpose of this study is to demonstrate the preventive effect of olanzapine against CINV caused by highly emetogenic chemotherapy when used with standard antiemetic therapy. Method. Gynecologic cancer patients receiving cisplatin-based chemotherapy who had grade 2 or 3 nausea in the overall phase (0-120 h after chemotherapy) despite standard therapy were assigned to this study. From the cycles following those in which patients developed grade 2 or 3 nausea, they received olanzapine with standard therapy: 5 mg oral olanzapine was administered for 7 days, starting the day before chemotherapy. The effectiveness of preventive administration of olanzapine was evaluated retrospectively. The primary endpoint was the nausea control rate (grade 0 or 1) with olanzapine. Results. Fifty patients were evaluable. The nausea control rate with olanzapine improved from 58% to 98% in the acute phase (0-24 h after chemotherapy) and from 2% to 94% in the delayed phase (24-120 h after chemotherapy). In the overall phase, the nausea control rate improved from 0% to 92%, and this was statistically significant (P < 0.001). Conclusion. Preventive use of olanzapine combined with standard antiemetic therapy showed improved control of refractory nausea.

Introduction

Chemotherapy-induced nausea and vomiting (CINV) is one of the most harmful adverse effects of treatment, even though there has been significant progress in antiemetics. CINV can bring anorexia, malnutrition, dehydration and anxiety toward chemotherapy to patients, so it is important to reduce the symptoms of CINV as much as possible. The National Comprehensive Cancer Network (NCCN), the American Society of Clinical Oncology (ASCO) and the Multinational Association of Supportive Care in Cancer (MASCC) have developed evidence-based antiemetic guidelines. In Japan, the first guideline for the proper use of antiemetics was provided by the Japan Society of Clinical Oncology in 2010, based on the guidelines above. These guidelines recommend triple therapy consisting of a 5-HT3 receptor antagonist, an NK-1 receptor antagonist and dexamethasone as the standard antiemetic therapy for highly emetogenic chemotherapy (HEC) [1,2]. Multiple reports have proved the effect of this therapy [3][4][5][6]. The complete response rate (no vomiting, no rescue, regardless of nausea) to HEC is reported to be around 80% in the acute phase (0-24 h after chemotherapy) and 60-70% in the delayed phase (24-120 h after chemotherapy). However, no effective therapy has been reported for CINV that is resistant to standard antiemetics. In the guidelines above, olanzapine, an atypical antipsychotic, is mentioned as a usable agent for CINV refractory to standard antiemetic therapy. Olanzapine is reported to be equally or more effective for CINV compared with aprepitant and dexamethasone [7,8]. Moreover, olanzapine has been reported as an effective and tolerable agent which can be used as salvage therapy for CINV refractory to standard therapy [9]. However, the preventive administration of olanzapine for CINV refractory to standard therapy has not so far been proved effective or safe.
In this study, we administered olanzapine with standard antiemetic therapy as preventive therapy to patients treated with HEC containing cisplatin who had grade 2 or 3 nausea (Common Terminology Criteria for Adverse Events, CTCAE ver. 4.0) in the overall phase despite receiving standard antiemetic therapy. The control of nausea and vomiting was evaluated retrospectively.

Patients. Fifty patients were assigned to this study. They were gynecological cancer patients who were treated with an HEC regimen containing cisplatin and had grade 2 or 3 nausea in the overall phase (0-120 h after chemotherapy) despite receiving standard antiemetic therapy. There were 32 patients with grade 3 and 18 patients with grade 2 nausea. All patients were informed about the drug, and consent for the use of olanzapine was obtained. Since olanzapine is contraindicated in patients with diabetes mellitus, blood sugar levels and hemoglobin A1c were checked to confirm the absence of glucose intolerance. Regimens with less than 50 mg/m² cisplatin were included in this study because the ASCO, MASCC and Japanese guidelines include these regimens in HEC, although the NCCN classifies them as moderately emetogenic chemotherapy (MEC). We conducted this study in accordance with ethical principles based on the Declaration of Helsinki. All patient data and information were processed with regard to privacy, and patients were not identifiable.

Treatment Plans. As standard antiemetic therapy, a 5-HT3 receptor antagonist (palonosetron 0.75 mg or granisetron 3 mg on day 1), an NK-1 receptor antagonist (aprepitant 125 mg on day 1, 80 mg on days 2-3) and dexamethasone (9.9 mg on day 1, 6.6 mg on days 2-4) were administered. Olanzapine was given with standard antiemetic therapy from the cycles following those in which patients developed grade 2 or 3 nausea in the overall phase despite standard antiemetic therapy. Oral olanzapine 5 mg was given for 7 days, starting the day before cisplatin was administered.

Parameters Assessed. The grades of nausea through the acute, delayed and overall phases were evaluated from the medical records written by doctors, nurses and pharmacists, using CTCAE ver. 4.0. The primary endpoint was the nausea control rate, defined as the rate of patients whose nausea was controlled within grade 0 or 1. The secondary endpoints were the no-vomiting rate (the rate of patients who did not vomit at all), the complete response rate (no vomiting, no rescue, regardless of nausea), the complete control rate (no vomiting, no rescue, and nausea of grade 0 or 1) and the total control rate (no vomiting, no rescue, and no nausea). We compared cycles containing only standard antiemetic therapy with cycles containing both standard antiemetic therapy and olanzapine. Adverse effects and laboratory data were evaluated based on CTCAE ver. 4.0.

Statistical Analysis. We compared the cycles in which patients developed grade 2 or 3 nausea with standard therapy alone against the cycles in which they first received olanzapine with standard therapy. We used the McNemar test to evaluate the improvement in each parameter. P < 0.05 was considered statistically significant in this study.
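The McNemar test used here compares paired binary outcomes within the same patients. The sketch below is an illustrative exact (binomial) form of the test, applied to the overall-phase nausea control counts reported in the Results (0/50 controlled without olanzapine, 46/50 with it, giving 46 discordant pairs in one direction and none in the other); the paper does not state which variant of the test was actually computed.

```python
from math import comb

def mcnemar_exact(b, c):
    """Two-sided exact McNemar test: a binomial test on the discordant pairs.
    b = pairs improving with olanzapine, c = pairs worsening."""
    n, k = b + c, min(b, c)
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, p)

print(mcnemar_exact(46, 0))  # ~2.8e-14, consistent with the reported P < 0.001
```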
The cycle in which patients most often developed grade 2 or 3 nausea despite standard antiemetic therapy was cycle 1 (29 cases). The changes in nausea grades with the use of olanzapine are shown in Table 2. No patient experienced worse nausea; in most patients, nausea improved after they started olanzapine. The nausea control rate is shown in Table 3. With olanzapine, the nausea control rate improved from 58% to 98% in the acute phase and from 2% to 94% in the delayed phase. In the overall phase, the nausea control rate improved from 0% to 92%, a statistically significant change (P < 0.001). The no vomiting rate, no rescue therapy rate, complete response rate, complete control rate, and total control rate of cycles before and with olanzapine are shown in Table 4. In the cycles in which patients developed grade 2 or 3 nausea, 19 patients vomited and 49 received rescue therapy in the overall phase; as a result, the complete response rate in the overall phase without olanzapine was only 2%. The no vomiting rates of cycles using olanzapine in the acute, delayed, and overall phases were 100%, 96%, and 96%, respectively; in each phase, the improvement was statistically significant (P < 0.05). The no rescue therapy rates of cycles using olanzapine in the acute, delayed, and overall phases were 98%, 82%, and 82%, respectively; in each phase, the improvement was statistically significant (P < 0.001). The complete response rate and complete control rate of cycles using olanzapine were 82–98% in all phases, significantly improved compared with cycles without olanzapine (P < 0.001). The total control rate with olanzapine was 86% in the acute phase, 42% in the delayed phase, and 40% in the overall phase, and all rates improved significantly (P < 0.001). As for adverse effects, grade 1 or 2 drowsiness was seen in 26 patients: 18 patients with grade 1 (36%) and 8 with grade 2 (16%). Six patients had to reduce the dose of olanzapine to 2.5 mg because of grade 2 drowsiness, but no patient had to stop taking it. There were no grade 3–4 adverse effects. Forty-nine of the 50 patients wished to continue olanzapine and used it through all subsequent cycles of chemotherapy; one patient had to stop receiving chemotherapy because of disease progression. Discussion As far as we know, this is the first study to report the preventive effect of olanzapine used with standard antiemetic therapy against CINV caused by highly emetogenic chemotherapy. First, the most important point in this study is that olanzapine combined with standard antiemetic therapy was effective for preventing nausea and vomiting in patients with CINV resistant to standard antiemetic therapy. Although olanzapine was given to patients who had grade 2 or 3 nausea despite standard antiemetic therapy, the combined use of olanzapine improved the nausea control rate to more than 90% in these patients, and the improvement was statistically significant. This shows that preventive administration of olanzapine may become a new, effective option for CINV resistant to standard antiemetic therapy. Second, nausea and vomiting were controlled in the delayed phase at almost the same level as in the acute phase, although these symptoms are known to be more difficult to control in the delayed phase. It is very interesting that the nausea control rate was 94% in the delayed phase while it was 98% in the acute phase.
It is possible that olanzapine acts on the mechanism of delayed-phase nausea that is refractory to standard antiemetic therapy. Even in patients who had had grade 2 or 3 nausea, the complete response rate was 82% in the delayed phase because both the nausea control rate and the vomiting control rate improved significantly. The complete response rate of standard antiemetic therapy is reported to be 80% in the acute phase and 60–70% in the delayed phase [3][4][5][6], which are considered relatively good results. However, we must pay attention to the fact that these studies included both male and female patients. Results in cohorts consisting only of female patients are worse than in mixed cohorts because women have a higher risk of CINV; in particular, the complete response rate in the delayed phase is as low as 50% in gynecologic cancer patients [10]. In a phase III randomized controlled trial that compared the effect of antiemetic therapy for cisplatin-based chemotherapy between the sexes, there was no difference between male and female patients in the first cycle when they were treated with triple therapy: the percentage of patients with no emesis was 70% in both sexes. However, the difference between the sexes grew as chemotherapy continued; the percentage of female patients with no emesis in the sixth course of chemotherapy was only 44%, whereas 60% of male patients had no emesis [11]. Therefore, female patients need stronger antiemetic therapy, which is why we expected olanzapine to be effective. Olanzapine is classified as an atypical antipsychotic and is used to treat schizophrenia and bipolar disorders. Olanzapine is called a MARTA (multi-acting receptor-targeted antipsychotic), and its main characteristic is that it is an antagonist of multiple chemoreceptors, such as dopamine (D1, D2, D3, D4, and D5), serotonin (5-HT2a, 5-HT2c, 5-HT3, and 5-HT6), histamine (H1), adrenaline (α1), and acetylcholine-muscarine (Achm1–Achm5) receptors [12]. Olanzapine is not originally an antiemetic agent, but owing to its strong antiemetic effect, there are many studies reporting its efficacy against CINV, opioid-induced nausea, and nausea and vomiting in terminal-stage patients with malignant tumors. Acetylcholine-muscarine (Achm), dopamine (D2), histamine (H1), serotonin (5-HT2, 5-HT3), and neurokinin-1 (NK-1) are known as the main neurotransmitters related to CINV, and chemoreceptors for these transmitters are found in the central nervous system: H1 and Achm in the vestibular apparatus; 5-HT3, NK-1, and D2 in the chemoreceptor trigger zone (CTZ); and 5-HT2, 5-HT3, NK-1, D2, H1, and Achm in the vomiting center. The network between these receptors is thought to cause nausea and vomiting [13]. Olanzapine can antagonize the four receptor classes other than the NK-1 receptor and acts on all of the vestibular apparatus, the CTZ, and the vomiting center. Theoretically, by using both standard antiemetic therapy and olanzapine, all chemoreceptors affecting CINV can be blocked, because olanzapine antagonizes the chemoreceptors that cannot be blocked by standard antiemetic therapy alone. Also, olanzapine is known to have fewer adverse effects, such as extrapyramidal symptoms and akathisia, compared with conventional antipsychotics (prochlorperazine, haloperidol, etc.) and metoclopramide, which have been used for CINV [14]. There are several phase III randomized controlled trials on the efficacy of olanzapine for CINV.
In a study that compared olanzapine with aprepitant in patients receiving cisplatin-containing regimens or AC therapy (doxorubicin, cyclophosphamide), complete control rates were almost the same in both the acute and delayed phases, but the rate of patients who had no nausea at all was 69% in the olanzapine group versus 38% in the aprepitant group [7]. Olanzapine was therefore shown to be comparable to, or even more effective than, aprepitant. A study that compared olanzapine with dexamethasone in patients receiving HEC or MEC found almost the same complete control rates in the acute phase in both groups; in the delayed phase, however, the olanzapine group had significantly better complete response rates with both HEC and MEC regimens (HEC: nausea 69% versus 30%, vomiting 78% versus 56%; MEC: nausea 83% versus 58%, vomiting 89% versus 75%) [8]. In a study that compared olanzapine with metoclopramide as salvage therapy for patients with CINV resistant to standard antiemetic therapy, the rates of patients without vomiting were 70% versus 31%, and the rates of those without nausea were 68% versus 23%, within 72 hours after salvage; olanzapine was thus shown to be the stronger salvage agent [9]. There were no grade 3 or 4 adverse effects in these studies. This study has several limitations. First, it is a retrospective before-after comparison with only a small number of patients; a prospective study should be conducted in the future. Second, the evaluation of nausea was done by doctors, nurses, and pharmacists based on objective indicators. We believe there is no large divergence from patients' self-evaluations, but to evaluate the true therapeutic effect, an evaluation tool such as a patient diary is needed for subjective self-evaluation. Third, we do not yet know the optimal dose of olanzapine. The rate of drowsiness was as high as 52% in this study, and six patients had their olanzapine reduced to 2.5 mg; meanwhile, one patient had to take 10 mg of olanzapine because of severe nausea. We have to determine the optimal dose of olanzapine used with standard antiemetic therapy. Finally, this study included only gynecologic cancer patients; we also have to verify whether this approach is effective for patients receiving regimens for other kinds of malignant tumors. We now have an ongoing prospective phase II trial to establish the efficacy and safety of olanzapine used with standard antiemetic therapy against CINV caused by HEC. Conclusion We treated patients receiving cisplatin-containing HEC regimens who had grade 2 or 3 nausea despite standard antiemetic therapy with 5 mg of olanzapine. Olanzapine combined with standard antiemetic therapy improved the nausea control rate to more than 90%, a statistically significant improvement. Grade 1 or 2 drowsiness occurred in half of the patients but was manageable. These results suggest that olanzapine used as a preventive antiemetic combined with the recommended standard antiemetic therapy could be a useful antiemetic regimen and could improve the quality of life of patients with cancer who are receiving chemotherapy.
2016-05-12T22:15:10.714Z
2015-09-03T00:00:00.000
{ "year": 2015, "sha1": "12d08c338b894ed2052ce2cc663f3be1f81cccdf", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2015/956785", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "224495d02d61f1cd3efcec08470a56248489b44f", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
86159980
pes2o/s2orc
v3-fos-license
Early events in lymphocyte transformation by phytohemagglutinin. I. DNA-dependent RNA polymerase activities in isolated lymphocyte nuclei. The DNA-dependent RNA polymerase activities of isolated nuclei from lymphocytes were examined after stimulation with phytohemagglutinin (PHA). The nuclear fraction was prepared with Mg(++) or Mn(++) to distinguish between polymerase I (nucleolar) and polymerase II (nucleoplasmic). Distinction between polymerases II and III was obtained by the addition of alpha-amanitin to the reaction mixture. The results indicated that within 15 min after exposure to PHA the activity of polymerase I increased. Polymerase II activity increased after 1 hr. The enhancement was linear for 6 hr and then leveled off for the subsequent 48 hr. A small increase in polymerase III activity was observed at 48 hr. Inhibition of protein synthesis at the time of exposure to PHA did not prevent the increase in activities during the initial 6 hr. These results imply that the initial increase in enzymatic activities is dependent upon preexisting polymerase molecules and/or factors. INTRODUCTION Lymphocyte transformation by phytohemagglutinin (PHA) and other mitogenic agents has been widely studied as a model system in which a resting cell is stimulated to enlarge and divide. The addition of PHA to lymphocyte cultures generates several biochemical changes in the cell. RNA and protein synthesis (1)(2)(3), histone acetylation (2), phosphorylation of nuclear proteins (4), the incorporation of phosphate into phosphatidylinositol (5) and of glycerol, glucose, and choline into cell lipids (6), and the levels of cyclic adenosine monophosphate (AMP) (7) are stimulated within the first hour after addition of PHA. A concomitant increase in cell permeability to RNA, protein, and lipid precursors (8,9), as well as in K+ transport (10), has been reported, suggesting that some alteration in the cell membrane precedes lymphocyte transformation. Although an increase in RNA polymerase activity in lymphocytes treated with PHA for 18 hr has been demonstrated (11), no information is available either about earlier times after treatment or about the effects on each of the individual polymerases found in eukaryotic cells (12). Taking advantage of the fact that it is now possible to select conditions for assaying the different DNA-dependent RNA polymerase activities in isolated nuclei (13,14), these activities were studied in nuclei isolated from human peripheral blood lymphocytes at different intervals after PHA treatment. It was found that soon after PHA stimulation there was an elevation in RNA polymerase I activity (12,13), followed by an increase in polymerase II. MATERIALS AND METHODS Human peripheral lymphocytes from normal donors were used throughout all experiments. Detailed descriptions of the nutrient media, preparation of lymphocyte cultures, and experimental methods have been published (2,8). Isolation of the Nuclear Fractions Approximately 1 × 10^8 cells were washed with phosphate-buffered saline (PBS) and then resuspended in 1 ml of 0.01 M Tris-HCl buffer (Schwarz BioResearch Inc., Orangeburg, N.Y./Mann Research Labs Inc., New York), pH 7.8, with 1 mM MgCl2 or 1 mM MnCl2 and 10 mM KCl (Tris-saline). Swelling of cells was allowed to proceed for 10 min at 4°C. Triton X-100 was added at a final concentration of 0.5%, the cells were disrupted by 10 strokes of a tight-fitting pestle of a Dounce homogenizer, and the homogenate was centrifuged at 800g for 3 min.
The pellet containing the crude nuclear fraction was resuspended in 1 ml of Tris-saline with 0.5% Triton X-100 and 0.1% sodium deoxycholate (DOC). The nuclear fraction was centrifuged at 800g for 3 min and washed with Tris-saline containing 0.5% Triton X-100 and 0.1% DOC until it appeared to be free of cytoplasmic debris when examined under the phase-contrast microscope. The final nuclear pellet was resuspended in the medium to be used for the RNA polymerase assay. RNA Polymerase Assay The nuclear RNA polymerases were assayed by the incorporation of labeled nucleotides into RNA. Portions of the nuclear fraction containing 20–30 μg of DNA were added to the assay mixture containing the different ingredients appropriate to the particular activity being determined. The nuclear fraction prepared in Mg++ was used to assay RNA polymerase I activity (16). The reaction was allowed to proceed for 10 min at 37°C and was then stopped by adding 10 μmoles of the unlabeled nucleotide and 5 ml of cold 10% trichloroacetic acid (TCA) containing 0.05 M sodium pyrophosphate. The precipitates were collected on Millipore filters (Millipore Corp., Bedford, Mass.), washed twice with 15 ml of 10% TCA, and the radioactivity was determined as described elsewhere (15). In the experiments in which (NH4)2SO4 was added to the incubation mixture at low (0.03 M) and high (0.4 M) ionic strength, both Mg++ and Mn++ were present at the concentrations indicated above. To determine the relative incorporation of each nucleotide into RNA, synthesis was allowed to occur using each labeled nucleotide independently under the assay conditions already described. Each assay was carried out in triplicate, and the entire experiment was repeated on three different lymphocyte preparations, i.e., nine determinations in all. Nuclear DNA was determined by Burton's procedure (17). DNA-Dependent RNA Polymerase Activity in Lymphocyte Nuclei at Different Times after Incubation with Phytohemagglutinin The requirements for the DNA-dependent RNA polymerase activities in isolated lymphocyte nuclei were found to be similar to those described for rat liver nuclei (13,16). The reactions required either Mg++ or Mn++, the presence of the four nucleotides, 2-mercaptoethanol, and the addition of Na or NH4 salts at low ionic strength. The reactions were inhibited by actinomycin D, RNase, and DNase. They were linear for 15 min and then reached a plateau. Nuclear samples were prepared from lymphocytes after exposure to PHA and assayed for RNA polymerase activity. Addition of 10 μg of α-amanitin to the assay mixture of nuclei prepared with either Mg++ or Mn++ allows one to differentiate somewhat between the three enzymatic activities designated RNA polymerases I, II, and III (12). In the presence of Mg++ and The Base Composition of the RNA Synthesized at Different Conditions of Incubation The product synthesized by the nuclear RNA polymerases under various conditions of incubation was studied by measuring the rate of incorporation of the four respective nucleotides. The results of different experiments on the base composition of the products are summarized in Table I. In the presence of Mg++, both control and PHA-stimulated nuclei synthesized mostly GC-rich RNA, suggesting that the ribosomal genes were being transcribed. In the Mn++ mixture, A and U were preferentially incorporated into the RNA.
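The base-composition comparison just described reduces to a simple percentage calculation. Below is a minimal sketch under stated assumptions: the cpm values are placeholders (the paper's Table I counts are not reproduced here), and each list is taken to be one triplicate determination for one labeled nucleotide.

```python
# Relative nucleotide incorporation: each labeled nucleotide's mean counts,
# expressed as a percentage of the four-nucleotide total. Placeholder cpm.
from statistics import mean

cpm = {
    "AMP": [820, 845, 799],
    "GMP": [1210, 1185, 1242],
    "CMP": [1105, 1150, 1120],
    "UMP": [760, 790, 775],
}
means = {base: mean(counts) for base, counts in cpm.items()}
total = sum(means.values())
for base, m in means.items():
    print(f"{base}: {100 * m / total:.1f}% of incorporated label")
# A high combined GMP+CMP share (GC-rich product) is the signature used above
# to infer transcription of ribosomal genes in the Mg++ condition.
```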
When α-amanitin was added to the assay mixture, the base composition in the Mn++-incubated nuclei shifted towards an RNA product enriched in adenylic acid in both controls and PHA-treated cells. The possibility that a homopolymerization reaction was taking place in the isolated nuclei in the presence of α-amanitin was investigated by following the incorporation of each nucleotide in the absence of the other three in the reaction mixture. Under these conditions, only incorporation of adenylic acid was observed, but it was reduced by 50%. This result suggested that a polymer of adenylic acid was synthesized. The relationship between poly A synthesis and the nature of the RNA being transcribed as a whole is being investigated further. Effect of Puromycin on Polymerase Activities The increase in RNA polymerase I and II activities could be due to de novo synthesis of polymerase molecules (or factors), or could occur because preexisting enzyme molecules become functional. To test the first possibility, puromycin was added at the time of PHA addition, and the RNA polymerase activities were measured. In a parallel experiment, puromycin at 20 μg/ml inhibited 90% of amino acid incorporation within 1 hr (2). As can be seen in Fig. 3, the increase in RNA polymerase activities was not blocked by puromycin until 6 hr had elapsed. Therefore, the initial increase in the activities was probably due to enzyme molecules that were already present in the cell. Further evidence for that interpretation was obtained in experiments in which a high ionic strength (NH4)2SO4 concentration was used. In this condition, more template should become available for the enzymes to copy (18). It can be seen that the nuclei from control cells were highly stimulated by 0.4 M (NH4)2SO4, whereas the PHA-treated nuclei were stimulated less. This would imply that more template was available as a result of PHA treatment (Table II). Addition of calf thymus DNA to the incubation mixture slightly increased the incorporation of nucleotides 6 hr after induction with PHA. This result would suggest that at this time more polymerase molecules were free and available for binding to the template. As previously shown (see Fig. 3), at this critical period protein synthesis was required to maintain the increase in RNA synthesis (Table III). Table I. Relative nucleotide incorporation by isolated lymphocyte nuclei. DISCUSSION The results reported in this paper indicate that an increase in the activity of the DNA-dependent RNA polymerases is one of the earliest responses of the cell to the mitogenic agent. The Mg++-dependent RNA polymerase activity was stimulated immediately after the cells were exposed to PHA. On the other hand, the Mn++-dependent RNA polymerase activity was stimulated after a lag period of 1 hr. The third RNA polymerase activity, which is Mn++-dependent but not inhibited by α-amanitin, remained unstimulated by the mitogenic agent. Although protein synthesis was not required for the initial increase, it was necessary for the maintenance of the stimulated RNA polymerase activities.
These results indicate that the number of enzymatic molecules (or factors) necessary for the initial stimulation of RNA synthesis preexisted within the nonactivated lymphocytes. Three mechanisms by which PHA may increase the RNA polymerase activities can be postulated: (a) the amount of RNA polymerase is controlled by affecting the synthesis of the enzymes or some of their subunits, (b) the availability of the DNA template is altered, and (c) the function of some factor(s) necessary for the binding of the enzyme to the template is affected. The first possibility assumes that the amount of RNA polymerase is the limiting factor within the cell. However, this could be ruled out because inhibitors of protein synthesis failed to affect the initial stimulation of the enzymes. The possibility that more sites in the DNA template are available is suggested by the experiments using high ionic strength in the enzyme assay. Contradictory interpretations can be drawn from these results, since it was shown that the isolated Mg++-dependent RNA polymerase was inhibited while the isolated Mn++-dependent RNA polymerase was stimulated by high concentrations of salts (12). Nevertheless, Hirschhorn et al. (19) and B. G. T. Pogo (unpublished results) have observed increases in template activity after addition of exogenous DNA-dependent RNA polymerase to nuclei isolated from PHA-treated lymphocytes. Before the third possibility, concerning the function of some factor(s) required for the binding of the enzyme to the template, can be considered, the study of isolated enzymes from normal and stimulated lymphocytes is necessary. The results reported here indicate that RNA polymerase I was the earliest synthetic activity to be stimulated after lymphocytes were exposed to PHA. Activation of RNA polymerase I has also been demonstrated after cells were stimulated by hormones (20)(21)(22)(23), by surgical removal of a segment of the organ (24,25), or by a medium enriched in amino acids (26).
2014-10-01T00:00:00.000Z
1972-06-01T00:00:00.000
{ "year": 1972, "sha1": "bdbc9ef8bd1da4a569e45c73c81dd2abbe36abb9", "oa_license": "CCBYNCSA", "oa_url": "https://rupress.org/jcb/article-pdf/53/3/635/1386329/635.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "bdbc9ef8bd1da4a569e45c73c81dd2abbe36abb9", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
237254302
pes2o/s2orc
v3-fos-license
FILIP1L Loss Is a Driver of Aggressive Mucinous Colorectal Adenocarcinoma and Mediates Cytokinesis Defects through PFDN1 This study identifies FILIP1L as a tumor suppressor in mucinous colon cancer and demonstrates that FILIP1L loss results in aberrant stabilization of a centrosome-associated chaperone protein to drive aneuploidy and disease progression. Introduction In the United States, colorectal cancer is the third leading cause of cancer-related deaths in men and in women, and the second most common cause of cancer deaths when men and women are combined (1). Colorectal cancer is expected to cause 53,200 deaths during 2020 (1). Mucinous colorectal adenocarcinoma (MAC) is a distinct form of colorectal cancer, affecting 10% to 15% of patients. MAC is characterized by abundant mucin secretion comprising at least 50% of the tumor volume (2). Approximately half of MAC tumors are aneuploid (3)(4)(5)(6)(7), whereas the remainder are diploid and associated with microsatellite instability. Aneuploid MAC tumors were shown to be clinically more aggressive than diploid tumors (8,9). Compared with common colorectal adenocarcinoma, MAC has an entirely different molecular "signature," as well as an aberrant and aggressive metastatic pattern that is associated with poor response to treatment and worse prognosis (2). The mechanisms of MAC tumorigenesis are currently unknown. Filamin A interacting protein 1-like (FILIP1L) is a tumor suppressor, which we identified in several types of cancer, including colorectal cancer (10)(11)(12)(13). We showed that FILIP1L expression is downregulated by promoter methylation (10,12), a key mechanism of tumor suppressor downregulation in cancer. Downregulation of FILIP1L is associated with chemoresistance and worse prognosis in ovarian and colon cancer (14,15). Moreover, its expression is inversely correlated with the invasive/aggressive potential of tumors and with epithelial-to-mesenchymal transition (EMT) marker expression (11,16). Its structural homologies and centrosomal localization suggest that FILIP1L may bind to elements of the cytoskeleton, and chaperone proteins to proteasomes (11,17,18). The prefoldin 1 (PFDN1) chaperone is overexpressed in multiple cancer types and is associated with poor prognosis in colon cancer (19,20). PFDN1 participates in a multimeric prefoldin complex that facilitates proper folding of key cytoskeletal components such as actin and tubulins. Loss of PFDN1 decreases tubulin levels, thereby reducing microtubule growth and causing defects in cell division and embryonic lethality (21), although the mechanisms are unknown. Moreover, loss of PFDN1 results in mitotic spindle misorientation and mispositioning of cytokinetic furrows (21). The mechanisms regulating PFDN1 localization and levels are unknown. The spindle apparatus specifies the position of the cleavage furrow during anaphase, and the cleavage plane is positioned by microtubule-dependent mechanisms (22). Microtubules are primarily nucleated from centrosomes; thus centrosomes are critical organelles that ensure faithful mitosis and chromosome distribution to daughter cells. Centrosome abnormalities lead to mitosis/cytokinesis defects that are associated with aneuploidy (23). Centrosome abnormalities are found in virtually all cancer types and have been linked to chromosomal instability, aneuploidy, and tumorigenesis (24). Changes in the levels of several centrosomal proteins, including prefoldins, have been linked to dysregulation of centrosome function (21,(25)(26)(27)(28)(29).
Dysregulated expression of several centrosome-modulating proteins is implicated in human cancer (28)(29)(30)(31)(32). Although we have shown that FILIP1L is a versatile tumor suppressor in many types of cancer, its function has not been fully elucidated. Here we show that loss of FILIP1L increases xenograft growth in vivo, drives colonic epithelial hyperplasia in mice, and increases mucin secretion and mitotic defects in MAC. We determined that FILIP1L binds to the centrosome-localized chaperone protein, PFDN1, and regulates its level at the centrosomes through a proteasome-dependent mechanism. FILIP1L is downregulated and PFDN1 is upregulated in human MAC samples. Reduction of FILIP1L and the subsequent increase in PFDN1 levels result in mucin secretion and multinucleation, which recapitulates the central characteristics of human aneuploid MAC. Our findings suggest that downregulation of the tumor suppressor FILIP1L is a driver of the neoplastic changes in an aggressive form of MAC. Materials and Methods Cell culture and transient transfection Cells were cultured following the manufacturer's guidelines and passaged up to five times after each thawing. All cell lines were routinely tested for Mycoplasma contamination using the Universal Mycoplasma Detection Kit (ATCC, 30-1012K). HEK293 cells were transfected with equimolar amounts of control empty plasmid or plasmid encoding FILIP1L-HA and/or Flag-PFDN1 using Lipofectamine 3000 solution (Thermo Fisher Scientific) following the manufacturer's protocols. For MDCK.2 cells, using the 4D-Nucleofector system (Lonza; SE solution; program CM 113), homogeneous expression in over 90% of cells was routinely achieved. At 24 to 48 hours following transfection, transfected cells were subjected to downstream assays such as immunoprecipitation, immunoblot, and immunofluorescence staining. FILIP1L-knockdown clones of MDCK.2 cells were generated using CRISPR-Cas9 at the Genome Editing Core at Rutgers Cancer Institute of New Jersey. Cells were electroporated with pX458-C199 (gRNA sequence targeting exon 3 of FILIP1L, CCTTGCTGAAACCAGAGTTC). pX458 [pSpCas9(BB)-2A-GFP] was obtained from Addgene (#48138). After electroporation, individual cells were sorted into 96-well plates, and single cell-derived clones were genotyped by PCR using the primers TCACAGCTGATAAGTTGCTAAAGCACC and CTGCCTCATTGGTGAGCTTTGC. T7 endonuclease digestion was used to identify clones with indels. Fifty-six clones were analyzed, and Sanger sequencing of candidate clones confirmed frameshift mutations in FILIP1L clones. Although we aimed to generate knockout clones, clones demonstrating complete deletions were not found. Mouse xenograft model All use of vertebrate animals described in this study was conducted in accordance with NIH regulations and was approved by the Animal Use Committee of Rutgers University. The indicated numbers of colon cancer clones were suspended in Matrigel [Corning, #356231, 1:1 ratio (v:v)] and subcutaneously injected into 8-week-old female nude mice (Taconic, catalog no. TAC:nmrinu, RRID:IMSR_TAC:nmrinu). Tumor growth was measured for the indicated times, and tumor weights were measured after sacrifice. Xenograft tumors were fixed in 10% neutral buffered formalin and subjected to IHC analysis. Filip1l conditional knockout mice Filip1l-floxed mice were generated using CRISPR-Cas9 at the Genome Editing Core at Rutgers Cancer Institute of New Jersey.
C57BL/6J (IMSR Cat# JAX_000664, RRID:IMSR_JAX:000664) embryos were microinjected with a mixture containing Cas9 protein (IDT), an sgRNA (Millipore Sigma), and an ssODN (IDT), which contained homology arms and a loxP site. For the 5′ loxP insertion, the sgRNA CATTCTTGCCCTGTGTTAAG was used along with the 5′ loxP donor oligo CATTTTACCGAATAACCAACGTGTTAAACAGTAACTAGTAATATAGCACATGCGTAATGGCTCAAGCAAGCCACTATAACTTCGTATAGCATACATTATACGAAGTTATAACACAGGGCAAGAATGAGTAATTCAAAAAGTGCCATGGCAACAGTTATCAAG (loxP sequence in bold). For the 3′ loxP insertion, the sgRNA ATGTAATATATGCTGTAGGG was used with the 3′ loxP donor oligo GAGTTTGGAACTTTAAGTTAGCTT-

Institute of New Jersey, under our IRB exemption. Immunohistochemical staining was carried out, and a second pathologist scored the staining under blinded conditions. FILIP1L cytoplasmic staining was scored according to the staining intensity [categorized as 0 (absent), 1 (weak), 2 (moderate), or 3 (strong)] as well as the percentage of staining (0%–100%). The final expression score was calculated by multiplying the intensity by the percentage of staining, resulting in a score of 0 to 300. DNA ploidy Experimental details were followed as described previously (33). Briefly, four 60-μm-thick sections were cut from each FFPE tissue block of 16 MAC tumors. The MAC area was identified by a pathologist and manually dissected to maximize the chance of finding an aneuploid tumor population (by increasing its percentage in a background of normal diploid cells). Tissue samples were subjected to deparaffinization, rehydration, and pepsin digestion. Dissociated pellets were resuspended in 4′,6-diamidino-2-phenylindole (DAPI) solution containing 0.1% NP40 and 10% DMSO. Prior to flow cytometry, potential nuclei aggregates were further dissociated by passaging 15 times through a 26-gauge needle. Flow cytometry analysis of DAPI-labeled nuclei was performed on a Cytek Aurora 5-laser cytometer using the SpectroFlow software package version 2.2 (Cytek Biosciences). The single-nuclei population was selected using forward and side scatter; for thorough doublet exclusion, it was further gated using forward scatter-area and forward scatter-height. 1 × 10^5 gated events were collected for each sample. The percentage of nuclei with a DAPI-area signal over 4 × 10^4 was calculated. On the basis of the average percentage in diploid controls, we defined an "aneuploid tumor" as one with greater than 20% aneuploid cells (a sketch of this rule, together with the expression score above, is given below). Three-dimensional cell culture MDCK.2 cells and transduced colon cancer clones were cultured on Growth Factor-Reduced Matrigel (Corning, #356231). Cysts were routinely formed from MDCK.2 cell cultures and were fixed at days 3 and 6 for immunofluorescence analysis. Colon cancer clones mostly formed compact cell clusters, which were fixed at days 3 to 5. Time-lapse imaging HEK293 cells were transfected with plasmids encoding FILIP1L-eGFP and mCherry-PFDN1 and incubated for 24 h. Cells were then placed in the EVOS Onstage Incubator at 5% CO2, 20% O2, and 80% humidity. Fluorescent images were acquired at 20-minute intervals. Caco2 clones were incubated with SPY-595 DNA and SPY-650-tubulin fluorescent dyes (Cytoskeleton) for 1 h and placed in the EVOS Onstage Incubator. Forty random fields were selected, and fluorescent and phase-contrast images were acquired at 5-minute intervals. Images acquired over the initial 4 h were used to quantify data; longer durations showed substantial fluorescent signal bleaching. Images were acquired with an EVOS FL Auto 2 microscope (Thermo) at 20× objective magnification (z-stacks of 1.7 μm thickness). Mitotic length was quantified as nuclear envelope breakdown (NEBD) to anaphase. Time to cytokinesis completion was quantified as NEBD to membrane fission by phase contrast. Acquired images were analyzed and quantified using Celleste software (Thermo, version 4.1.1). Detailed quantification procedures are described in the Supplementary Information.
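The two quantification rules above (the 0–300 IHC expression score and the >20% aneuploid-cell cutoff) reduce to a few lines of arithmetic. The sketch below is illustrative only; the simulated event values and helper names are assumptions, not the authors' analysis code.

```python
# Minimal sketch of the IHC expression score and the aneuploidy call.
import numpy as np

def expression_score(intensity, percent_stained):
    """IHC score = staining intensity (0-3) x percent of cells stained
    (0-100), giving a final score of 0-300."""
    assert 0 <= intensity <= 3 and 0 <= percent_stained <= 100
    return intensity * percent_stained

def classify_ploidy(dapi_area, threshold=4e4, cutoff_pct=20.0):
    """Call a tumor aneuploid when more than 20% of gated single nuclei
    fall above the DAPI-area threshold (the right-shifted peak)."""
    dapi_area = np.asarray(dapi_area)
    pct_aneuploid = 100.0 * np.mean(dapi_area > threshold)
    return ("aneuploid" if pct_aneuploid > cutoff_pct else "diploid",
            pct_aneuploid)

print(expression_score(2, 80))           # moderate staining in 80% -> 160
# 10^5 simulated DAPI-area events, mostly diploid with a small aneuploid tail:
rng = np.random.default_rng(0)
events = np.concatenate([rng.normal(1e4, 1e3, 90_000),
                         rng.normal(1e5, 1e4, 10_000)])
print(classify_ploidy(events))           # ~10% above threshold -> "diploid"
```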
Yeast two-hybrid screening Procedures were carried out using the Matchmaker Gold Yeast Two-Hybrid System (Clontech, #630489), as recommended by the manufacturer. Wild-type FILIP1L cDNA was used to generate a bait clone. Quantitative real-time RT-PCR Total RNA preparation and qRT-PCR were performed as described previously (10). The gene-specific primers used with the SYBR Green reagent are listed in the Supplementary Information. Coimmunoprecipitation and immunoblot Following transient transfection, HEK293 cell lysates were subjected to immunoprecipitation. Immune complexes were eluted with Flag peptide (Sigma, K4799) and HA peptide (Sigma, I2149) for Flag tag- and HA tag-immunoprecipitation, respectively. Experimental details for immunoblotting were followed as described previously (11). Densitometric analysis was performed using ImageJ (RRID:SCR_003070) on scanned images of immunoblots. Plots were created for each region of interest, and the gel analysis feature was used to create numeric values for these plot areas. The antibodies used in assays such as immunoprecipitation, immunoblot, immunofluorescence, and IHC are listed in the Supplementary Information. IHC Experimental details were followed as described previously (14). Images were acquired with an AxioImager microscope (Zeiss); stitched images were acquired with an EVOS FL Auto microscope (Thermo Fisher Scientific). WST1 cell proliferation assay Various colon cancer cell lines were seeded in 96-well plates (2.5 × 10^3 cells per well) and incubated with WST1 (Millipore Sigma, #5015944001) for 1 hour. Cell proliferation by WST1 incorporation was measured using a Synergy Mx microplate reader (Biotek) daily for up to 4 days after cell plating. Statistical analysis Data are presented as the mean ± SEM. Statistical analyses were performed using a two-tailed Student t test (GraphPad Prism 6.0 [RRID:SCR_002798]), and differences were considered statistically significant at P < 0.05. Results FILIP1L negatively regulates xenograft growth, and its knockdown induces mucin secretion and multinucleation in colon cancer in vivo We and others have shown that FILIP1L is downregulated and that its low expression is associated with a poor prognosis in colon cancer (12,15). Thus, we examined the consequences of FILIP1L modulation on mouse xenograft tumor growth. We stably overexpressed FILIP1L in low-expressing colon cancer lines (HT29 and HCT116). Immunoblotting confirmed increased FILIP1L levels (Supplementary Fig. S1A). In nude mouse xenografts, overexpression of FILIP1L caused a statistically significant >6-fold and >2-fold inhibition of tumor growth in HT29 and HCT116 cells, respectively (Fig. 1A-C; Supplementary Figs. S1B-S1D), confirming the tumor suppressor function of FILIP1L. IHC staining of FILIP1L confirmed increased FILIP1L levels in the tumors from FILIP1L-overexpressing clones (Fig. 1D). Knockdown of FILIP1L significantly enhanced tumor growth nearly 3-fold compared with controls (Fig. 1E and F). IHC staining and immunoblotting confirmed decreased FILIP1L expression (Fig. 1G; Supplementary Fig. S2A).
Although tumors from both groups demonstrated poor differentiation and a high Ki67 index (≈90%) as determined by two clinical pathologists (PJ and ZZ), considerably more compact cells were observed in the FILIP1L-knockdown groups [shown by hematoxylin and eosin (H&E) staining in Fig. 1G; Supplementary Fig. S2B]. FILIP1L-knockdown tumors also demonstrated increased mucin expression [stained by Periodic Acid Schiff (PAS); Fig. 1G]. PAS-stained stitched images of whole tumors also clearly demonstrated considerably increased mucin expression in FILIP1L-knockdown tumors (Supplementary Fig. S2B). Light pink regions in PAS-stained sections indicated acellular/cyst areas depleted of mucin; these comprised more than 50% of the total tumor volume in control tumors, whereas cysts were few to absent in tumors from FILIP1L-knockdown cells. In addition, considerably more tight clusters were detected in the tumors from FILIP1L-knockdown clones (Fig. 1H), suggesting increased multinucleation in these tumors.

Figure 1. FILIP1L levels affect colon xenograft tumor growth, and its knockdown induces mucin secretion as well as multinucleation in colon xenograft tumors. A, HT29 (1.5 × 10^6) clones of either control or FILIP1L+ derivatives were subcutaneously injected into nude mice (8 mice per cell line). Tumor growth was measured every 2 to 3 days for a total of 37 days. The y-axis represents tumor volume, calculated by the formula (length × width × height × 0.52). B and C, Pictures of mice and HT29 xenograft tumors at the time of sacrifice (B) as well as tumor weights (C) are shown. D, HT29 xenograft tumors from either control or FILIP1L+ derivatives were fixed and IHC stained for FILIP1L. E-H, Caco2 (5 × 10^6) clones of either control or FILIP1L-knockdown derivatives were subcutaneously injected into nude mice (8 mice per cell line). E and F, Tumor growth (E) was measured every 2 to 3 days for a total of 29 days, as described in A, and tumor weights at the time of sacrifice (F) were measured. G, Caco2 xenograft tumors from either control or FILIP1L-knockdown derivatives were fixed and stained with H&E and PAS. They were also IHC stained for FILIP1L. Scale bar, 50 μm. H, Enlarged images of Ki67-stained Caco2 xenograft tumors from either control or FILIP1L-knockdown derivatives are shown. Arrows, clumpy multinucleated cells. Scale bar, 10 μm. *, P < 0.05; **, P < 0.01; ***, P < 0.001.

Mucinous colon tumors have decreased expression of FILIP1L MAC is a distinct form of colorectal cancer, characterized by abundant mucin secretion (2). Having demonstrated increased mucin expression following FILIP1L knockdown in vivo, we examined the expression of FILIP1L in atypical serrated polyps, which are closely associated with mucinous differentiation (35). IHC staining demonstrated that FILIP1L localizes to the apical surfaces of the normal colon (Fig. 2B) and that its expression is reduced in serrated polyps (Fig. 2D). We subsequently examined human MAC samples as well as nontumor adjacent colon tissues (NAT). Human MAC samples (4 well/moderately and 12 poorly differentiated) demonstrated significantly decreased FILIP1L expression (Fig. 2F and H; representative) compared with their matched NATs (Fig. 2I). Supporting our observation, FILIP1L mRNA expression was also significantly downregulated in MAC in an Oncomine public dataset (Fig. 2J).
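As a quantification aside, the xenograft volumes in the Fig. 1 legend above use the standard ellipsoid approximation for caliper measurements; a one-line sketch (the example dimensions are arbitrary):

```python
# Ellipsoid approximation for caliper-measured xenograft volume,
# as quoted in the Fig. 1 legend: length x width x height x 0.52.
def tumor_volume_mm3(length_mm, width_mm, height_mm):
    return length_mm * width_mm * height_mm * 0.52

print(tumor_volume_mm3(10.0, 8.0, 6.0))  # 249.6 mm^3 for a 10 x 8 x 6 mm tumor
```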
In line with previous reports (12,15), samples from nonmucinous colorectal adenocarcinoma (9 well/moderately and 7 poorly differentiated) also demonstrated significantly decreased FILIP1L expression compared with their matched NATs (Fig. 2K). As described earlier, approximately half of MAC tumors are aneuploid (3)(4)(5)(6)(7). To determine whether FILIP1L downregulation is related to the aneuploidy status of the MACs, we examined DNA ploidy in the MAC tumors used in Fig. 2I. A pathologist scored FILIP1L staining under blinded conditions ahead of the ploidy experiments. We first tested control tissues from normal colon as well as poorly differentiated nonmucinous colon tumors. As shown in the flow cytometric DNA histograms, DAPI-stained nuclei from normal colon tissues (diploid controls) displayed a single peak around 10^4 DAPI-area, whereas those from nonmucinous colon tumors (aneuploid controls) displayed a right-shifted peak around 10^5 DAPI-area (Supplementary Fig. S3). The DAPI-area of the right-shifted peak was gated and calculated for three normal colon tissues (diploid controls), three nonmucinous colon tumors (aneuploid controls), and 16 MAC tumors. The average percentages for the three diploid and aneuploid controls were 14.3 ± 0.94 and 84.3 ± 12.4, respectively. On the basis of the average percentage of the diploid controls, we defined an "aneuploid tumor" as one with a percentage larger than 20%. We then plotted diploid/aneuploid status against the previously analyzed FILIP1L expression score. Although most MAC tumors demonstrated significantly decreased FILIP1L expression compared with their matched NATs (Fig. 2I), aneuploid MAC tumors demonstrated significantly decreased FILIP1L expression compared with diploid MAC tumors (Fig. 2L). In total, these results suggest that FILIP1L downregulation is associated with neoplastic changes in aneuploid MAC. Tissue-restricted FILIP1L loss in mouse colon induces mucin secretion as well as hyperplasia To address the in vivo consequence of FILIP1L gene inactivation in the colon, we generated colon-specific Filip1l conditional knockout mice. Using the Cre-loxP system, we successfully obtained Filip1l homozygous floxed alleles (Fig. 3A). Filip1l fl/fl mice were then crossed with Cdx2-CreER T2 transgenic mice that express a tamoxifen (TAM)-regulated Cre protein (CreER T2) for deletion of loxP-containing alleles in adult terminal ileum, cecum, colon, and rectal epithelia (36). We observed partial gene deletion efficiency from the Filip1l conditional allele, such that FILIP1L mRNA expression was reduced by approximately 3-fold in Filip1l fl/fl; Cdx2-CreER T2 mice (CKO) compared with Filip1l fl/fl mice (CTL; Fig. 3B). Four weeks after TAM induction in Filip1l CKO mice, H&E staining demonstrated significantly elongated crypts as well as compromised crypt integrity, as evidenced by aberrant cell arrangements and irregular nuclei (Fig. 3C and D). FILIP1L expression was reduced in CKO mice (Fig. 3E). Importantly, although FILIP1L reduction was observed throughout the entire colon, crypt elongation/integrity loss was mainly restricted to the proximal colon, where human MAC development usually occurs (Fig. 3C; Supplementary Fig. S4; ref. 2). MUC2 mucin is predominantly secreted by colonic goblet cells (37). PAS staining (Fig. 3F) and qRT-PCR (Fig. 3H) demonstrated a significant increase in MUC2 expression in the colon crypts of Filip1l CKO mice. Notably, although the expression of stem cell markers such as Lgr5 was not changed, other markers for goblet cells and secretory progenitors were also significantly increased (Fig. 3H; refs. 38,39).
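The qRT-PCR fold changes here (Fig. 3B and H) are normalized to β-actin; a minimal sketch assuming the standard 2^-ΔΔCt method (the Ct values below are placeholders, not study data):

```python
# Fold change of a target gene in CKO over CTL by the 2^-(ddCt) method,
# normalized to the housekeeping gene beta-actin. Ct values are hypothetical.
def fold_change(ct_target_cko, ct_actin_cko, ct_target_ctl, ct_actin_ctl):
    d_ct_cko = ct_target_cko - ct_actin_cko   # normalize CKO sample
    d_ct_ctl = ct_target_ctl - ct_actin_ctl   # normalize CTL sample
    dd_ct = d_ct_cko - d_ct_ctl
    return 2 ** (-dd_ct)

# e.g. Muc2 in CKO vs CTL colon epithelium:
print(f"{fold_change(22.1, 16.0, 24.6, 16.2):.1f}-fold over CTL")  # ~4.9-fold
```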
Although cell proliferation was restricted to the bottom third-to-half of colonic crypts in CTL mice, it was evident in most cells of the elongated crypts of Filip1l CKO mice (Fig. 3G). Quantification of Ki67 staining confirmed cellular hyperplasia in the colons of Filip1l CKO mice (Fig. 3I and J). Collectively, these results lead us to conclude that FILIP1L downregulation is associated with mucin secretion and the aneuploidy phenotypes seen in MAC, and drives hyperplasia in normal colon epithelial cells. FILIP1L knockdown induces cytokinesis defects The epithelium acts as a selectively permeable barrier, comprised of tightly associated polarized cells forming lumens. Defects in epithelial architecture are the source of nearly 90% of human cancers (40)(41)(42). To identify the effects of FILIP1L downregulation on epithelial architecture, and to gain clear insight into whether FILIP1L loss contributes to the generation of aneuploidy, we knocked down its expression in normal diploid MDCK.2 cells, a well-studied epithelial model (43)(44)(45)(46)(47)(48)(49). Immunoblotting confirmed decreased FILIP1L levels (Fig. 4A). Clones were then cultured in an extracellular matrix to form 3D cysts. Single lumen-containing cysts were formed by control cells, as expected. Consistent with our observation that downregulation of FILIP1L in tumor xenografts was characterized by the presence of tightly packed multinuclear cells, the majority of cysts formed by FILIP1L-knockdown clones contained multiple lumens, further suggesting that loss of FILIP1L causes impaired cytokinesis (Fig. 4B; ref. 50). Staining for the tight junction marker ZO-1 showed well-segregated single-cell junctions in control cells with single nuclei, whereas multiple nuclei were often present within the tight junction boundaries of FILIP1L-knockdown clones (Fig. 4C). F-actin staining outlining the cell periphery confirmed this phenotype (Fig. 4D). A significant increase in multinuclei in FILIP1L-knockdown cells compared with controls (Fig. 4E; indicated by arrows) confirmed defects in cytokinesis (Fig. 4F; ref. 51). In addition, the level of FILIP1L knockdown had a dosage effect on multinuclei formation, as clone 1 generated significantly more multinuclei than clone 2 (Fig. 4F). To further characterize the defects in cytokinesis following FILIP1L knockdown in colon cancer cells, we examined Caco2 clones (shown in Supplementary Fig. S2A) by live imaging. Caco2 clones were marked for DNA and tubulin, and cells entering mitosis were monitored every 5 minutes. Interestingly, we observed that FILIP1L knockdown caused cells to grow on top of each other, as shown in Video A. We first examined whether mitotic length was affected by FILIP1L knockdown. Mitotic length has been shown to be determined by the time between nuclear envelope breakdown (NEBD) and anaphase (52)(53)(54). Control clones were in mitosis for an average of 35 minutes, similar to what has been observed for various cancer cells (52)(53)(54). No significant changes in mitotic length were observed in FILIP1L-knockdown clones compared with controls [representative time-lapse images (Fig. 4G), videos (Video B), and quantified data (Fig. 4H) are shown], suggesting that the cytokinesis defects were independent of mitosis.

Figure 2. FILIP1L expression (B, D, F, and H) was analyzed in specimens from NATs (n = 16), serrated polyps (n = 9), well/moderately differentiated mucinous adenocarcinoma (n = 4), and poorly differentiated mucinous adenocarcinoma (n = 12).
Note that multinucleated cells were not present in the serrated polyp samples. Scale bar, 50 μm. I, FILIP1L expression in IHC-stained slides (as shown in panels B, F, and H) was compared between matched normal (n = 16) and mucinous colon adenocarcinoma (n = 16) samples. Expression scoring was carried out as described in Materials and Methods. J, FILIP1L mRNA expression was compared between normal (n = 5) and mucinous colon adenocarcinoma (n = 13) samples. Data are derived from Oncomine public databases [Kaiser Colon (http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE5206)]. K, NATs (n = 16), well/moderately differentiated nonmucinous colorectal adenocarcinoma (n = 9), and poorly differentiated nonmucinous colorectal adenocarcinoma (n = 7) were stained for FILIP1L, and expression scoring was carried out as described in I. L, DNA ploidy in the 16 MAC tumors used in I was analyzed. The FILIP1L expression score of each tumor is shown in the table, and diploid and aneuploid tumors were plotted against FILIP1L expression score. *, P < 0.05; **, P < 0.01; ****, P < 0.0001.

Figure 3. Crypt length was quantified in Supplementary Fig. S4; note that although crypt length was significantly increased throughout the entire colon, a larger difference between CTL and CKO was observed in the proximal colon. D-G, Colons were fixed and stained with H&E (D) and PAS (F); they were also IHC stained for FILIP1L (E) and Ki67 (G). The exact same regions of proximal colon were imaged in CTL and CKO mice. Scale bar, 200 μm. Higher magnification images are shown in insets (scale bar, 50 μm). H, mRNA levels of markers for goblet cells, secretory progenitors, and stem cells were measured by qRT-PCR: mucin 2 (Muc2), anterior gradient 2 (Agr2), atonal bHLH transcription factor 1 (Atoh1), SAM pointed domain-containing ets transcription factor (Spdef), neurogenin 3 (Neurog3), and leucine-rich repeat containing G protein-coupled receptor 5 (Lgr5) were analyzed. For B and H, epithelial cells from the colons of CTL and CKO mice were prepared. The y-axis represents fold change over CTL mice, where each value was standardized to the housekeeping gene β-actin (6 mice each). I and J, Colons were stained for Ki67 (I), and Ki67-positive areas were quantified (J). Ten random fields per mouse were quantified (three mice each). Scale bar, 50 μm. **, P < 0.01; ***, P < 0.001; ****, P < 0.0001; NS, nonsignificant.

To directly test whether the defect was in cytokinesis, we measured the time to membrane fission, the final step in cytokinesis (55,56). As shown in Fig. 4I, FILIP1L-knockdown clones demonstrated a significant delay in completing cytokinesis compared with controls. We could not detect clear membrane fission for up to 4 hours (the maximum experiment duration) in 25% of cells from FILIP1L-knockdown clones; we therefore set the time to fission as 240 minutes for these cells, although the actual time was longer (a sketch of this censoring rule is given below). Thus, these findings suggest that FILIP1L knockdown leads to cytokinesis defects. FILIP1L colocalizes with its binding partner PFDN1 at centrosomes in each phase of mitosis To explore the mechanisms responsible for FILIP1L's tumor suppressor activity, we set out to find binding partners for FILIP1L. We identified prefoldin 1 (PFDN1) by yeast two-hybrid screening (a top hit out of 30 positive colonies using a stringent screening system). We subsequently confirmed this interaction by coimmunoprecipitation. HEK293 cells were cotransfected with Flag-PFDN1 and FILIP1L-HA plasmids, and cell lysates were immunoprecipitated with an anti-Flag antibody followed by immunoblotting for FILIP1L and Flag tag (Fig. 5A). Cell lysates were also immunoprecipitated with an anti-HA antibody followed by immunoblotting for HA tag and PFDN1 (Fig. 5B).
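The right-censoring rule used for the fission measurement above can be stated in a couple of lines; a minimal sketch, with illustrative times (None marks a cell whose fission was never observed within the 4 h window):

```python
# Cap unobserved membrane-fission events at the 240-min experiment limit,
# a conservative lower bound for right-censored cells.
CAP_MIN = 240

def censored_fission_times(observed_min):
    return [t if t is not None else CAP_MIN for t in observed_min]

times = censored_fission_times([35, 60, None, 120, None])
print(times, sum(times) / len(times))  # censored values enter as 240 min
```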
PFDN1 is a molecular chaperone of the six-subunit prefoldin complex, which facilitates proper folding of key cytoskeletal components such as actin and tubulins. Loss of PFDN1 leads to a decrease in tubulin levels, resulting in reduced microtubule growth, defects in cell division, and embryonic lethality in C. elegans (21). We previously showed that FILIP1L localizes to the cytoplasm and centrosomes of interphase cells (11). Because centrosomes are a major microtubule-organizing center of the cell, and PFDN1 is a tubulin chaperone, we hypothesized that PFDN1 also localizes at centrosomes. As shown by α-tubulin and DNA staining, endogenous FILIP1L and PFDN1 localize at the centrosomes in all phases of division (Fig. 5C and D). To further test whether they colocalize, we transfected cells with Flag-tagged FILIP1L and then labeled exogenous FILIP1L and endogenous PFDN1, owing to the unavailability of antibodies from different species for the two proteins. As shown in Fig. 5E, FILIP1L and PFDN1 colocalize at the centrosomes in every cell phase. In addition, unlike centriolin, which localizes only at the mother centriole, FILIP1L and PFDN1 colocalize at both the mother and daughter centrioles (Fig. 5D and F). To further confirm their colocalization, we cotransfected HEK293 cells with plasmids expressing FILIP1L-eGFP and mCherry-PFDN1 and monitored the cells by time-lapse imaging. As shown in Supplementary Fig. S5 and Video C, exogenous FILIP1L and PFDN1 colocalized in the perinuclear area where centrosomes are located. FILIP1L regulates PFDN1 levels at the centrosomes in a proteasome-dependent manner Centrosomes have been identified as a proteolytic center of the cell (57)(58)(59)(60). In addition, we previously showed that FILIP1L colocalizes with proteasomes at centrosomes (11) and that it plays a role in proteasome-dependent protein degradation (18). Many molecules regulating cell division are located at the centrosomes, and their levels of expression must be tightly regulated as proliferating cells continually change cell phase. Thus, just like molecules involved in cell-cycle regulation, the levels of many centrosomal proteins regulating cell division are controlled by proteasome-mediated degradation in a time-dependent manner (61). To determine whether the binding of FILIP1L to PFDN1 results in proteasome-dependent degradation of PFDN1, we cotransfected HEK293 cells with a constant amount of Flag-PFDN1 and an increasing amount of FILIP1L-HA, in the presence or absence of the proteasome inhibitor MG132. As shown by immunoblotting, FILIP1L expression increased with plasmid dose, and this correlated with a decrease in PFDN1 levels (Fig. 6A). Proteasomal inhibition rescued the levels of PFDN1, indicating that degradation might be mediated by the proteasome. Many centrosomal proteins harbor coiled-coil domains (31). FILIP1L (893 amino acids) also has a coiled-coil domain at amino acids 1-542 and two leucine zippers (83-104 and 218-239; ref. 17). Leucine zippers have been shown to mediate protein-protein interactions (62). A FILIP1L truncation mutant that lacks its two leucine zippers (Δ1-368) no longer colocalizes with pericentrin, a centrosome marker (Fig. 6B)
and fails to modulate PFDN1 protein levels (Fig. 6C). We then determined that FILIP1L enhances polyubiquitination of PFDN1, a major signal for proteasome-mediated degradation (Fig. 6D). Together, these results suggest that binding of FILIP1L to PFDN1 occurs at the centrosomes, that their interaction requires the leucine zipper domain of FILIP1L, and that their stoichiometry appears important for maintaining PFDN1 levels through a proteasome-dependent mechanism. FILIP1L knockdown induces mucin secretion in colon cancer, mediated by PFDN1 Next, we examined the relationship between FILIP1L and PFDN1 in the context of colon cancer. Expression of FILIP1L and PFDN1 was inversely correlated in a panel of colon cancer cell lines (Fig. 7A). Although FILIP1L protein was previously undetectable by immunoblot using a mouse monoclonal FILIP1L antibody (12), differential FILIP1L expression was observed in this panel of colon cancer cell lines using a rabbit polyclonal FILIP1L antibody. We also determined that FILIP1L-low colon cancer cell lines proliferated faster than FILIP1L-high colon cancer cell lines (Fig. 7B). Using lentiviral transduction, we knocked down FILIP1L in the Caco2, SW620, and Ls174T colon cancer lines. Immunoblotting confirmed that knockdown of FILIP1L resulted in increased PFDN1 expression (Fig. 7C). In addition, overexpression of FILIP1L in HT29 and HCT116 cells resulted in decreased PFDN1 expression (Supplementary Fig. S6A). Furthermore, expression of FILIP1L did not change PFDN1 transcription levels, as confirmed by qRT-PCR (Supplementary Fig. S6B), further indicating that modulation of PFDN1 by FILIP1L likely occurs at the protein level. Xenograft tumors from FILIP1L-overexpressing clones also demonstrated decreased PFDN1 expression (Supplementary Fig. S6C). In human MAC samples, FILIP1L and PFDN1 were decreased and increased, respectively (Supplementary Figs. S6L, S6O, S6M, and S6P). PFDN1 was previously reported to be overexpressed in colon cancer, and its high expression was associated with poor survival in patients with colon cancer (20). Notably, the pattern of PFDN1 distribution was also markedly different between MAC tumors and matched NATs. In normal colon crypts, it localized to the perinuclear/cytoplasmic area, whereas in MAC tumor tissues it localized mainly in the nucleus (Supplementary Figs. S6G and S6P). Interestingly, nuclear localization of PFDN1 was not identified in nonmucinous colorectal tumor tissues (Supplementary Fig. S6S). It has been previously shown that PFDN1 not only functions as a molecular chaperone in the cytoplasm but also regulates gene expression in the nucleus (19,63). PFDN1 was increased in the areas of the colon where FILIP1L expression was reduced in Filip1l conditional knockout mice (Supplementary Fig. S6T). Thus, these results collectively suggest that FILIP1L modulates PFDN1 protein levels both in vitro and in vivo in colon cancer. We next postulated that increased PFDN1 levels may be responsible for the phenotypes seen when FILIP1L is downregulated. Expression/secretion of mucin proteins is often altered in colon cancer (64,65). MAC is characterized by abundant mucin secretion comprising at least 50% of the tumor volume (2). Mucin 2 (MUC2) is the predominant secreted mucin synthesized by colonic goblet cells (37), and its overexpression is frequently found in MAC (35). We have shown that FILIP1L is downregulated in human MAC samples (Fig. 2I)
We have shown that FILIP1L is downregulated in human MAC samples (Fig. 2I) and that mucin secretion was increased following FILIP1L knockdown in both Caco2-xenograft tumors (Fig. 1G) and colons from Filip1l conditional knockout mice (Fig. 3F). On the other hand, mucin secretion was decreased following FILIP1L overexpression in HT29-xenograft tumors (Supplementary Fig. S6C). We then aimed to determine whether MUC2 mRNA expression is altered following modulation of FILIP1L/PFDN1 levels. FILIP1L knockdown or PFDN1 overexpression did not increase MUC2 transcription levels, as confirmed by qRT-PCR (Supplementary Fig. S7A). Thus, we asked whether the cellular localization of MUC2 is changed. FILIP1L-knockdown clones cultured in a three-dimensional extracellular matrix demonstrated significantly increased secretion of MUC2 (first and second panels in Fig. 7D and E). For these experiments, we could not use SW620 clones because their MUC2 levels were too low to be detected (Supplementary Fig. S7B). It is noteworthy that MUC2 secretion was increased following FILIP1L knockdown in both enterocyte-like Caco2 cells and goblet cell-like Ls174T cells (66). Moreover, PFDN1 overexpression in Caco2 cells also resulted in the same phenotype as FILIP1L knockdown (third panels in Fig. 7D and E; Supplementary Fig. S7C). To further prove cause and effect, we tested whether PFDN1 knockdown can reverse the mucin secretion phenotype resulting from FILIP1L knockdown. Using lentiviral transduction, we knocked down PFDN1 in FILIP1L-knockdown Caco2 clones. Immunoblotting confirmed the knockdown of PFDN1 in the double-knockdown clones (Fig. 7F). Mucin secretion was significantly decreased in FILIP1L/PFDN1 double-knockdown clones compared with FILIP1L-knockdown clones (Fig. 7G).

Fig. 5 legend (continued): Twenty-four hours later, cell lysates were immunoprecipitated using Flag antibody-agarose, followed by immunoblotting for FILIP1L and the Flag tag (A), or using HA antibody-agarose, followed by immunoblotting for the HA tag and PFDN1 (B). An input control (4 μg lysate) was also immunoblotted. C-F, MDCK.2 cells were immunofluorescently stained for FILIP1L (C) or PFDN1 (green; D) and α-tubulin (red). Nuclei were counterstained with DAPI (blue). Cell phase was determined by α-tubulin and DNA staining. A merged image is shown. E, MDCK.2 cells were transfected with a FILIP1L-Flag construct and stained for the Flag tag (green) and PFDN1 (red) at 24 hours after transfection. F, MDCK.2 cells were stained for FILIP1L (green) and γ-tubulin or centriolin (red). Scale bar, 5 μm.

FILIP1L knockdown leads to multinucleation in colon cancer, mediated by PFDN1

Approximately 50% of MAC tumors are aneuploid (3-7), and aneuploid tumors are clinically more aggressive than diploid MAC tumors (8,9). We have demonstrated that FILIP1L knockdown results in a multinuclei phenotype in xenograft tumors from Caco2 clones (Fig. 1H) as well as in normal MDCK.2 cells (Fig. 4F). We have also demonstrated that FILIP1L expression was significantly decreased in aneuploid MAC tumors compared with diploid MAC tumors (Fig. 2L). Thus, we tested whether FILIP1L knockdown modulates aneuploidy-related phenotypes. FILIP1L-knockdown clones from colon cancer cell lines demonstrated significantly increased numbers of multilobed or multinucleated cells compared with their respective control cell lines (first and second panels in Fig. 8A and B). For these experiments, we could not use Ls174T clones because they tend to grow on top of each other, preventing accurate quantification.
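Quantification of such phenotypes reduces to counting positive cells out of the total scored (the Fig. 8 legend notes that over 600 cells were counted per condition). A minimal sketch of the percentage calculation and a two-proportion comparison, using hypothetical counts, not the study's data:

```python
# Minimal sketch of scoring a multinucleation phenotype from cell counts
# and comparing two conditions with a two-proportion z-test.
# All counts below are hypothetical illustration values.
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the difference between two proportions x1/n1 and x2/n2."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: multinucleated cells out of total cells scored
ctrl_multi, ctrl_total = 18, 620
kd_multi, kd_total = 74, 650

print(f"control: {100 * ctrl_multi / ctrl_total:.1f}% multinucleated")
print(f"FILIP1L knockdown: {100 * kd_multi / kd_total:.1f}% multinucleated")
print(f"z = {two_proportion_z(kd_multi, kd_total, ctrl_multi, ctrl_total):.2f}")
```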
In Caco2 cells, smaller fragments of nuclei that were stained by markers of both nuclei (DAPI) and the nuclear envelope (lamin A/C) were also significantly increased; we have named these "budding nuclei" (blue bar graph in Fig. 8B). In addition, cells with larger nuclei were often observed in Caco2 cells with FILIP1L knockdown (first panel in Fig. 8A), suggesting the prevalence of polyploid cells derived from incomplete mitosis. Representative images of multinuclei quantification are shown in Supplementary Fig. S8A. PFDN1 overexpression also resulted in multilobular/polyploid phenotypes similar to those of FILIP1L knockdown (third panel in Fig. 8A and B). In addition, the multinuclei phenotype was also demonstrated in MDCK.2 cells when PFDN1 was overexpressed (Fig. 8D). Defects in cytokinesis lead to increased numbers of multinucleated cells (51), and we demonstrated that FILIP1L knockdown resulted in cytokinesis defects (Fig. 4I). As with FILIP1L knockdown, PFDN1 overexpression in Caco2 colon cancer cells also resulted in cytokinesis defects. Although mitotic length was not affected (Fig. 8E), the time to cytokinesis completion was significantly increased (Fig. 8F). Furthermore, while mitotic length was not affected (Fig. 8G), the time to cytokinesis completion was significantly decreased in FILIP1L/PFDN1 double-knockdown clones compared with FILIP1L-knockdown clones (Fig. 8H), further suggesting that PFDN1 mediates the cytokinesis-defect phenotype following FILIP1L knockdown. Dysregulated expression of several centrosome-modulating proteins is implicated in human cancer (28-32). Changes in the levels of several centrosomal proteins, including prefoldins, have been linked to dysregulation of centrosome function (21,25-29). Centrosome abnormalities lead to mitosis/cytokinesis defects that are associated with aneuploidy (23). We demonstrated that binding of FILIP1L to PFDN1 occurs at centrosomes (Fig. 5C-E) and that centrosomal localization of FILIP1L is critical for its modulation of PFDN1 levels (Fig. 6B and C). Thus, we examined whether FILIP1L levels affect the centrosomal localization of PFDN1. Although overall PFDN1 levels were increased following FILIP1L knockdown, there was substantially reduced localization of PFDN1 in centrosomes, suggesting that FILIP1L is required for the centrosomal localization of PFDN1 (first and second panels in Fig. 8I).

Fig. 7 legend (continued): The y-axis represents absorbance (OD 440 nm minus OD 650 nm). C, FILIP1L knockdown was achieved by stable expression of lentiviral shRNA in FILIP1L-high Caco2, SW620, and Ls174T colon cancer cells. Control clones were made with scrambled shRNA. FILIP1L, PFDN1, and the GAPDH control were detected by immunoblotting. By densitometric quantification, FILIP1L protein was decreased by 11-, 3.6-, and 3.5-fold in Caco2, SW620, and Ls174T clones compared with their corresponding controls, respectively. D and E, Clones from either control or FILIP1L knockdown (Caco2 and Ls174T clones) as well as those from either control or PFDN1 overexpression (Caco2 clones) were grown in the presence of Matrigel, and three-dimensional clusters were stained for F-actin (green), MUC2 (red), and DAPI (blue; D), and the total fluorescence intensity per area of cell cluster was quantified (E). Ten to 15 cell clusters were counted. Scale bar, 20 μm. F, PFDN1 knockdown was achieved by stable expression of lentiviral shRNA in FILIP1L-knockdown Caco2 clones. FILIP1L, PFDN1, and the GAPDH control were detected by immunoblotting. By densitometric quantification, PFDN1 protein was decreased by 2.1-fold in FILIP1L/PFDN1 double-knockdown clones compared with FILIP1L-knockdown clones. G, Caco2 clones from either FILIP1L knockdown or FILIP1L/PFDN1 double knockdown were analyzed for MUC2 total fluorescence intensity as described in E. *, P < 0.05; **, P < 0.01.
Consistent with the other phenotypes shown in Figs. 7D and E and 8A and B, PFDN1 overexpression also resulted in similarly reduced localization of PFDN1 in centrosomes, as had FILIP1L knockdown (third panel in Fig. 8I). The distribution curves for centrosomal occupancy were left-shifted (Fig. 8J), and a significantly lower percentage of PFDN1 protein was detected in centrosomes in either FILIP1L-knockdown or PFDN1-overexpressing cells (Supplementary Fig. S8B). Thus, these findings suggest that phenotypes such as mucin secretion and aneuploidy seen in human aneuploid MAC samples are recapitulated by FILIP1L knockdown or PFDN1 overexpression in colon cancer cells.

Discussion

Colon adenocarcinoma arises through the adenoma-carcinoma sequence, characterized by chromosomal instability with an associated accumulation of genetic alterations in tumor suppressor genes such as APC and TP53 (67). MAC is a histologic subtype of colon adenocarcinoma with distinct clinical and histopathologic characteristics, as well as molecular signatures (2). Colon cancers arising from atypical serrated polyps closely associate with a CpG island methylator phenotype (CIMP: tumor suppressor genes are inactivated by widespread epigenetic silencing; ref. 68), microsatellite instability, BRAF p.V600E mutation (69-71), mismatch repair deficiency and mucinous differentiation (35).

Fig. 8 legend: FILIP1L knockdown leads to multinucleation in colon cancer cells, mediated by PFDN1. A, Clones from either control or FILIP1L knockdown (Caco2 and SW620 clones) as well as those from either control or PFDN1 overexpression (Caco2 clones) were stained for lamin A/C (green), F-actin (red), and DAPI (blue). B, The numbers of cells with multinuclei were quantified. Over 600 cells were counted. Scale bar, 20 μm. C, FILIP1L knockdown was achieved by the CRISPR-Cas9 system in MDCK.2 cells. PFDN1 and the GAPDH control were detected by immunoblotting. Two independent clones were tested. Note that FILIP1L-knockdown clones demonstrated increased PFDN1 expression. D, MDCK.2 cells were transfected with a control GFP or PFDN1 construct and stained for DNA. Scale bar, 10 μm. E-H, Time-lapse imaging of Caco2 clones. Caco2 clones from either control or PFDN1 overexpression (E and F) as well as those from either FILIP1L knockdown or FILIP1L/PFDN1 double knockdown (G and H) were analyzed as described in Fig. 4H and I. Mitotic length (E and G) and cytokinesis completion (F and H) were quantified from three independent experiments. *, P < 0.05; ***, P < 0.001; ****, P < 0.0001; NS, nonsignificant. I and J, Clones from either control or FILIP1L knockdown (Caco2 and SW620 clones) as well as those from either control or PFDN1 overexpression (Caco2 clones) were stained for pericentrin (green), PFDN1 (red), and DAPI (blue; I), and the area of the centrosome occupied by PFDN1 in metaphase cells was quantified (J). The y-axis (frequency) represents the total number of metaphase cells that fall into each bin. Scale bar, 5 μm.

We previously showed that FILIP1L expression is downregulated by promoter methylation in ovarian cancer as well as in cancer cell lines of various histologies (10,12). FILIP1L downregulation in MAC might recapitulate the CIMP phenotype.
Thus, it will be interesting to determine whether FILIP1L is downregulated by promoter methylation in MAC tumors, and whether DNA demethylating agents can reverse the phenotypes observed after FILIP1L knockdown. Knockout of CRAD in mice was shown to induce loss of epithelial cell integrity and activation of Wnt signaling, resulting in the development of intestinal mucinous adenoma (72). However, the exact aberrations that are responsible for MAC development are currently unknown. Approximately 50% of MAC tumors are aneuploid (3-7), and these are clinically more aggressive than diploid MAC tumors (8,9). We demonstrate here that FILIP1L is significantly downregulated throughout the spectrum of well to poorly differentiated MAC (Fig. 2I), and that FILIP1L is significantly more decreased in aneuploid MAC tumors than in diploid MAC tumors (Fig. 2L). FILIP1L knockdown leads to a significant increase in multinucleation (which leads to aneuploidy; Fig. 8A and B) and cytokinesis defects in colon cancer cells. Colon cancer cell lines such as Caco2 and SW620, in which we demonstrated the aneuploidy phenotype, are already highly aneuploid (73), yet we observed a significant increase in multinucleation following both FILIP1L knockdown and PFDN1 overexpression (Fig. 8A and B). Simultaneous knockdown of PFDN1 in FILIP1L-knockdown cells rescued the cytokinesis-defect phenotype in these cancer cells (Fig. 8H). To gain clearer insight into whether FILIP1L loss contributes to the generation of aneuploidy, we also knocked down FILIP1L in normal diploid MDCK.2 cells and demonstrated the same aneuploidy phenotypes (an increase in multinucleation and cytokinesis defects; Fig. 4B-F). PFDN1 overexpression in MDCK.2 cells also led to an increase in multinucleation (Fig. 8D). Thus, these results strongly suggest that loss of FILIP1L plays a role in the generation of aneuploidy in vivo. Activation of mutant Kras in mouse colon tissues promotes hyperplasia and increased goblet cell numbers, but the change in goblet cell numbers may simply reflect the increase in the transit amplifying population and/or cell differentiation promoted by Kras mutation (74). In this study, we showed that increased mucin secretion was detected as early as day 7 following Filip1l loss in the mouse colon (Fig. 3H). However, cellular hyperplasia was not identified until 4 weeks following Filip1l loss (Fig. 3G, I, and J). Expression of stem cell markers such as Lgr5 was not changed (Fig. 3H). In addition, both FILIP1L knockdown and PFDN1 overexpression led to a significant increase in mucin protein secretion in colon cancer cells (Fig. 7D and E), and simultaneous knockdown of PFDN1 in FILIP1L-knockdown cells rescued the mucin secretion phenotype (Fig. 7G). These findings suggest that increased mucin secretion does not simply result from increased cell proliferation. It is currently unknown how the mucin secretion phenotype is mechanistically related to the aneuploidy phenotype. Further studies are warranted. Many factors that regulate cell mitosis are located at centrosomes, and changes in their levels deregulate the cell cycle. Altered expression of centrosomal proteins is implicated in human cancer (24,75). The levels of centrosome components are tightly regulated by timely, proteasome-mediated degradation (76). Here, we show that the tumor suppressor FILIP1L is downregulated in MAC with a concomitant increase in PFDN1, a chaperone that binds to FILIP1L.
Both of these proteins normally localize to centrosomes, and changes in their levels are associated with mitosis/cytokinesis failure. Furthermore, we show that FILIP1L stimulates proteasomal degradation of PFDN1. A minor weakness that needs to be followed up is that we have yet to show the interaction between endogenous FILIP1L and PFDN1 proteins, owing to the lack of working antibodies. Centrosomes not only serve as a primary source of the microtubules that build mitotic spindles but also determine the orientation of the mitotic spindle and the cytokinetic furrow. Mitotic spindle misorientation has been shown to be one of the key mechanisms generating multiple lumen-containing cysts (50), which we observe in our FILIP1L-knockdown epithelial cells (Fig. 4B). Loss of PFDN1 leads to mitotic spindle misorientation and mispositioning of cytokinetic furrows (21,30). Either downregulation or upregulation of prefoldin proteins has been linked to dysregulation of centrosome-associated function (21,25-27). UXT, a prefoldin-like protein expressed in centrosomes, is also overexpressed in human cancers, and its overexpression reduces microtubule growth and subsequently promotes centrosome disassembly (27). Noncanonical prefoldin mutants showed cytokinesis defects characterized by multipolar spindles and polyploid cells (51). Cytokinetic defects result in multinucleated cells that contain extra centrosomes, which in turn disrupt mitotic spindle formation (24). Aurora A kinase is a centrosomal protein that regulates cytokinesis. At cytokinesis, it localizes at the spindle midzone, where it performs a regulatory function. Its overexpression in breast cancers leads to centrosome accumulation secondary to cytokinesis failure (77,78). FILIP1L and PFDN1 also strongly localize at the spindle midzone in telophase and cytokinesis, which supports their potential function in regulating cytokinesis (Fig. 5C-E). Tumor suppressor genes, including APC, PTEN, and VHL, have been linked to spindle misorientation (79-85). In fact, loss of the tumor suppressor APC results in spindle misorientation followed by cell fate changes, leading to colon adenocarcinoma development (36,86). However, none of these tumor suppressors have been shown to be associated with MAC pathogenesis. One possibility is that the pathologic changes following FILIP1L loss are the result of the unopposed function of its binding partner, PFDN1. Although PFDN1 is a molecular chaperone that facilitates the folding of tubulins and actin, the cytoskeletal function of PFDN1 is not essential for the housekeeping assembly of microtubules or actin filaments. However, it becomes rate-limiting, and the most upstream regulator, under conditions of strong cytoskeleton biogenesis such as mitosis and B-lymphocyte activation, as Pfdn1-knockout mice are severely affected in these processes (63). The colon epithelium is one of the fastest-regenerating tissues in the body, so it can be highly susceptible to the effects of PFDN1 upregulation. It has been shown that prefoldin not only functions as a molecular chaperone in the cytoplasm but also regulates gene expression in the nucleus (63). Overexpressed PFDN1 bound to the cyclin A2 promoter, and the subsequent repression of cyclin A2 was associated with promotion of EMT in lung cancer cells (19). We demonstrate here that not only is the PFDN1 level upregulated in human MAC samples, but PFDN1 is also located in the nucleus (Supplementary Fig. S6P).
However, we could not identify nuclear PFDN1 in nonmucinous colorectal adenocarcinoma samples (Supplementary Fig. S6S). Thus, it will be worthwhile to determine whether nuclear PFDN1 plays a role in MAC development. In summary, we have shown that the tumor suppressor FILIP1L stimulates proteasomal degradation of its binding partner PFDN1, a molecular chaperone that regulates spindle orientation and cleavage-furrow positioning (21,30). We showed that human mucinous colon tumors have decreased and increased expression of FILIP1L and PFDN1, respectively. FILIP1L knockdown and the resultant PFDN1 increase lead to increased mucin secretion and mitosis/cytokinesis defects in mouse colon as well as in colon cancer cells, recapitulating the phenotypes seen in aggressive aneuploid MAC. These results strongly implicate FILIP1L as an essential regulator of MAC tumorigenesis. Since FILIP1L is downregulated in various other cancer types (10-12), these studies will also have a broad impact on our understanding of the pathogenesis of other cancers and the role played by this novel tumor suppressor gene.
Effect of dip wash treatments with organic acids and acidic electrolyzed water combined with ultraviolet irradiation on quality of strawberry fruit during storage

The objective of this study was to determine the effects of dip wash treatments with 2% citric acid (CA), 0.2% benzoic acid (BA), 0.2% sorbic acid (SA) and acidic electrolyzed water (AEW), followed by 2 kJ·m-2 ultraviolet (UV) irradiation, on the postharvest decay and quality of strawberry fruit cultivar 'Malvina', as compared to control and UV-alone treated samples. Weight loss, firmness, soluble solids content, titratable acidity, total phenolic content, total anthocyanin content, antioxidant activity and decay incidence of control and treated strawberry fruits were investigated during cold storage at 8 °C for 21 days. The results showed that UV-treated fruits had lower weight loss, higher titratable acidity, phenolic and anthocyanin contents, and were firmer than untreated fruits. Dip washing in AEW before UVC treatment reduced weight loss and increased firmness of strawberry fruits but did not significantly affect total phenolic content, total anthocyanin content, 2,2-diphenyl-1-picrylhydrazyl (DPPH) antioxidant activity or fruit decay. Dip wash treatment with organic acids followed by UVC irradiation was significantly more effective than UV treatment alone in reducing fruit decay and weight loss and in maintaining higher levels of titratable acidity, total anthocyanin content, total phenolic content and antioxidant activity of strawberries during refrigerated storage. The present findings demonstrate that dip wash treatment with 0.2% SA, 0.2% BA or 2% CA followed by UV treatment can be an effective method for maintaining the phytochemical content and delaying the decay of strawberry fruit during cold storage.

INTRODUCTION

Strawberry (Fragaria × ananassa Duch.) is a very popular fruit with huge nutraceutical and commercial value, appreciated by consumers for its unique flavor and nutritious qualities (Bianco et al. 2009; Parvez and Wani 2018). Strawberries contain high levels of phytochemicals, such as anthocyanins, flavonoids and phenolic acids, that strongly influence the sensorial and nutritional quality of the fruits and possess excellent free radical scavenging capacity (Erkan et al. 2008). They are characterized by high respiration and metabolic rates, which determine rapid tissue softening and degradation during the last stages of development (Pombo et al. 2009; Aday et al. 2013; Moya-León et al. 2019). The loss of fruit firmness during ripening and the sensitivity to fungal attack are mainly due to the continuous decrease of cell wall content as a result of the solubilization and depolymerization of cell wall components (i.e., polyuronides and hemicelluloses) and to the loss of neutral sugars (Pombo et al. 2009; Bal 2019). Fresh strawberries have a short shelf life (5 to 7 days) when stored under normal atmospheric conditions at 0 °C, and fruit losses can reach 40% during storage (Collins and Perkins-Veazie 1993; Guo et al. 2018). As a consequence, there is current interest in the enhancement of strawberry fruit shelf life by using various techniques, such as controlled atmosphere storage, modified atmosphere packaging, high oxygen treatment, ultraviolet (UVC) or gamma irradiation, edible coatings and chemical treatments (Parvez and Wani 2018).
Washing treatments in sanitizer solutions are useful and effective methods to inactivate pathogenic and spoilage microorganisms on fresh fruits. Among sanitizers, chlorine and its derivatives have been the most widely used owing to their relatively low cost, ease of use and effectiveness. In past years, other sanitizers, such as organic acids and acidic electrolyzed water (AEW), have been evaluated in the fruit processing industry as effective disinfection alternatives to chlorine, in order to avoid the risks associated with exposure to chlorinated organic by-products and to meet current safety standards (Ma et al. 2017; Pablos et al. 2018; Nicolau-Lapeña et al. 2019). In addition, nonthermal physical treatments, such as ultrasound or ultraviolet processing, have shown effectiveness and significant advantages in ensuring the microbial safety of fresh fruits (Deng et al. 2020). Organic acids, such as citric (CA), benzoic (BA) and sorbic (SA) acids, have been used to control spoilage or pathogenic bacteria on fresh and fresh-cut fruits and vegetables by disturbing ionic permeability across the membrane, promoting anion accumulation and decreasing the internal cellular pH (Parish et al. 2003). Likewise, several previous studies have reported on the effectiveness of AEW in inactivating contaminant microbiota on fresh-cut apples and carrots and on ready-to-eat vegetables and sprouts (Graça et al. 2011; Issa-Zacharia et al. 2011; Koide et al. 2011). Exposure to low UVC radiation doses (0.43, 2.15 and 4.30 kJ·m-2) has been reported to reduce postharvest decay of fresh fruits and vegetables by increasing the resistance of tissues to storage pathogens. These effects have been related to increases in the transcription and activity of a set of enzymes and proteins involved in defense against pathogens and to the biosynthesis of several secondary metabolites with antioxidant, antifungal and/or antibacterial activity (Erkan et al. 2008; Severo et al. 2015). Delay of the ripening process and reduction of fruit softening by UVC application have also been reported (Baka et al. 1999; Pan et al. 2004; Pombo et al. 2009). Furthermore, some previous studies have shown that the combination of UVC with other preservative methods (edible coatings, heat treatment) achieves good results in extending the postharvest life of fruits and vegetables and in maintaining their quality (Bal 2019; Pan et al. 2004; Lin et al. 2017). The present study was conducted to investigate the effects of postharvest chemical treatments followed by UVC irradiation on fruit quality attributes (weight loss, firmness, titratable acidity, total soluble solids), bioactive compounds (total phenolics, total anthocyanins) and antioxidant activity of 'Malvina' strawberries during 21 days of storage at 8 °C. The effectiveness of these treatments in reducing decay of strawberry fruits was also examined.

MATERIALS AND METHODS

Plant material and treatment

Strawberry fruits (Fragaria × ananassa) of the cultivar 'Malvina' were hand-harvested in 2020 at the commercially mature stage from an orchard near Marsani (44°00'56"N, 23°59'30"E), a village located in Oltenia, a region of Romania. Strawberries were cultivated under conventional open-air farming practices. The fruits were sorted to eliminate damaged, poor-quality and unripe fruit and were selected for uniform size, color and maturity.
After removing the calyx and peduncle, fruits were washed in tap water, drained at ambient temperature, placed on filter paper and randomly divided into six groups (80 fruits per group) corresponding to the following treatments: (C) fruits dipped in tap water; (UV) fruits dipped in tap water and UVC irradiated; (CA + UV) fruits dipped in 2% citric acid and UVC irradiated; (AEW + UV) fruits dipped in acidic electrolyzed water and UVC irradiated; (BA + UV) fruits dipped in 0.2% benzoic acid and UVC irradiated; (SA + UV) fruits dipped in 0.2% sorbic acid and UVC irradiated. The dipping time in the treatment solutions was about 5 min at ambient temperature (20 °C). Ultraviolet irradiation was performed under ambient conditions using an LED UVC germicidal lamp with peak emission at 254 nm (60 W, AC 100-277 V, China). A UVC irradiation dose of 2.0 kJ·m-2 was used in this study based on the results reported by Wang et al. (2015) and Jin et al. (2017), who found that this was the most effective dose for inhibiting decay and maintaining quality in strawberry fruits. The UVC setup was placed in a fume hood and fruits were allowed to dry before UV treatment. Fruits were placed in a single layer for the UV treatment and were irradiated for 30 min at approximately 30 cm under the lamp to obtain a dose of 2.0 kJ·m-2. The UVC irradiation intensity was measured using a portable digital UVC radiometer (TN-2254, Taine Co., Ltd., Taiwan, China). Immediately after the treatment, fruits were randomly placed in disposable plastic containers (500 mL capacity), each containing 16 to 20 fruits (about 240 g), covered with a lid and stored at 8 °C for 21 days. Untreated (control) fruits were stored under the same conditions. Each treatment was replicated three times and the experiment was repeated twice. Four containers were used for each replicate. Weight loss, firmness, total soluble solids, titratable acidity, total phenolic content, total anthocyanin content, antioxidant activity and fruit decay were evaluated at 0, 7, 14 and 21 days of storage. Each determination was run in triplicate.

Weight loss

Fruit weights were determined using a sensitive digital scale (Sartorius CP124S, UK; accuracy 0.01 g). Weight loss during storage was determined by measuring the fruit weight at the beginning of the experiment and at the end of each storage period and was expressed as the percentage of weight loss with respect to the initial weight.

Firmness

Fruit firmness was measured using a GY-3 fruit penetrometer (Sundoo Instruments, Zhejiang, China) fitted with a round plunger (6 mm diameter). An even force was applied to the penetrometer tip to penetrate the fruit tissue. Eight fruits from each replicate were analyzed on opposite sides of the equatorial zone for each treatment and storage time assayed, and the average value was reported in kg·cm-2.

Fruit decay

The external appearance of strawberries was evaluated after 7, 14 and 21 days of storage. Strawberry fruits showing macroscopic fungal growth or injuries on the fruit surface were considered decayed. Fruit decay was expressed as the percentage of fruits showing decay symptoms.

Titratable acidity

The titratable acidity was determined titrimetrically in 10 g of homogenate from three fruits, made up to 100 g with deionized water and titrated to pH 8.2 with 0.1 M NaOH solution. The results were expressed as grams of citric acid per 100 g fresh weight.
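The calculation behind this titration is standard acid-base stoichiometry; a minimal sketch, assuming citric acid as the reference acid (molar mass 192.12 g/mol, triprotic, so 0.06404 g per milliequivalent) and a hypothetical titrant volume:

```python
# Minimal sketch of a titratable acidity calculation expressed as citric acid.
# The 0.1 M NaOH titrant and 10 g sample size are from the text; the titrant
# volume used below is a hypothetical example value.

NAOH_MOLARITY = 0.1                  # mol/L, i.e., 0.1 meq per mL
MEQ_WT_CITRIC = 192.12 / 3 / 1000    # g citric acid per milliequivalent

def titratable_acidity(naoh_ml, sample_g):
    """Return titratable acidity as g citric acid per 100 g fresh weight."""
    grams_acid = naoh_ml * NAOH_MOLARITY * MEQ_WT_CITRIC
    return grams_acid / sample_g * 100

# Hypothetical titration: 12.5 mL of 0.1 M NaOH for a 10 g sample
print(f"{titratable_acidity(naoh_ml=12.5, sample_g=10.0):.2f} g citric acid/100 g")
```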
Two independent extracts were prepared and each one was titrated in duplicate.

Total soluble solids

A homogeneous sample was prepared by blending twenty fruits from each replicate sample in an electric blender. The total soluble solids content was determined using a digital refractometer (Hanna Instruments, Woonsocket, USA) and the results were expressed as percentage of soluble solids.

Extraction

Strawberry homogenates (3 g) were extracted with methanol (100 mL) in an ultrasonic bath for 60 min at room temperature (20 °C). The extracts were then centrifuged at 6000 rpm for 15 min. The supernatants were transferred to vials, stored at −40 °C, and later used for the determination of total phenolic content and DPPH free radical-scavenging activity.

Total phenolic content

The total phenolic content was assessed according to the Folin-Ciocalteu procedure (Singleton and Rossi 1965). Briefly, a 100 μL aliquot of extract was mixed with 5 mL of distilled water and 500 μL of Folin-Ciocalteu reagent. After 3 min, 1.5 mL of sodium carbonate solution (20% w/v) was added and the reaction mixture was diluted with distilled water to a final volume of 10 mL. After shaking vigorously and incubating in the dark at 40 °C for 30 min, the absorbance was measured at 765 nm on a Varian Cary 50 UV spectrophotometer (Varian Co., USA). A calibration curve was prepared using standard solutions of gallic acid. Results were expressed as milligrams of gallic acid equivalents (GAE) per 100 g fresh weight (fw).

Antioxidant activity

The free radical scavenging activity of the extracts against the DPPH free radical was evaluated based on the method described by Oliveira et al. (2008). An aliquot (50 μL) of fruit extract was mixed with 3 mL of DPPH methanolic solution (0.004%). The mixture was shaken vigorously and kept in the dark for 30 min. After incubation, the absorbance was measured at 517 nm on a Varian Cary 50 UV-VIS spectrophotometer against a blank of methanol without DPPH. The inhibition of the DPPH radical by the samples was calculated according to the following formula: DPPH scavenging activity (%) = [1 - absorbance of sample/absorbance of blank] × 100. Trolox was used as a standard reference and results were expressed as mmol Trolox equivalents (TE) per 100 g fresh weight (fw).

Statistical analysis

Results were expressed as means ± standard deviations. The effect of treatment was analyzed using the least significant difference (LSD) test and differences at p < 0.05 were considered significant. The statistical analysis was carried out using Statgraphics Centurion XVI software (StatPoint Technologies, VA, USA).
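As a worked illustration of the DPPH formula given above, a minimal sketch that also converts percent inhibition to Trolox equivalents through a hypothetical linear standard curve (the slope and intercept are assumptions, not the study's calibration):

```python
# Minimal sketch of the DPPH calculation described in Materials and Methods.
# Absorbances and the Trolox standard curve below are hypothetical.

def dpph_inhibition(abs_sample, abs_blank):
    """DPPH scavenging activity (%) = [1 - A_sample/A_blank] x 100."""
    return (1 - abs_sample / abs_blank) * 100

# Hypothetical linear Trolox standard curve:
# inhibition% = SLOPE * concentration + INTERCEPT
SLOPE, INTERCEPT = 9.5, 1.2   # assumed values, per (mmol Trolox/100 g fw)

def trolox_equivalents(inhibition_pct):
    """Convert % inhibition to mmol Trolox equivalents per 100 g fw."""
    return (inhibition_pct - INTERCEPT) / SLOPE

inh = dpph_inhibition(abs_sample=0.412, abs_blank=0.801)
print(f"inhibition: {inh:.1f}% -> {trolox_equivalents(inh):.2f} mmol TE/100 g fw")
```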
RESULTS AND DISCUSSION

Weight loss

The influence of the treatments on fruit weight loss is shown in Fig. 1. Weight loss of treated and untreated fruits increased during storage. The loss of weight in all UV-treated fruits was significantly lower than that in control fruits. After 14 days of storage, the loss of weight in control fruits was around 32% higher compared with samples treated with UVC alone. These results are consistent with those of previous studies demonstrating that UVC irradiation stimulated the activity of lignifying enzymes and significantly suppressed the loss of water in fruits (Lu et al. 1991; Bal 2019). In contrast, Alothman et al. (2009) found no significant difference in percent weight loss between the UVC-treated and untreated fresh-cut tropical fruits analyzed, while Maharaj et al. (1999) found that weight loss was higher in UV-treated tomato fruits than in the untreated control. Washing with organic acids or AEW caused a significant decrease in moisture loss as compared with water washing. At the end of storage, the highest weight loss was determined in control fruits (1.65%), while the lowest weight loss was determined in SA + UV-treated fruits (0.37%), followed by CA + UV-treated fruits (0.62%).

Firmness

Firmness is a very important quality factor for postharvest strawberries, as firmer fruits are better able to withstand postharvest handling and transportation (Charles and Arul 2007). The firmness of strawberries decreased during the storage period at 8 °C in both control and treated fruits. However, UV treatment had a beneficial effect on fruit firmness, as UV-treated fruits remained significantly firmer than the control throughout the storage period (Fig. 1). Similar changes in firmness induced by UVC treatment were reported in strawberries (Baka et al. 1999; Pan et al. 2004; Pombo et al. 2009), peaches (Lu et al. 1991) and mangoes (González-Aguilar et al. 2001). The higher firmness values found in UVC-treated fruit have been attributed to the decreased activity of polygalacturonase and of other cell-wall-degrading enzymes, leading to slowed cell wall degradation, a main factor involved in the softening of fleshy fruits (Pan et al. 2004). More recently, Pombo et al. (2009) considered that the effect of UVC irradiation on strawberry fruit softening could be related to the decreased transcription of a set of genes involved in cell wall degradation during the first hours after treatment. The combination of postharvest UV treatment with washing in AEW or organic acids significantly delayed the softening of strawberries stored at 8 °C. No significant differences in firmness were found between UV-, CA + UV- and BA + UV-treated samples throughout the storage period. After 14 days of storage, the samples treated with 0.2% SA followed by UVC (SA + UV) maintained significantly greater firmness than the other treated and control samples. Softening was also significantly delayed in fruit treated with AEW and UVC after 7 days of storage.

Fruit decay

Fruit decay occurred rapidly in control strawberries stored at 8 °C, with 37.77% of control fruits showing signs of infection after 14 days of storage (Fig. 2). Treatment with UV alone delayed the appearance of decay in fruits during cold storage. Previously, UV irradiation was found to be significantly effective in maintaining quality and delaying decay and the appearance of fungal growth in strawberries (Baka et al. 1999; Pan et al. 2004; Bal 2019). The reduction of fungal decay in strawberries by UV is considered to be due to the germicidal effect of UV light on microorganisms and to the modification of the fruit physiology through the induction of phytoalexins and possibly other defense mechanisms, resulting in higher disease resistance (Erkan et al. 2008). No significant differences in fruit decay were found between the samples treated with UV alone and the samples washed with AEW before the UV treatment. However, washing with organic acids before the UV treatment significantly reduced the decay of strawberries. In these samples, decay was below 14% on the 14th day of storage. This could be attributed to the synergistic effect of the treatments on delaying the appearance of mold growth and other physiological processes, such as respiration and senescence.
At the end of storage, the highest decay rate was determined in control fruits (83.33%) and the lowest decay rate in BA + UV-treated fruits (40.11%), followed by SA + UV (47.44%) and CA + UV (54.26%) treated fruits. The inhibition of decay could be due to pH reduction, disturbance of membrane transport and/or permeability, anion accumulation, inhibition of enzymes and cytoplasm acidification, as well as to the specific antimicrobial effect of organic acids (Parish et al. 2003).

Total soluble solids

The change in total soluble solids (TSS) content of strawberries as a function of storage time is shown in Fig. 3. Total soluble solids decreased slowly at the beginning of the storage period in all samples. In the last 14 days, a notable decrease of TSS was recorded in the fruits, probably determined by fruit senescence. No significant difference in TSS content was observed between untreated and treated fruits within the first 14 days of storage. In the last week of the storage period, the combination of washing in organic acids and UV treatment maintained higher levels of TSS in strawberries, as compared to control and UV-alone treated samples, by controlling fruit fungal decay and by decreasing the respiration rate and metabolic activity. At the end of the 21 days of storage, the highest TSS content was recorded in the BA + UV and SA + UV treatments (8.8%), followed by the CA + UV treatment (8.5%), while the lowest TSS content was recorded in the control (7.9%) and the AEW + UV treatment (7.8%).

Titratable acidity

The titratable acidity increased slightly in the first 7 days of storage, but decreased steadily thereafter in all samples (Fig. 3). The decrease of organic acid content during postharvest storage of strawberry fruit has also been reported in previous studies (Koyuncu and Dilmaçünal 2010) and has been attributed to the use of organic acids as respiratory substrates. Titratable acidity decreased at a slower rate in UV-treated strawberries as compared with the control, while dipping in organic acids before UV treatment further delayed the acidity decline during storage. Some previous studies also reported that UV treatment decreased the respiration rate and maintained a higher acidity in strawberries (Baka et al. 1999) and peaches (Lu et al. 1991).

Total anthocyanin content

Total anthocyanin content in control strawberries was 53.08 mg CGE·100 g-1. The anthocyanin content gradually increased during storage up to the 14th day in UV-treated fruits dipped in organic acids, while a slight decrease was recorded in control fruits and AEW + UV-treated fruits (Fig. 4). Thereafter, the anthocyanin content recorded a sharp decline in all samples until the 21st day of storage, when fruits could be considered over-ripe and senescent. Other previous studies also reported a similar pattern of an initial increase in anthocyanins followed by a decrease after prolonged storage of strawberry fruit (Zheng et al. 2007; Bal 2019). Fruits treated with UV tended to have higher total anthocyanin content compared to control fruits after the first 14 days of storage. Ultraviolet treatment has also been shown to affect anthocyanin content in strawberries in other studies (Erkan et al. 2008; Bal 2019).

Total phenolic content

In this study, the mean total phenolic content of strawberries was 144.55 mg GAE·100 g-1 on the treatment day. Immediately after the treatments, no significant difference in the total phenolic content was found.
The total phenolic content of strawberry fruit increased in all UV-treated and control samples during the 14-day storage period, and thereafter decreased sharply during the remainder of storage (Fig. 4). However, the increase was relatively lower in control fruit compared with UV-treated fruit. The surface treatment with organic acids before UV irradiation showed a positive effect in maintaining a higher concentration of total phenolics. After 14 days of storage, the highest phenolic content was found with the SA + UV treatment (191.96 mg GAE·100 g-1), followed by the CA + UV treatment (173.72 mg GAE·100 g-1), and the lowest was found in the control (157.45 mg GAE·100 g-1). These results are consistent with previous studies demonstrating that UV irradiation induces the accumulation of phenolic compounds in fruits (González-Aguilar et al. 2001; Erkan et al. 2008). This effect has been attributed to the activation of phenylalanine ammonia-lyase, which is one of the key enzymes in the synthesis of phenolic compounds in plant tissues, and to the enhancement of phenolic extractability as a result of the depolymerization and dissolution of cell wall polysaccharides (Alothman et al. 2009). The decrease in total phenolic content observed after 14 days of storage might be attributed, at least in part, to the degradation of anthocyanins.

Antioxidant activity

Initially, the mean DPPH antioxidant activity of strawberries was 4.8 mmol Trolox·100 g-1 fw. The DPPH values of strawberries slightly increased in all samples during the first 7 days of storage at 8 °C and decreased thereafter until the end of the storage period (Fig. 4). However, this increase was relatively lower in control fruit when compared with all UV-treated fruits. After 14 days of storage, SA + UV strawberry fruits had the highest DPPH values, followed by CA + UV-treated samples. Moreover, these treatments recorded the smallest decreases in antioxidant activity in the last two weeks of storage. At the end of storage, control fruits had the lowest DPPH value, which was only 4.11 mmol Trolox·100 g-1. The results are in good agreement with previous studies showing that UVC treatment increased the antioxidant activity of fruits (González-Aguilar et al. 2001; Erkan et al. 2008).

CONCLUSION

The results showed that 2 kJ·m-2 UV irradiation of fresh strawberries significantly suppressed the loss of water, maintained firmness and delayed the appearance of decay symptoms in fruits during cold storage. In addition, UV treatment alone promoted the accumulation of phenolics and increased the antioxidant activity of strawberries. The use of chemical dips before the UV treatment significantly reduced the postharvest decay of strawberries (0.2% BA > 0.2% SA > 2% CA) and was more effective than UV treatment alone in maintaining the content of health-promoting compounds (polyphenols, anthocyanins) and the antioxidant capacity of the fruits. The results suggest that postharvest treatment of strawberries with organic acids followed by UV irradiation may be a useful way of maintaining strawberry fruit quality and extending postharvest life.

DATA AVAILABILITY STATEMENT

All datasets were generated or analyzed in the current study.

ACKNOWLEDGMENTS

This work benefited from the networking activities within the European-funded COST Action CA18113 - Understanding and exploiting the impacts of low pH on micro-organisms.
Thresholds of Genotoxic and Non-Genotoxic Carcinogens

Exposure to chemical agents is an inevitable consequence of modern society, and some of these agents are hazardous to human health. The effects of chemical carcinogens are of great concern in many countries, and international organizations, such as the World Health Organization, have established guidelines for the regulation of these chemicals. Carcinogens are currently categorized into two classes, genotoxic and non-genotoxic carcinogens, which are subject to different regulatory policies. Genotoxic carcinogens are chemicals that exert carcinogenicity via the induction of mutations. Owing to their DNA interaction properties, there is thought to be no safe exposure threshold or dose. Genotoxic carcinogens are regulated under the assumption that they pose a cancer risk for humans even at very low doses. In contrast, non-genotoxic carcinogens, which induce cancer through mechanisms other than mutations, such as hormonal effects, cytotoxicity, cell proliferation, or epigenetic changes, are thought to have a safe exposure threshold or dose; thus, their use in society is permitted unless the exposure or intake level would exceed the threshold. Genotoxicity assays are an important means to distinguish the two classes of carcinogens. However, some carcinogens give negative results in in vitro bacterial mutation assays but positive results in the in vivo transgenic rodent gene mutation assay. Damage that does not involve DNA, such as that caused by spindle poisons or topoisomerase inhibitors, often leads to positive results in cytogenetic genotoxicity assays such as the chromosome aberration assay or the micronucleus assay. Therefore, mechanistic consideration of tumor induction, based on the results of genotoxicity assays, is necessary to distinguish genotoxic and non-genotoxic carcinogens. In this review, the concept of the threshold of toxicological concern is introduced, and the potential risk from multiple exposures to low doses of genotoxic carcinogens is also discussed.

INTRODUCTION

"The dose makes the poison" is a basic principle of toxicology. The adage was coined by Paracelsus, a 16th century Swiss scientist, physician, alchemist, and mystic (https://en.wikipedia.org/wiki/The_dose_makes_the_poison), who is known as "the father of toxicology" because of this famous phrase. The adage means that any chemical can be a poison if the dose is beyond a certain threshold and also that any poison can be non-toxic if the dose is below a certain threshold. Indeed, the aim of toxicology is to find the appropriate threshold or safe level of a chemical below which no hazardous effects to humans are thought to result (Fig. 1). For example, chemicals developed for use as food additives, pesticides, or veterinary drugs are all subject to toxicological assays before marketing; from these assays, the threshold level, that is, the acceptable daily intake (ADI), is determined by the authorities based on the no observed adverse effect level (NOAEL) and a safety factor, which is usually 100 (= 10 × 10), reflecting the species difference between rodents and humans (10-fold) and individual variation among humans (10-fold) (1). The ADI is the daily intake level below which no adverse effects are estimated to occur, even if a person were to take the chemical for their entire life. The NOAEL is the highest dose in toxicological assays at which no significant adverse effects are observed. The use of chemicals in society is permitted if the intake level is below the ADI.
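The ADI derivation described above is simple arithmetic: divide the NOAEL by the composite safety factor. A minimal sketch (the NOAEL value is hypothetical, not taken from any actual assessment):

```python
# Minimal sketch of deriving an ADI from a NOAEL with the default 100-fold
# safety factor described in the text (10 for species differences x 10 for
# human variability). The NOAEL value below is hypothetical.

SPECIES_FACTOR = 10
HUMAN_VARIABILITY_FACTOR = 10

def adi(noael_mg_per_kg_day,
        safety_factor=SPECIES_FACTOR * HUMAN_VARIABILITY_FACTOR):
    """Acceptable daily intake in mg per kg body weight per day."""
    return noael_mg_per_kg_day / safety_factor

print(f"ADI = {adi(noael_mg_per_kg_day=50.0):.2f} mg/kg bw/day")  # 50/100 = 0.5
```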
The concept underlying this risk management approach is exactly the principle established by Paracelsus: any poison can be non-toxic if the dose is below the appropriate threshold. The principle of Paracelsus, however, cannot be applied to the regulation of genotoxic chemicals. Genotoxic chemicals are substances that interact with DNA and may subsequently induce mutations. Owing to their DNA interaction properties, genotoxic chemicals are not considered to have a safe threshold or dose (2-4). Therefore, they are expected to impose genotoxic and carcinogenic risks on humans even at very low concentrations. This assumption is similar to the regulatory policy for radiation, which generally employs a linear non-threshold model (5). Genotoxic chemicals, like radiation, induce DNA damage and mutations that may lead to cancer; indeed, genotoxic chemicals used to be called "radiomimetic substances" owing to their DNA interaction properties (6), and it is therefore unsurprising that the regulatory policies are similar. Strict regulatory policies for genotoxic chemicals are globally accepted. The Environmental Health Criteria set by the World Health Organization (WHO) state that "substances that are both genotoxic and carcinogenic would generally not be considered acceptable for use as food additives, pesticides or veterinary drugs. For those substances that are genotoxic and carcinogenic, the traditional assumption is that there may not be a threshold dose and that some degree of risk may exist at any level of exposure" (7). Therefore, once a chemical is judged to be genotoxic and carcinogenic, it will be banned for use as a food additive, pesticide, or veterinary drug. This is in contrast to the policy for non-genotoxic carcinogens, which may be used in the market if the intake level is below the ADI (8). Thus, the ability to distinguish genotoxic and non-genotoxic chemicals is of critical importance in the regulation of chemicals.

WHAT ARE GENOTOXIC AND NON-GENOTOXIC CARCINOGENS?

The term "genotoxic carcinogens" was coined in the late 1980s based on the results of the United States National Toxicology Program (NTP) (9). In the program, chemicals were evaluated for their DNA reactivity, mutagenicity in Salmonella (Ames test), and carcinogenicity in rodents. Of the 222 chemicals tested, 115 were carcinogens; 71 of these 115 (62%) were DNA reactive (structure alert) positive and Salmonella (Ames test) positive. The remaining 44 (38%) were carcinogens but were structure alert negative and Salmonella (Ames test) negative. The former group of carcinogens was carcinogenic in both rats and mice (trans-species carcinogens) and induced tumors in multiple organs in rodents. In contrast, the latter group of carcinogens was carcinogenic in either rats or mice (single-species carcinogens) and induced tumors in single organs, in particular the liver of mice. The report clearly indicated that rodent carcinogens are not all equal and can be categorized into two classes: the former, genotoxic carcinogens, and the latter, non-genotoxic carcinogens (11-13).

Fig. 1. Models for dose-response curves of non-genotoxic and genotoxic carcinogens. Non-genotoxic carcinogens, like other toxic chemicals, have a threshold, while genotoxic carcinogens have no threshold. Non-genotoxic carcinogens can be used in society if the intake level is below the threshold. Genotoxic carcinogens are assumed to carry carcinogenic risk even at very low doses. Therefore, genotoxic carcinogens are generally not considered acceptable for use as food additives, pesticides or veterinary drugs.
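To make the contrast in Fig. 1 concrete, a minimal sketch of the two idealized dose-response models, with hypothetical slope and threshold values chosen purely for illustration:

```python
# Minimal sketch contrasting the two dose-response models of Fig. 1:
# a linear non-threshold (LNT) model for genotoxic carcinogens and a
# threshold model for non-genotoxic carcinogens. Slope and threshold
# values are hypothetical illustration parameters.

def lnt_risk(dose, slope=0.02):
    """Linear non-threshold model: some risk at any dose > 0."""
    return slope * dose

def threshold_risk(dose, threshold=10.0, slope=0.02):
    """Threshold model: no response below the threshold dose."""
    return 0.0 if dose <= threshold else slope * (dose - threshold)

for d in (0.1, 1.0, 10.0, 100.0):
    print(f"dose {d:6.1f}: LNT risk {lnt_risk(d):.4f}, "
          f"threshold-model risk {threshold_risk(d):.4f}")
```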
In practice, it is not easy to identify clear distinctions, because some genotoxicity assays, such as the chromosome aberration assay or the micronucleus assay, give positive results even in the absence of DNA damage (14-16). For example, aphidicolin, an inhibitor of DNA polymerases, and colchicine, a spindle poison, were shown to have a threshold for their clastogenicity (17-19). These chemicals do not interact with DNA but instead inhibit the functions of proteins involved in DNA replication or cell division. Therefore, it is important to consider which type of damage, DNA damage or protein damage, is responsible for the positive results in a genotoxicity assay.

GENE MUTATION ASSAYS ARE CRITICAL TO DISTINGUISH GENOTOXIC AND NON-GENOTOXIC CARCINOGENS

The genotoxicity of chemicals is usually evaluated by multiple assays, including gene mutation assays and cytogenetic assays (Table 2). In vitro gene mutation assays include the bacterial reverse mutation assay (Ames test) and the mammalian gene mutation assays; the transgenic rodent gene mutation assay is conducted in vivo. These assays detect genotoxicity based on DNA damage. Chemicals positive in these assays can be considered DNA-reactive genotoxic chemicals, which have no safe threshold. Conversely, cytogenetic assays, including the chromosome aberration assay in cultured mammalian cells or human lymphocytes in vitro and the micronucleus assay in vitro and in vivo, detect genotoxicity caused not only by DNA damage but also by other mechanisms, such as topoisomerase inhibition, spindle poisoning, or excessive cytotoxicity (16,20-22). In general, Ames-positive and transgenic-positive chemicals can be regarded as DNA-reactive in vivo genotoxic chemicals. The importance of gene mutation assays in the assessment of carcinogenic risk at low doses is emphasized in the ICH M7 guideline (https://www.pmda.go.jp/files/000208234.pdf). The International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) is an international organization for the establishment of guidelines for pharmaceuticals. M7 is the ICH guideline for the assessment and control of DNA-reactive (mutagenic) impurities in pharmaceuticals to limit potential carcinogenic risk. In Section 3 (general principles), it is stated that "the focus of this guideline is on DNA reactive substances that have a potential to directly cause DNA damage when present at low levels leading to mutations and therefore, potentially causing cancer. This type of mutagenic carcinogen is usually detected in a bacterial reverse mutation (mutagenicity) assay. Other types of genotoxicants that are non-mutagenic typically have threshold mechanisms and usually do not pose carcinogenic risk in humans at the level ordinarily present as impurities". In Section 6 (hazard assessment elements), it is stated that "a positive bacterial mutagenicity result would warrant further hazard assessment and/or control measures. For instance, when levels of the impurity cannot be controlled at an appropriate acceptable limit, it is recommended that the impurity be tested in an in vivo gene mutation assay in order to understand the relevance of the bacterial mutagenicity assay result under in vivo conditions". Thus, in vitro and in vivo gene mutation assays are of critical importance for the risk assessment of low doses of genotoxic chemicals, because chemicals with positive results in these assays should be considered to have no safe threshold.
Table 2 footnote: TG numbers represent the numbers of test guidelines established by the OECD (https://www.oecd-ilibrary.org). Although it is neither a gene mutation assay nor a cytogenetic assay, an OECD test guideline (TG489) has been established for the in vivo comet assay, which detects DNA strand breaks.

IN VITRO GENE MUTATION ASSAY: BACTERIAL REVERSE MUTATION ASSAY

The bacterial reverse mutation assay is a representative in vitro gene mutation assay (https://www.oecd-ilibrary.org/docserver/9789264071247-en.pdf?expires=1529937092&id=id&accname=guest&checksum=7B88AFDD99C1C8A18CF725E5539CF272). In general, the assay uses four Salmonella typhimurium strains and one Escherichia coli strain to detect a variety of point mutations. This assay is called the Ames test because Dr. Bruce N. Ames developed it using Salmonella strains (23). It is a simple in vitro assay that determines to what extent histidine-dependent bacteria become independent through gene mutations induced by chemicals. In the case of E. coli, tryptophan-dependent bacteria become independent; because the phenotype reverts from histidine- or tryptophan-dependence to independence, this is called a reverse mutation assay. In practice, the bacterial culture is mixed with a test chemical and the mixture is incubated for two days on agar plates. If metabolic activation is needed, the 9,000 × g supernatant of liver homogenates of rats pretreated with inducers of drug-metabolizing enzymes, plus an NADPH-generating system (S9 mix), is added to the reaction mixture. After incubation, the number of revertant colonies is counted and dose-response curves are produced. In the assay, different types of point mutations, such as base substitutions and frameshifts, are detected by using distinct bacterial tester strains. This assay detects only DNA-reactive genotoxic chemicals; chemicals with a positive result in this assay usually have structural alerts for reaction with DNA (24).

IN VIVO GENE MUTATION ASSAY: GPT DELTA TRANSGENIC MOUSE/RAT GENE MUTATION ASSAY

This assay can detect point mutations and deletions in various organs of rodents (https://www.oecd-ilibrary.org/docserver/9789264203907-en.pdf?expires=1529937241&id=id&accname=guest&checksum=3243A61DDEF495D57925DB655E2F379F). The transgenic rodents have been established in C57BL/6 mice and in Fischer 344, Sprague-Dawley, and Wistar Hannover rats (25). These transgenic rodents carry reporter genes for mutations in all cells of all organs (26-28). After treatment of the rodents with test chemicals, the transgene, i.e., lambda EG10, is rescued as phage particles from various organs by in vitro packaging reactions, and mutations are detected by infection of the rescued phages into E. coli strains. Point mutations and deletions can be detected by the gpt and Spi− assays, respectively, with different bacterial strains. As this assay detects mutations in all organs of rodents, it is possible to examine the mutagenicity of chemical carcinogens in the target organs for carcinogenesis. Approximately 20 chemicals, most of which are carcinogenic to rodents, have been examined using gpt delta mice or rats to determine their food safety (29). The results showed that estragole, madder color, and methyleugenol yielded positive results in the transgenic assays and were therefore judged to be genotoxic carcinogens (30-32). In contrast, citrinin, flumequine, Ginkgo biloba extract, and 3-monochloropropane-1,2-diol esters yielded negative results in the target organs for carcinogenicity and were therefore judged to be non-genotoxic carcinogens (33-36). Therefore, gpt delta transgenic rodent gene mutation assays can effectively distinguish genotoxic and non-genotoxic carcinogens.
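In transgenic assays of this kind, a mutant frequency is typically computed as the number of mutant colonies divided by the total number of rescued transgene copies estimated from titer plates. A minimal sketch under that assumption, with hypothetical counts rather than the published protocol:

```python
# Minimal sketch of a mutant frequency calculation for a transgenic rodent
# gene mutation assay: mutant colonies divided by the total number of
# rescued transgene copies estimated from titer plates. All counts and the
# dilution factor below are hypothetical.

def mutant_frequency(mutant_colonies, titer_colonies, dilution_factor):
    """Mutant frequency = mutants / (titer colonies x dilution factor)."""
    total_rescued = titer_colonies * dilution_factor
    return mutant_colonies / total_rescued

mf = mutant_frequency(mutant_colonies=12, titer_colonies=350,
                      dilution_factor=2000)
print(f"mutant frequency = {mf:.2e}")  # ~1.7e-05, a plausible order of magnitude
```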
AMES-NEGATIVE, BUT TRANSGENIC ASSAY-POSITIVE CARCINOGENS: ARE THEY GENOTOXIC CARCINOGENS?

Although it appears simple to distinguish genotoxic and non-genotoxic carcinogens, the reality is more complex. Some rodent carcinogens yield negative results in the bacterial reverse mutation assay but positive results in the gpt delta rodent gene mutation assay in the target organs for carcinogenesis. The following are examples for which the judgement of genotoxic or non-genotoxic carcinogen is difficult, even with in vitro and in vivo gene mutation assays (Table 3).

The first example is estragole, a constituent of fragrant herbs, which induces liver tumors (hepatoma) in female mice (37). This chemical yields a negative result in the bacterial reverse mutation assay (38,39). To examine its genotoxicity in mice, male and female gpt delta mice were administered estragole by gavage for 13 weeks, and micronucleus induction in bone marrow, gpt gene mutations, and DNA adducts in the liver were examined (40). Although the in vivo micronucleus assay was negative, the gpt mutant frequency was clearly elevated in the liver of female mice. DNA adduct levels were also increased and were higher in female than in male mice. Estragole is hydroxylated on its side chain and further activated by sulfotransferase (41); the activity of this enzyme is known to be higher in female mice than in male mice (42). The results indicate that estragole is a DNA-reactive genotoxic carcinogen.

The second example is leucomalachite green, a reductive metabolite of malachite green, which is an antifungal agent for fish (43). Leucomalachite green induces liver tumors in female mice (44). Malachite green and leucomalachite green yield negative results in the bacterial reverse mutation assay (45). However, the US FDA reported that leucomalachite green induced gene mutations in the liver of female mice when administered in the diet for 16 weeks in Big Blue mice, another transgenic rodent model used for mutation detection (46). No mutagenicity was detected in the liver of Big Blue rats (47). DNA adducts were also detected in the liver of female mice. Although malachite green induced neither tumors nor genotoxicity in rats and mice, it did induce DNA adducts in the liver of male rats and female mice (48). Therefore, the European Food Safety Authority (EFSA) concluded that malachite green and leucomalachite green should be regarded as genotoxic carcinogens (49); however, it remains to be clarified why leucomalachite green yields negative results in all in vitro genotoxicity assays, including the bacterial reverse mutation assay (45).

The third example is dicyclanil, an insect growth regulator, which induces adenoma and adenocarcinoma in the liver of female mice (50,51). This chemical yields negative results in various genotoxicity assays, including in vitro assays such as the bacterial reverse mutation assay and in vivo assays such as the micronucleus assay and comet assay; therefore, it was regarded as a non-genotoxic carcinogen (52). However, dicyclanil induces gpt gene mutations in the liver of female mice when administered in the diet for 13 weeks (51).
No mutations were detected in the liver of male mice. 8-Hydroxyguanine, an index of oxidative DNA damage, was increased in the liver of both male and female mice, but cell proliferation was increased only in female mice. Therefore, the induction of oxidative DNA damage combined with enhanced cell proliferation may explain the female-specific mutations induced by dicyclanil. However, it is unclear why this chemical is negative in the bacterial reverse mutation assay, and whether carcinogens that induce oxidative DNA damage via the generation of reactive oxygen species should be regarded as DNA-reactive genotoxic carcinogens.

The fourth example is ochratoxin A, a mycotoxin that induces adenoma and adenocarcinoma in the kidney of rodents (53). It is regarded as a causative agent of Balkan endemic nephropathy in humans (54). Thus, ochratoxin A is classified in Group 2B (possible human carcinogen) by the International Agency for Research on Cancer (IARC) (55). Ochratoxin A yields a negative result in the bacterial reverse mutation assay, but mixed positive and negative results have been reported in chromosome aberration and micronucleus assays in vitro and in vivo (55). Hibi et al. reported that the Spi− mutant frequency was significantly increased in the outer medulla of the kidney when gpt delta rats were fed ochratoxin A in their diet for 4 weeks (56). The outer medulla includes the target site for carcinogenesis, i.e., the S3 segment of the proximal tubule. Interestingly, no increase in gpt mutant frequency was observed in the cortex or the outer medulla. Because Spi− selection detects deletion mutations, the increase in Spi− mutant frequency indicates that DNA strand breaks were induced in the target site for carcinogenesis (25,57). However, if DNA adducts had been induced by ochratoxin A in the target site, the gpt mutant frequency should have increased in addition to the Spi− mutant frequency. Therefore, ochratoxin A may inhibit the functions of proteins involved in the cell cycle or DNA repair, thereby inducing DNA strand breaks, and may thus be a non-genotoxic carcinogen.

PRACTICAL THRESHOLDS OF GENOTOXIC CARCINOGENS

The regulatory policy that there are no safe exposure thresholds for genotoxic carcinogens was discussed above. However, this policy has recently been challenged by a number of experimental and theoretical approaches claiming that even DNA-reactive genotoxic carcinogens may have a practical threshold for their action (58)(59)(60)(61). Indeed, considering the mechanisms through which a chemical induces mutation and cancer, there are several steps that may suppress the induction of mutation and cancer (62). Genotoxic compounds are metabolically activated to reactive intermediates that induce DNA adducts and DNA lesions; subsequently, the DNA lesions become mutations after DNA replication. To counteract this adverse pathway, humans and other organisms have self-defense mechanisms such as antioxidants, metabolic inactivation, DNA repair, and error-free translesion DNA synthesis (TLS) (Fig. 2).
Detoxification mechanisms inactivate genotoxic compounds, DNA repair removes DNA adducts, and error-free translesion synthesis incorporates the correct base opposite a DNA lesion during DNA synthesis, thereby suppressing the induction of mutations. Between mutation and cancer there are further mechanisms, such as apoptosis, that suppress the induction of cancer. These self-defense mechanisms may constitute a practical threshold for genotoxic carcinogens.

Fig. 2. Self-defense mechanisms against genotoxic chemicals. Genotoxic chemicals may be inactivated by metabolic inactivation. When DNA adducts are formed, the adducts may be removed by DNA repair mechanisms. If the adducts remain in DNA, error-free translesion DNA synthesis (TLS) will incorporate correct dNTPs opposite the lesions, thereby suppressing the induction of mutations.

To examine this possibility, the DNA repair enzyme 8-hydroxyguanine DNA glycosylase, encoded by the mutM gene in Salmonella typhimurium, was disrupted (63). This enzyme repairs 8-hydroxyguanine in DNA and reduces G:C to T:A mutations. In fact, the enzyme-deficient strains exhibited much greater sensitivity to the mutagenicity of oxidative mutagens than the enzyme-proficient strains (Fig. 3) (63,64). In particular, potassium bromate tested virtually negative for mutagenicity in the enzyme-proficient strain, whereas it exhibited high mutagenicity in the deficient strain. Therefore, 8-hydroxyguanine DNA glycosylase appears to be a constituent of a practical threshold for oxidative mutagens.

Fig. 3 (legend). Closed black circles, Salmonella typhimurium TA1535; closed red circles, YG3001 (same as TA1535 but ΔmutM); closed black squares, TA1975 (same as TA1535 but uvrB+); closed red squares, YG3003 (same as TA1975 but ΔmutM). When the mutagenicity of benzo[a]pyrene was tested in the presence of visible light, plates were exposed to fluorescent 15 W lamps at a distance of 30 cm during incubation at 37°C for two to three days. The data are from references (63,64).

THRESHOLD OF TOXICOLOGICAL CONCERN (TTC) OF GENOTOXIC CARCINOGENS

Another challenge to the regulatory policy that DNA-reactive genotoxic carcinogens have no threshold is the concept of the "threshold of toxicological concern" (TTC) or "threshold of regulation" (TOR) (65). Essentially, the underlying idea of the TTC or TOR is that it is impossible to completely suppress the excess lifetime cancer risk associated with chemical exposure, and that there is an increasing number of new chemicals for which toxicological information is insufficient. Therefore, the TTC or TOR is applied to prioritize the chemicals that need further toxicological evaluation. In 1995, the US FDA adopted a TOR of 0.5 ppb for food contact materials (corresponding to 0.025 µg/kg body weight (bw)/day or 1.5 µg/person/day, based on a 60 kg bw and a combined food and drink daily consumption of 3 kg), provided there is no concern for DNA-reactive genotoxicity (66). In other words, the chemical can be marketed without additional toxicity assays if the intake level is below 0.5 ppb, or 1.5 µg/person/day, and there is no concern that the chemical has DNA-reactive genotoxicity. Later, the ICH M7 guidance proposed a TTC for pharmaceutical impurities, in which an intake level below 1.5 µg/person/day does not increase the excess lifetime cancer risk by more than 10⁻⁵, even when there is a concern that the impurity may have DNA-reactive genotoxicity (https://www.pmda.go.jp/files/000208234.pdf); however, the guidance indicates that highly potent DNA-reactive carcinogens, such as aflatoxin-like, azoxy-, or N-nitroso-carcinogens, are outside the scope of the TTC approach. These exceptional chemicals are sometimes called the "cohort of concern" (COC) (67).
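As a quick check of the TOR arithmetic quoted above, the following short calculation (a sketch; the 3 kg/day consumption and 60 kg body weight are the assumptions stated in the text) reproduces the quoted figures:

TOR_PPB = 0.5          # threshold of regulation: µg of chemical per kg of food
DIET_KG_PER_DAY = 3.0  # combined food and drink daily consumption (assumed)
BODY_WEIGHT_KG = 60.0  # reference body weight (assumed)

daily_intake_ug = TOR_PPB * DIET_KG_PER_DAY   # 0.5 ppb x 3 kg = 1.5 µg/person/day
per_kg_bw = daily_intake_ug / BODY_WEIGHT_KG  # 1.5 / 60 = 0.025 µg/kg bw/day
print(f"{daily_intake_ug} µg/person/day = {per_kg_bw} µg/kg bw/day")

The ICH M7 TTC reuses the same 1.5 µg/person/day figure, and the EFSA/WHO value for DNA-reactive genotoxic chemicals discussed next is simply one tenth of it (0.15 µg/person/day, or 0.0025 µg/kg bw/day).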
Both the US FDA and the ICH adopt the same value of 1.5 µg/person/day as the TOR and TTC, respectively; however, the former excludes DNA-reactive genotoxic chemicals while the latter includes them. The difference in policy may depend on the usage of the chemicals: food contact materials do not necessarily provide health benefits to consumers, and therefore the policy is more conservative, whereas pharmaceuticals may be needed to maintain or improve the health of patients, even if the drugs contain some DNA-reactive impurities. In fact, EFSA/WHO proposed, as a sufficiently protective TTC for DNA-reactive genotoxic chemicals, a level 10 times lower than that proposed by the ICH, i.e., 0.15 µg/person/day or 0.0025 µg/kg bw/day (67,68). The COC chemicals are also excluded from this TTC approach. Currently, the TTC approach has been established to regulate chemicals in several areas, such as food contact materials, food flavoring agents, and pharmaceutical impurities (65). A TTC approach for non-genotoxic carcinogens has also been proposed (67,69). It should be noted, however, that the TTC approach is not an alternative to a chemical-specific risk assessment, but a screening tool to decide whether further toxicological evaluation of a chemical is necessary (68).

FUTURE CHALLENGE: RISK ESTIMATION OF COMBINED EXPOSURE TO GENOTOXIC CARCINOGENS AT LOW DOSES

Following the emergence of the TTC approach in several areas of chemical regulation, questions have been raised as to whether the public is adequately protected from exposure to, or intake of, multiple DNA-reactive genotoxic carcinogens at low doses. The current regulatory policy evaluates the genotoxic and carcinogenic risk of each chemical individually. Moreover, the TTC is not an absolute threshold, and thus some low level of cancer risk, e.g., 10⁻⁵ or 10⁻⁶, exists even below the TTC. This is in contrast to an absolute threshold, below which there is no risk to human health (19). Therefore, it is suspected that a detectable carcinogenic risk may appear when people are exposed to multiple DNA-reactive genotoxic carcinogens, even below the TTC. It has been reported that mutagenicity in Salmonella typhimurium strains was detectable when six DNA-reactive genotoxic carcinogens were combined at quite low doses (70); each single carcinogen did not exhibit any detectable mutagenicity owing to the low dose. Although this is a simple additive effect, synergistic effects may occur, depending on the combination of chemicals. Although chemicals are regulated by different authorities depending on their intended use, e.g., food-related chemicals, industrial chemicals, air pollutants, and pharmaceuticals and their impurities, simultaneous exposure to these chemicals is unavoidable. Currently, there is no effective approach to evaluate the genotoxic and carcinogenic risk of exposure to low doses of multiple DNA-reactive genotoxic carcinogens. One approach for regulating the total carcinogenic risk to humans would be to establish weighted allocations for each class of chemicals, for example: food-related chemicals, 50% (0.5 × 10⁻⁵); industrial chemicals, 10% (0.1 × 10⁻⁵); air pollutants, 10% (0.1 × 10⁻⁵); pharmaceuticals, including impurities, 20% (0.2 × 10⁻⁵); and others, 10% (0.1 × 10⁻⁵). The total carcinogenic risk would then remain below 1 × 10⁻⁵, even when people are exposed to multiple genotoxic carcinogens.
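The allocation scheme just proposed is simple enough to verify directly; the following sketch (the class shares are the text's illustrative figures, not regulatory values) confirms that the allocated risks sum to the 1 × 10⁻⁵ total under the additivity assumption:

# Partition a total acceptable excess lifetime cancer risk of 1e-5
# across chemical classes, using the shares proposed in the text.
TOTAL_RISK = 1e-5
allocation = {
    "food-related chemicals": 0.50,
    "industrial chemicals": 0.10,
    "air pollutants": 0.10,
    "pharmaceuticals (incl. impurities)": 0.20,
    "others": 0.10,
}
assert abs(sum(allocation.values()) - 1.0) < 1e-12  # shares must sum to 100%
for chemical_class, share in allocation.items():
    print(f"{chemical_class}: allocated risk {share * TOTAL_RISK:.1e}")
# Under additivity, combined exposure across all classes stays within 1e-5.

Whether simple additivity holds in practice is precisely the open question raised by the combined-exposure experiment cited above.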
Risk assessment of combined exposure to multiple DNA-reactive genotoxic carcinogens below the TTC may be a challenge for regulatory genetic toxicology.