A Hybrid Monte Carlo Self-Consistent Field Model of Physical Gels of Telechelic Polymers
We developed a hybrid Monte Carlo self-consistent field technique to model physical gels composed of ABA triblock copolymers and gain insight into the structure and interactions in such gels. The associative A blocks of the polymers are confined to small volumes called nodes, while the B block can move freely as long as it is connected to the A blocks. A Monte Carlo algorithm is used to sample the node configurations on a lattice, and Scheutjens–Fleer self-consistent field (SF-SCF) equations are used to determine the change in free energy. The advantage of this approach over more coarse-grained methods is that we do not need to predefine an interaction potential between the nodes. Using this MC-SCF hybrid simulation, we determined the radial distribution functions of the nodes and the structure factors and osmotic compressibilities of the gels. For a high number of polymers per node and a solvent–B Flory–Huggins interaction parameter of 0.5, phase separation is predicted. Because of limitations in the simulation volume, however, we did not establish the full phase diagram. For comparison, we performed some coarse-grained MC simulations in which the nodes are modeled as single particles with pair potentials extracted from SF-SCF calculations. At intermediate concentrations, these simulations gave qualitatively similar results to those of the MC-SCF hybrid. However, at relatively low and high polymer volume fractions, the structure of the coarse-grained gels is significantly different because higher-order interactions between the nodes are not accounted for. Finally, we compare the predictions of the MC-SCF simulations with experimental and modeling data on telechelic polymer networks from the literature.
■ INTRODUCTION
Here we describe a combination of the Scheutjens−Fleer self-consistent field theory with a Monte Carlo algorithm, which is used to simulate a gel network of symmetric telechelic polymers. These polymers have associative end-blocks, while the middle block is soluble. This combination leads to the formation of micelles in which the end-blocks associate in the core and the middle blocks form the corona. Such micelles are called flower-like micelles, with the core as the heart and the polymer loops as petals. At a sufficiently high concentration of micelles, the micellar cores are so close to each other that the two ends of a polymer can be in different micelles, thus forming a bridge. Because the polymers can now form both loops and bridges, the number of possible conformations and thus the entropy increases. This increase in entropy gives an attractive contribution to the interaction between the micelles. If there are enough bridges to form a percolating network, a gel network is formed with the micellar cores as the nodes, as shown in Figure 1.
Some researchers have reported that the attraction can become so strong that phase separation occurs; 1−3 others, however, did not observe phase separation. 4 One reason why these experiments show different outcomes is that it is difficult to synthesize these polymers. Often the middle blocks show considerable polydispersity and not all polymer ends are functionalized. The latter will increase steric repulsion between the micelles and thus prevent phase separation. In computer simulations, these problems can be avoided.
We assume that the binding energy of end-blocks to the micellar cores is so high that the concentration of free ends is negligible but still low enough that the ends can exchange between the cores. This allows the polymers to redistribute themselves over the micelles and form new bridges. This enables these gel networks to heal themselves when damaged. 5,6 Because of these properties, telechelic polymers are applied in the paint industry to improve the rheological behavior of paints. They can also be used as a gel material for gel electrophoresis. Furthermore, they are studied as drug carriers for slow drug release. A hydrophobic drug can be dissolved in the core of the micelles. When a gel made of telechelic polymers is placed in the body, it will slowly release individual micelles and thus the drug over time. An additional advantage in chemotherapy is that, due to the increased permeability of blood vessels in tumors, the micelles can accumulate in tumor tissue, which then receives a higher dose of the drug. 7 There have been many experimental studies on gels made of telechelic polymers. 1−6 The number of theoretical studies and simulations is, however, limited. 8 The length of the polymers makes it time-consuming to study these networks with molecular dynamics simulations, even when coarse-grained bead-and-spring models are used for the chains. This is because a representative fraction of all the possible states of the system has to be sampled. As the polymers are large and can easily entangle, they diffuse slowly, and it therefore takes a long time to reach and sample the equilibrium structure. One could choose to use even more coarse-grained models, for example, by simulating an entire micelle as a single particle. 8 It is, however, difficult to describe the proper interaction potentials between the micelles, as the interactions are not necessarily pairwise additive. That is, the strength of the interaction between two micelles depends on the surroundings of the micelles. 9 A solution to this problem is to employ a hybrid simulation technique which combines the benefits of particle simulations with the computational efficiency of self-consistent field calculations. Here, we combine the SF-SCF (Scheutjens−Fleer self-consistent field) method with a Monte Carlo algorithm. With the SF-SCF model, the free energy of a particular configuration of the nodes is calculated based on an average over all possible freely jointed chain conformations. This reduces the simulation time because the polymer configurations no longer need to be sampled individually. The positions of the nodes are sampled using a Monte Carlo algorithm that uses the free energy determined by the SF-SCF model to accept or reject the moves of the cores. In our model, we focus on the interactions between the nodes caused by the polymers. The binding of the polymers to the nodes is done in a simplified manner because the interactions between the nodes are not influenced by the specific mechanism through which the polymer ends bind. Hence the results can be applied to a variety of gels of polymers with associative end groups, regardless of the exact binding mechanism.
The goals of this article are to demonstrate this hybrid Monte Carlo SF-SCF approach and to apply it to a system of polymer micelles in solution to gain insight into the structure of such systems. We compare the results to Monte Carlo simulations where the nodes, with their polymers, have been coarse-grained to particles, with effective pair potentials as calculated in our previous article. 9 To further validate the method, the structure factors of the systems simulated by the Monte Carlo SF-SCF method are compared to experimental data found in the literature.
It should be noted that this is not the first time that the SF-SCF theory has been combined with a Monte Carlo algorithm. Previously, we showed some preliminary results for a charged polymer gel adsorbed on a wall, obtained with a model very similar to the one described in the present article. 10 Furthermore, Charlaganov et al. 11 used a combination of SF-SCF with Monte Carlo to study the depletion interaction of polymers near walls. They used approximate pair potentials to do a Monte Carlo simulation and subsequently used the self-consistent field theory to calculate a more accurate free energy and correct for the incorrect weighting of the states. Potentially their method is more efficient, as the SF-SCF equations do not need to be solved for the rejected states. The rate at which the states of the system are visited is, however, determined by the free energy of the Monte Carlo simulation. If this free energy is not accurate, more steps are needed to reduce the noise level. This method is therefore only effective if a good approximation for the free energy can be determined. The more particles are present, the more accurate the interaction potential needs to be, as the error would scale with the square root of the number of particles. With our method, we do not need approximate potentials and the number of particles we can simulate is thus not limited in this way.
■ METHOD
First, we will explain the SF-SCF theory, specifically for the 3D simple cubic lattice that was used in the present study. It is based on the method used in our previous paper on the interactions between nodes with telechelic polymers, and some more details can be found there. 9 Descriptions of the SF-SCF theory for other types of lattices can be found in the literature. 9,12−15 Next, we will show how we modeled the physical gel with the SF-SCF theory. Subsequently, the details of the Monte Carlo method will be described, and finally the methods for analyzing the data will be discussed.
SF-SCF Theory. With the SF-SCF method, space is divided into lattice sites, which in the present study have a simple cubic ordering. Small molecules, such as solvent molecules, are represented by a single segment that has the size of one lattice site. Larger molecules, such as the polymers considered here, are represented by multiple segments. We assume that the segments of a polymer are connected like a freely jointed chain. Because we use a simple cubic lattice, the angle between subsequent segments can only be 180°, 90°, or 0°. For 0°, the polymer is thus allowed to fold back onto itself. Segments adjacent to each other in the molecule of course still have to be next to each other on the lattice. The short-range part of the interaction between different types of segments is quantified by the Flory−Huggins parameter χ, which is half of the change in free energy when two segments are exchanged between homogeneous phases of each segment type.
It would be far too much work to generate all the ways in which the polymers can distribute themselves over the system one by one. So instead, we determine the average distribution of the polymers over the system, i.e., we try to find the volume fractions for each segment type at each lattice site. These volume fractions can also be regarded as an average over time. This is done by generating all the possible polymer
conformations, which are all the possible paths of the polymer chain on the lattice. Subsequently, the polymers are distributed over these conformations according to their Boltzmann weights. Because many of the conformations are nearly identical, this saves computation time. A disadvantage is that the interactions between the segments are calculated based on the average surroundings rather than on a specific configuration of the polymers, where with a configuration we mean a particular distribution of the polymers over the conformations.
The polymers will distribute themselves over the polymer conformations according to the Boltzmann weight e^(−U_c) of these conformations. U_c is the energy, in units of k_BT, of a particular polymer conformation c, given the average surroundings of this conformation. U_c is the sum of the energy contributions of each segment. We call these contributions the segment potentials u_X(r), where X is the segment type and r is its location. These segment potentials u are calculated from the volume fractions φ of the various segment types in the neighboring lattice sites, which in turn are calculated from the segment potentials. We repeat this iterative process until we find a self-consistent solution, in other words, until the segment potentials derived from the volume fractions are the same as those that were used to calculate these volume fractions. The segment potential of a segment of type X is given by

u_X(r) = ∑_Y χ_XY ⟨φ_Y(r)⟩ + α(r)    (1)

Here, the first term ∑_Y χ_XY ⟨φ_Y(r)⟩ describes the average interaction energy of a segment of type X, at position r, with segments of types Y in sites adjacent to position r, χ_XY is the Flory−Huggins parameter for the interaction between segments of type X and Y, and ⟨φ_Y(r)⟩ is the average volume fraction of segment type Y in all neighboring lattice sites r′. The latter is given by

⟨φ_Y(r)⟩ = (1/Z) ∑_{r′ adjacent to r} φ_Y(r′)    (2)

where Z is the number of neighboring lattice sites. In our case, we have a simple cubic lattice and Z = 6. We do not need to consider the interaction energy between segments of the same type as the Flory−Huggins parameter χ_XX = 0 by definition.
The second term in eq 1, α(r), is used to ensure that the sum of the volume fractions at each lattice site is unity. It has to increase when the sum of the volume fractions is larger than one and decrease when the sum is less than one. We chose to update α(r) with each iteration step as

α_new(r) = α_old(r) + η (∑_X φ_X(r) − 1)    (3)

The factor η = 0.3 is small enough to prevent divergence. For the first iteration, α_old(r) = 0.
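To make the fixed-point iteration of eqs 1−3 concrete, the sketch below shows one possible implementation in Python on a periodic simple cubic lattice. It is a minimal illustration, not the code used for this work: the array layout, the tolerance, the helper names, and the monomer-like update of the volume fractions (which for real chains would be replaced by the propagator formalism of eqs 4−7) are our own assumptions.

```python
import numpy as np

def neighbor_average(field):
    # <phi(r)> of eq 2: average over the Z = 6 nearest neighbors on a periodic cubic lattice
    return sum(np.roll(field, shift, axis) for axis in range(3) for shift in (-1, 1)) / 6.0

def scf_iterate(phi, chi, eta=0.3, tol=1e-6, max_iter=10000):
    """Toy self-consistent loop: potentials from volume fractions (eq 1), fractions from
    potentials, and the incompressibility field alpha updated as in eq 3.

    phi : dict mapping segment type -> 3D array of volume fractions (initial guess)
    chi : dict mapping (X, Y) -> Flory-Huggins parameter chi_XY
    """
    alpha = np.zeros_like(next(iter(phi.values())))
    for _ in range(max_iter):
        # eq 1: segment potentials from the site-averaged volume fractions
        u = {X: sum(chi.get((X, Y), 0.0) * neighbor_average(phi[Y]) for Y in phi) + alpha
             for X in phi}
        # placeholder closure: monomer-like Boltzmann weights; real chains need eqs 4-7
        phi = {X: np.exp(-u[X]) for X in phi}
        total = sum(phi.values())
        alpha = alpha + eta * (total - 1.0)          # eq 3, pushing the site fillings toward unity
        if np.max(np.abs(total - 1.0)) < tol:        # self-consistency reached
            break
    return phi, u, alpha
```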
The volume fraction of a segment s of the polymer chain at lattice site r is given by the sum of the Boltzmann weights of all chain conformations c that pass through r with segment s, multiplied with a normalization constant C:

φ_s(r) = C ∑_{c: segment s at r} e^(−U_c)    (4)

The normalization constant C is the number of polymers n divided by the partition function q, which is the sum of the Boltzmann weights of all conformations of the polymer:

C = n/q,  with  q = ∑_c e^(−U_c)    (5)

An efficient way to calculate q and φ_s is to use the propagator formalism.
The end point distribution function G(r, N + 1) is the average Boltzmann weight of all chain conformations ending with segment s = N + 1 on lattice site r. In this study, we only allowed the polymers to start at coordinates that lie within the nodes. We therefore write the end point distribution function as G(r, N + 1|{r_n}, 0), indicating that only the conformations starting with segment 0 within {r_n} contribute to the end point distribution function, as we have set the Boltzmann weight of all other polymer conformations to zero. Because the position of the last segment is the same for all conformations, we can move the contribution of the last segment e^(−u_X(r)) outside this summation:

G(r, N + 1|{r_n}, 0) = e^(−u_X(r)) (1/Z) ∑_{r′ adjacent to r} G(r′, N|{r_n}, 0)    (6)

where X is the segment type of segment N + 1. The second part is the summation of the end point distribution functions of the chain without the last segment over all sites r′ that are adjacent to r; because only a fraction 1/Z of the conformations ending at a neighboring site r′ continues toward r, this sum is divided by Z. Applying eq 6 recursively gives the end point distribution functions for all segments s. The volume fraction of segment s then follows by combining the propagators that approach segment s from the two ends of the chain:

φ_s(r) = C G(r, s|{r_n}, 0) G(r, s|{r_n}, N + 1) / e^(−u_X(r))
       = C G(r, s|{r_n}, 0) G(r, N + 1 − s|{r_n}, 0) / e^(−u_X(r))    (7)

As the Boltzmann weight of segment s is in both propagators, we need to divide by e^(−u_X(r)). Because the polymers in our system are symmetric, we can save computation time by rewriting the first line of eq 7 as the second line so that only the propagators starting with segment 0 have to be calculated.
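As an illustration of the propagator formalism of eqs 6 and 7, the sketch below computes the end point distribution functions by the recurrence and composes them into segment volume fractions for the symmetric chain used here. It is a hedged sketch under simplifying assumptions of our own: the A-segment potential is taken to be zero inside the nodes, and the node mask, array names, and helper functions are illustrative rather than the published code.

```python
import numpy as np

def propagate(u_B, node_mask, N):
    """End point distributions G(r, s | {r_n}, 0) for s = 0 .. N+1 on a periodic cubic lattice.

    u_B       : 3D array of B-segment potentials (in units of k_B T)
    node_mask : boolean 3D array, True inside the nodes {r_n} where the A ends may sit
    N         : number of B segments in the middle block
    """
    w_B = np.exp(-u_B)                        # Boltzmann weight of a free B segment
    def avg(G):                               # (1/Z) sum over the six neighboring sites
        return sum(np.roll(G, s, a) for a in range(3) for s in (-1, 1)) / 6.0
    G = [node_mask.astype(float)]             # segment 0: conformations may only start in a node
    for s in range(1, N + 1):                 # eq 6, applied segment by segment
        G.append(w_B * avg(G[-1]))
    G.append(node_mask * avg(G[-1]))          # segment N+1: the other A end, again confined to the nodes
    return G

def segment_fractions(G, u_B, n_polymers):
    """Compose forward and backward propagators into phi_s(r) for the B segments (eq 7)."""
    q = G[-1].sum()                           # single-chain partition function
    C = n_polymers / q
    N = len(G) - 2
    # symmetric chain: the propagator from the other end for segment s equals G[N + 1 - s]
    return [C * G[s] * G[N + 1 - s] / np.exp(-u_B) for s in range(1, N + 1)]
```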
The overall volume fraction distribution of polymers is found by summing over all the polymer segments:

φ_P(r) = ∑_{s=0}^{N+1} φ_s(r)    (8)

The distribution of the monomeric solvent S simply follows from the Boltzmann weight:

φ_S(r) = C_S e^(−u_S(r))    (9)

When the segment potentials are normalized to zero in the pure solvent phase, which is in equilibrium with our system, C_S = 1. The Helmholtz energy, which is needed for the Monte Carlo moves, is given by

F = −ln Q − ∑_r α(r)    (10)

where Q is the partition function of the system. We calculate Q by using the ideal gas approximation:

Q = (q_P^n / n!) (q_S^(n_S) / n_S!)    (11)
The first term is the contribution from the polymers, while the second term comes from the solvent. Here n_S is the number of solvent molecules. The single-molecule partition function of the polymers, q_P, is calculated by summing the end point distribution function over all positions r: q_P = ∑_r G(r, N + 1|{r_n}, 0).
For the solvent, the single-molecule partition function is given by q_S = ∑_r e^(−u_S(r)).
In the second term of eq 10, we correct for the use of the Lagrange parameter. We have previously made two potentially conflicting assumptions. We have assumed that the system is incompressible, and by defining C S = 1 we assume that the system is in equilibrium with a pure solvent phase. If, for example, we would place a solvophobic wall in our system, the volume fraction of the solvent would be lower near the wall than in the pure solvent. To make sure that the volume fraction next to the wall is the same as that in the pure solvent, which is required for an incompressible system, we introduced the extra potential α(r). There is of course no physical origin for this potential, and to get the correct Helmholtz energy for the given volume fractions, α(r) has to be subtracted from the Helmholtz energy.
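Combining these pieces, the Helmholtz energy used later in the acceptance rule can be assembled as in the sketch below, assuming the forms of eqs 10 and 11 reconstructed above (ideal-gas partition function, minus the summed Lagrange field). The function signature and variable names are our own illustrative choices.

```python
import numpy as np
from math import lgamma

def helmholtz_energy(G_end, u_S, alpha, n_polymers, n_solvent):
    """Helmholtz energy in units of k_B T following eqs 10 and 11 (as reconstructed above).

    G_end : end point distribution G(r, N+1 | {r_n}, 0) on the lattice
    u_S   : solvent segment potentials
    alpha : Lagrange field enforcing incompressibility
    """
    q_P = G_end.sum()                        # single-chain partition function
    q_S = np.exp(-u_S).sum()                 # single solvent-molecule partition function
    ln_Q = (n_polymers * np.log(q_P) - lgamma(n_polymers + 1)
            + n_solvent * np.log(q_S) - lgamma(n_solvent + 1))
    # subtract the Lagrange-field contribution, which has no physical origin
    return -ln_Q - alpha.sum()
```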
Gel Description within SF-SCF Theory. Here and below we will express the Helmholtz energy in units of k_BT and measure the distances in lattice units. The polymers are represented by a chain of N = 50 segments B, which represents the middle block, and one segment A at each end, representing the end groups. We forced the end groups of these polymers to stay together, like in the micelles, by defining small volumes, called nodes, with a size of 3 by 3 by 3 lattice sites. By setting the Boltzmann weights for segments A to zero outside the nodes, the end groups are forced to stay within the nodes. The set {r_n} thus encompasses all lattice sites that lie within the nodes. These nodes will be moved using a Monte Carlo scheme. Because the number of nodes that we can model is limited, we use periodic boundary conditions, so there is no interface in the system.
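For illustration, a node mask such as the one used in the sketches above can be built by marking the 3 × 3 × 3 sites around every node center on the periodic lattice; A segments then receive zero statistical weight wherever the mask is False. The helper below is our own construction, not code from the paper.

```python
import numpy as np

def build_node_mask(node_centers, box):
    """Boolean lattice mask that is True inside every 3x3x3 node volume.

    node_centers : (M, 3) integer array of node center coordinates
    box          : edge length of the cubic lattice (periodic boundaries)
    """
    mask = np.zeros((box, box, box), dtype=bool)
    offsets = np.array([(i, j, k) for i in (-1, 0, 1)
                        for j in (-1, 0, 1)
                        for k in (-1, 0, 1)])
    for center in node_centers:
        sites = (np.asarray(center) + offsets) % box   # wrap around the periodic boundaries
        mask[sites[:, 0], sites[:, 1], sites[:, 2]] = True
    return mask
```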
The following values for the various parameters were used as defaults in these experiments. The Flory−Huggins parameter χ was 0.4, and the polymer volume fraction φ was 0.25. The number of nodes M was 125 with f = 5 polymers per node, thus 625 polymers in total. We investigated the effect of changing several of these parameters on the structure of the gel. The volume fraction of the polymer φ was varied from 0.5 to 0.031. The effect of the Flory−Huggins parameter was studied by doing additional calculations for χ = 0.0 and χ = 0.5. Calculations were also done for 2.5 and 10 polymers per node. The number of polymer ends in each node is not fixed but can fluctuate around the average value depending on the statistical weights of the conformations starting and ending at this node.
In practice, these fluctuations in the number of polymer ends in each node are limited due to the steric hindrance between the polymers. This is similar to real systems, in which the number of polymers per node can also fluctuate. It also allows for slightly different compositions of the dilute and concentrated phases when phase separation occurs. In Figure 2, we show a few examples of the probability density function f(N_e) of the number of end groups per node N_e. This distribution clearly becomes wider as the concentration increases. At high density, the steric hindrance between the polymers is less because the density around the nodes quickly drops to the bulk density and the polymers from the same node repel each other only over a short distance. It is therefore not so disadvantageous to put more than the average number of polymers on a node.
To see to which extent the outcome of the simulation was affected by the limited number of nodes, we also did some simulations with 8, 27, and 64 nodes. For some systems, the radial distribution function had not flattened out at a distance of half the box size. We therefore also did simulations with 512 nodes. A more detailed overview of the calculations performed can be found in the Supporting Information.
Monte Carlo Protocol. A basic Monte Carlo simulation consists of doing a Monte Carlo step, which is a trial move in the parameter space, and an acceptance rule which determines whether or not to accept the move based on the change in (free) energy. In our case, the trial moves consisted of picking a number of nodes at random and moving them by one lattice site in a random direction. A node could be selected multiple times during a single Monte Carlo step and can thus also move multiple sites. The number of nodes that are moved is adjusted during the equilibration part of the simulation, such that the acceptance ratio is about 25%. After the nodes have been moved, the distribution of the polymers is calculated again and the new Helmholtz energy F_new is compared to the old Helmholtz energy F_old. The reason for using the Helmholtz free energy is that when a node is moved, not only the interaction energy changes but the conformational entropy of the polymers changes as well. If the new Helmholtz energy F_new is lower than the old Helmholtz energy F_old, the move is accepted. If it is higher, it is accepted with the probability

p_acc = e^(−(F_new − F_old))

At the start of the simulation, the nodes were placed in a simple cubic arrangement filling the whole cubic simulation volume. We aimed to do m = 40 000 Monte Carlo steps in each simulation. This is long enough for the system to equilibrate provided that the density remains homogeneous.
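A single trial move and the Metropolis acceptance rule described above might be organized as in the following sketch; the stand-in solve_scf callable (representing a full SF-SCF solve for a given node configuration), the random number generator, and the bookkeeping are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng()

def mc_step(node_centers, box, n_moves, f_old, solve_scf):
    """One hybrid MC-SCF trial: move n_moves randomly chosen nodes by one lattice site,
    re-solve the SF-SCF equations, and accept or reject with the Metropolis rule."""
    trial = node_centers.copy()
    for _ in range(n_moves):
        n = rng.integers(len(trial))                   # a node may be picked more than once
        axis = rng.integers(3)
        trial[n, axis] = (trial[n, axis] + rng.choice((-1, 1))) % box
    f_new = solve_scf(trial)                           # SF-SCF Helmholtz energy of the trial state
    # accept downhill moves, and uphill moves with probability exp(-(F_new - F_old))
    if f_new <= f_old or rng.random() < np.exp(-(f_new - f_old)):
        return trial, f_new, True
    return node_centers, f_old, False
```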
To demonstrate that the system is equilibrated well within 40 000 steps, we show the SF-SCF Helmholtz energy as a function of the number of Monte Carlo steps in Figure 3.
At first sight it may seem puzzling that the Helmholtz energy increases as the system relaxes. The entropy of the nodes is, however, not included in the Helmholtz energy presented in Figure 3. At the start, the nodes are in a highly ordered state. By distributing themselves more randomly over the volume, the entropy of the nodes is increased. This results in a lower Helmholtz energy for the system as a whole even though the Helmholtz energy of the polymer chains has increased. In principle, the entropy of the nodes can be calculated from the radial distribution function and higher-order particle correlation functions using Green's entropy expansion. 16 In our case, the three-particle correlation function was still rather noisy and it was therefore not possible to accurately determine the entropy of the nodes.
Coarse-Grained Simulation. To show that the hybrid Monte Carlo SF-SCF method describes the system better than Monte Carlo simulations with coarse-grained nodes, we performed Monte Carlo simulations with M = 125, f = 5, and χ = 0.4. In these simulations, the nodes with their polymers have been coarse-grained to a single particle. We used an effective interaction potential, calculated as described in our previous article, 9 as the interaction potential between these particles. To determine this effective pair potential, we first calculated the free energy per node for a simple cubic arrangement for different distances between the nodes. Subsequently, we calculated the effective pair potential so that it gives the correct free energy for all distances. The resulting potential is shown in Figure 4. The depth of the well is 0.33 k_BT.
Data Analysis. We calculated the radial distribution function of the nodes to see how much ordering there is in the system. This was done by splitting the range of possible interparticle distances into a number of subranges called bins. The width of these bins is dr. Next, we loop over all particle pairs and all m Monte Carlo steps and count how many particle pairs have an interparticle distance that falls in each bin b(r) = nint(r/dr), where "nint" indicates that the number is rounded to the nearest integer:

g(r) = [2V / (m M(M − 1) V_r)] ∑_(MC steps) ∑_(n<n′) δ_(b(|r_n − r_n′|), b(r))

where δ is one when the distance between nodes n and n′ falls in bin b(r) and zero otherwise.
In this equation, V is the volume in the number of lattice sites, V_r is the number of lattice sites that fall within the bin b(r) at radius r, and m is the number of Monte Carlo steps over which the radial distribution function is averaged. M is the number of nodes, and r_n is the position of node n.
To be able to compare the results of these simulations to experiments, we also calculated a structure factor S(ξ) based on the radial distribution function using

S(ξ) = 1 + 4πρ ∫_0^∞ [g(r) − 1] (sin(2πξr) / (2πξr)) r² dr    (16)

In this equation, g(r) is the radial distribution function, r the distance, ρ the number density of the nodes, and ξ the spatial frequency.
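The analysis chain can be summarized by the sketch below: the pair distances are binned into g(r) with the nearest-integer rule, g(r) − 1 is transformed into a structure factor, and the ξ → 0 limit gives the compressibility relative to an ideal gas. The minimum-image convention, the continuum estimate of the shell volume V_r, and the trapezoidal integration are our own implementation choices rather than the exact procedure of the paper.

```python
import numpy as np

def radial_distribution(positions_per_step, box, dr=1.0):
    """Histogram of node-node distances, normalized so that g(r) -> 1 for an ideal gas."""
    n_steps, M = len(positions_per_step), len(positions_per_step[0])
    n_bins = int((box / 2.0) / dr)
    counts = np.zeros(n_bins)
    for pos in positions_per_step:
        pos = np.asarray(pos, dtype=float)
        diff = pos[:, None, :] - pos[None, :, :]
        diff = diff - box * np.rint(diff / box)                         # minimum image on the periodic lattice
        dist = np.sqrt((diff ** 2).sum(-1))[np.triu_indices(M, k=1)]   # each pair counted once
        bins = np.rint(dist / dr).astype(int)                           # "nint" binning of the distances
        np.add.at(counts, bins[bins < n_bins], 1.0)
    r = np.arange(n_bins) * dr
    shell = 4.0 * np.pi * np.maximum(r, dr / 2.0) ** 2 * dr             # continuum stand-in for V_r
    ideal_pairs = 0.5 * M * (M - 1) / box ** 3                           # pair density of an ideal gas
    return r, counts / (n_steps * ideal_pairs * shell)

def structure_factor(r, g, rho, xi):
    """S(xi) from g(r), with the wavevector written as 2*pi*xi (eq 16 as reconstructed above)."""
    qr = 2.0 * np.pi * xi * r
    kernel = np.ones_like(qr)
    kernel[qr > 0] = np.sin(qr[qr > 0]) / qr[qr > 0]
    return 1.0 + 4.0 * np.pi * rho * np.trapz((g - 1.0) * kernel * r ** 2, r)

def relative_compressibility(r, g, rho):
    """Osmotic compressibility relative to an ideal gas: kappa/kappa_id = S(xi -> 0)."""
    return 1.0 + 4.0 * np.pi * rho * np.trapz((g - 1.0) * r ** 2, r)
```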
Because of the finite size of our system, the radial distribution function does not go to exactly unity for large distances. This can, for example, be seen in Figure 5, where the dotted curve for M = 125 stays just above unity. The explanation is that if a particle has an excluded volume, the volume remaining for the other M − 1 particles is a bit smaller and the radial distribution function will be a little bit higher than unity far away from the particle. Similarly, if the interaction between the nodes is attractive, the concentration close to the node will be higher and far away it will be a bit lower. In that case, the radial distribution function far away will be a bit less than unity. As a result, a peak shows up around ξ = 0 in the structure factor. As the osmotic compressibility is effectively determined by extrapolating the structure factor to zero, we need a way to suppress this peak at ξ = 0.
Recently, Dawass et al. 17 wrote an article comparing several methods for correcting some of these finite size effects.
The best method according to them was the method of Ganguly and van der Vegt. 18 They adjusted the radial distribution function at distance r based on the excess amount within distance r. To us this did not seem optimal, as the excess amount fluctuates considerably as a function of the distance. As a first-order approximation, the value of the entire radial distribution function will be increased due to the local excluded volume of a particle. We therefore think that a correction that is more uniform would be better at approximating the real radial distribution function. The most logical thing to do would thus be to multiply the radial distribution function by a small factor such that it goes to exactly 1 at long distances. For small simulation volumes, the radial distribution function is, however, not yet entirely flat at a distance of half the box size. It is thus not so easy to determine what the right correction factor is. Ideally, we get a smooth curve near a spatial frequency ξ = 0. If we, however, get it wrong, there is a significant spike in the structure factor close to ξ = 0. It therefore seemed reasonable to choose this correction factor such that the magnitude of the second derivative near ξ = 0 is minimal, although a different derivative may work as well. To determine whether our method works, we simulated two systems, one with hard spheres and one with the effective interactions we use in the coarse-grained simulation. We compared the corrected radial distribution function and compressibility of boxes with 512 particles to those of a simulation box with 13824 particles to see if our method would give a useful correction of the radial distribution function. The corrected radial distribution functions for the systems with 512 particles give Kirkwood−Buff integrals that deviate less than 15% from the value of the large system, while the uncorrected values deviated as much as 60%. For the hard sphere system with 512 particles, the value differs by about 5% from the theoretical value obtained with the K-equation of state, 19 and for the system with 13824 particles, our correction reduced the deviation from 5% to 1.5%. To our knowledge, this method has not been described in the literature and we hope to soon write a short communication in which we compare this method to other methods for correcting finite size effects. The values of the correction factors ranged from 0.987 to 1.008. With this corrected radial distribution function, we calculated the osmotic compressibility κ according to

κ = (1/(ρ k_BT)) [1 + 4πρ ∫_0^∞ (g(r) − 1) r² dr]    (17)

■ RESULTS AND DISCUSSION

In Figure 6, an example of the simulation volume is shown. The nodes are clearly visible as dark-red cubes with a slightly lighter core. Because of the steric repulsion between them, the polymers push each other away from the node and so drag their anchoring groups to the outside of the node. This results in a relatively low density within the core of the node. In Figure 5, we show the radial distribution function for systems with different numbers of nodes M and thus also different volumes. All other parameters have their default values. For M = 8 and M = 27, the radial distribution functions deviate significantly from the ones for M = 64 and M = 125, which are very similar. It thus seems that our default conditions using 125 nodes give results that do not deviate too much from an infinite system, although there is still some effect of the limited size of the simulation volume.
The system should not be much smaller, as the peak of the second coordination shell has barely ended at a distance equal to half the box size. With ϕ = 0.5 and f = 10, the radial distribution function shows peaks well beyond half the box size, and for this system as well as several others, we performed simulations with M = 512 nodes. This still is not optimal, but the computational costs are too high to simulate even larger systems.
The dependence of the radial distribution function on the overall polymer volume fraction is shown in Figure 7. As the polymer volume fraction is increased from ϕ = 0.125 to ϕ = 0.5, the peak of the radial distribution function shifts inward. Hence, at high concentrations, the polymers are pressed into each other as there is not enough space to place all nodes at their optimal distances. As the volume fraction is reduced, the distances between the nodes increase until the optimal distance is reached at a volume fraction of about ϕ = 0.125. There is no strong ordering in the sample, and only two relatively weak coordination shells are visible in the radial distribution function. The system is thus expected to behave in a liquidlike manner on time scales that are long compared to the relaxation time of an individual bridge.
At first sight, it may be surprising that the level of ordering of the nodes does not increase with increasing node concentration. One would expect that due to the strong steric repulsion the nodes would order themselves. Similar to polymeric solutions, however, the environment starts to look more like a polymer melt, as the polymer concentration is increased. The polymers are therefore distributed more homogeneously over the volume. As a result, the steric
hindrance experienced by the nodes will depend less on their position and the system can thus remain unordered.
As the polymer volume fraction is decreased from ϕ = 0.125, the peak of the first coordination shell rises, suggesting that the strength of the attraction increases. This is in line with our earlier finding that the interaction between two nodes depends on their surroundings. 9 As the system is diluted, the number of neighboring nodes decreases and the attraction with the remaining neighbors increases. For dilute systems, the binding energy can be estimated by taking the logarithm of the peak height of the radial distribution function. In this case, the height is about 2.7 for ϕ = 0.0078, which corresponds to a binding energy of roughly 1 k_BT. This binding energy and the position of the peak are the same as we found before. 9 Let us next consider the effect of solvent quality. In Figure 8, radial distribution functions are shown for different values of χ.
At a volume fraction of ϕ = 0.50, shown in Figure 8a, there is practically no difference between the different solvent qualities. At such a high polymer volume fraction, the swelling of the polymer corona does not significantly decrease the number of unfavorable interactions between the polymer segments because they would swell into the corona of the next micelle. The size of the micelles in a good solvent is therefore the same as that in a theta solvent, and the radial distribution function is therefore also practically the same. This is illustrated in Figure 9, where the interaction potential ΔF between two isolated nodes is plotted for different background polymer concentrations. At a background polymer volume fraction of ϕ b = 0.5, the curves for χ = 0.0 and χ = 0.5 are practically the same.
As the polymer volume fraction is decreased to ϕ = 0.25, the radial distribution functions for the different solvent qualities start to differ. The peaks of the radial distribution functions shift outward, most strongly for the good solvent. For the theta solvent, the radial distribution function is otherwise similar to that at ϕ = 0.50. For a good solvent, the height of the peaks increases, as the steric repulsion is strongest in a good solvent and it thus gives the most ordered structure.
When the volume fraction is lowered further to ϕ = 0.125, we observe that for χ = 0.5 the radial distribution function no longer goes to unity at large distances. This is most likely because phase separation occurs: the cross section of the gel in Figure 10 clearly shows a dilute and a concentrated region.
In addition, the first peak for χ = 0.5 is higher than the peak for χ = 0.0. For χ = 0.5, the interactions are now attractive and they become stronger as the gel becomes more dilute, while for χ = 0.0 there is still a net repulsion between the nodes which decreases as the gel becomes more dilute.
Finally, in Figure 8d, the polymer concentration has been lowered to ϕ = 0.0625 and now the radial distribution function does go to 1 for χ = 0.5. This, however, does not mean that the system is already below the lower binodal. At the start of the simulation, the nodes are distributed homogeneously over the volume. They will initially clump together in small clusters. These clusters, however, diffuse much slower than individual nodes. It will thus take a long time before all the clusters and nodes have aggregated by diffusion and Ostwald ripening and formed a dense phase. The simulation was therefore too short to fully equilibrate the system. The radial distribution function does show a slight dip at a distance of about 45 lattice sites, which is also visible for χ = 0.4. As the individual nodes and clusters diffuse around, they stick to other clusters. The concentration of micelles and other clusters near this cluster
therefore decreases, leading to a zone with a relatively low concentration. This process may happen not only for complete phase separation but also when the clusters have not yet reached their equilibrium size distribution. This is illustrated by the change in the radial distribution functions as the number of Monte Carlo steps is increased. The more Monte Carlo steps have been taken, the further out the dip lies and the deeper it becomes. The system is thus not equilibrated within the simulated number of Monte Carlo steps. At the end of the simulation, there is also a clear void visible within the gel.
It is possible to improve the rate at which the system equilibrates by occasionally making Monte Carlo moves that displace micelles over large distances. We, however, intended to study the homogeneous phases of these micellar solutions and therefore did not implement such large Monte Carlo moves.
At a volume fraction of ϕ = 0.0625, the distance between the nodes is so large that, for all χ, the interactions are no longer repulsive at the average intermicelle distance. The peak for χ = 0.4 is therefore higher than that at χ = 0.0 because the height of the peaks is now determined by the strength of the attraction between the micelles. Now we turn to the effect of the number of polymers per node f, as shown in Figure 11. For f = 2.5, the radial distribution function has just one peak just like a gas. In contrast, there are many peaks visible for f = 10. For the highest concentration ϕ = 0.5, these peaks occur beyond half the box size. It is therefore likely that in this case the radial distribution function is still influenced by the size of the simulation volume. A striking difference between f = 10 and the lower functionalities is that the height of the peaks increases as the concentration is increased from ϕ = 0.25 to ϕ = 0.50. This suggests that as the number of polymers increases, the micelles will behave more like hard particles and crystallization may be possible for nodes with even more polymers. Figure 11d shows that, for f = 10, the radial distribution function drops a bit below unity at large distances, although the deviation is not as large as in Figure 8c. There are some interconnected cavities visible within the gel. It is therefore possible that this gel will also undergo phase separation even though this is not yet clearly visible. The number of Monte Carlo steps taken is relatively small, and the gel may not have had enough "time" to phase separate. Now that we have discussed the radial distribution functions for different parameters, we can compare them with the radial distribution functions calculated with Monte Carlo simulations in which we coarse-grained the nodes as single particles. In Figure 12, radial distribution functions from the MC-SCF simulations and the Monte Carlo simulations with effective pair potentials are shown.
At high densities (see Figure 12a), the MC simulation with effective pair potential gives much sharper peaks than the MC-SCF model. This is probably caused by an overestimation of the repulsive force between the particles. When two nodes approach each other closely, the polymers can move out of the way if there are no other particles nearby. However, if the nodes have many close-by neighbors, the polymers can not move out of the way and the repulsion is thus stronger. The MC-SCF model can distinguish between these cases. A pairwise interaction, however, cannot, and instead an assumption has to be made about the surroundings of the nodes. In the way we determined the effective pair potential, it is assumed that the other nodes are at the same distance from the interacting nodes as the interacting nodes are from each other. When a particle is closer than the typical distance between a particle and its nearest neighbors, the average distance to the other nodes is underestimated and the repulsive force is too strong. Because of this, the nodes cannot approach each other as closely as in the MC-SCF model and therefore appear as harder particles, resulting in the sharper peaks.
At low concentrations, as seen in Figure 12c, the opposite problem arises. Here the peak of the first coordination shell is much higher for the MC-SCF model. Not only does the
effective pair model overestimate the repulsion between the particles, it also underestimates the attraction. When a node already has many neighbors, adding another one increases the number of polymer conformations relatively little compared to the total number of potential conformations. If instead a node has no neighbors, the relative increase in the number of polymer conformations is much larger. The change in free energy when a neighbor is added will therefore be larger when a node has fewer neighbors. The attraction at low concentration will therefore be stronger. With the effective pair potential, it is assumed that there are neighboring nodes at the same distance as the interacting nodes. This results in an underestimation of the attraction at low concentration.
At intermediate concentrations (Figure 12b), the effective pair interaction gives roughly the same radial distribution function as the MC-SCF model, although the repulsion between the micelles is still overestimated at short ranges.
On the basis of these calculations, it is clear that a Monte Carlo simulation with a single pair potential does not correctly describe the behavior of the nodes at a wide range of concentrations, although some improvement should be possible as the short-range repulsion appears too strong at all concentrations. An option would be to use a custom potential for each density. For systems in which the density remains homogeneous, this would be an improvement. If the density is, however, not homogeneous, the result would be even worse than with the effective pair potential we have used here.
To validate our MC-SCF simulations, we need to make predictions that can be compared to experimentally obtained results. We therefore determined the structure factor and the osmotic compressibility κ according to eqs 16 and 17.
The structure factors are shown in Figure 13 and the compressibility is plotted in Figure 14. Close to spatial frequency ξ = 0, the uncertainty in the structure factor is relatively large. As the structure factor near ξ = 0 is closely related to the compressibility, the accuracy with which the compressibility can be calculated is also limited.
In two of the presented cases, the structure factors are negative at ξ = 0. For systems in equilibrium, this is physically unrealistic and it most likely results from the limited size of our simulation volume. In several other cases, increasing the number of nodes from 125 to 512 caused the structure factor to become positive. This would probably also be the case for these two systems if we could run the simulations with more Monte Carlo steps and nodes.
The effect of the number of polymers per node f on the osmotic compressibility (eq 17) is shown in Figure 14a. The values shown are relative to the compressibility of an ideal gas with a particle concentration that is the same as the concentration of nodes in our simulations. At high polymer volume fractions, the steric repulsion of the polymer coronas prevents the nodes from coming close to each other and the osmotic compressibility is therefore much smaller than that of an ideal gas. At low concentration, the attraction causes the nodes to form clusters and the osmotic compressibility is therefore higher than that of an ideal gas because the number of freely moving particles is reduced. The relative compressibility gives a lower limit for the number of nodes that form a cluster. For χ = 0.4, f = 5, and ϕ = 0.03, the average cluster size should be, for example, at least 3.
For ϕ = 0.5, the nodes with the fewest polymers per node seem to have the highest relative compressibility. The confidence intervals, however, still overlap so the difference is not significant. If we had instead looked at the real compressibility, the order would probably be reversed. The higher the number of polymers per node, the higher the concentration of polymers close to the nodes, while the polymer concentration halfway between the micelles is lower and thus the steric repulsion will be lower as well.
As the concentration is lowered, the order quickly changes. Because the nodes with more polymers have more attraction between them and thus form larger clusters, these systems are more compressible at low volume fractions.
In Figure 14b, the compressibility as a function of the Flory−Huggins parameter χ is shown. As expected, the system with χ = 0 is the least compressible; as the corona swells the most, the steric repulsion is the strongest for this case. Above, we concluded, on the basis of the radial distribution function, that phase separation occurs for the combination of f = 5 and χ = 0.5. According to theory, the compressibility should therefore go to infinity. Our system, however, has a limited number of particles and therefore the value the compressibility can reach is limited. Furthermore, we used the entire radial distribution function to calculate the compressibilities. This, however, includes the part of the radial distribution function
far away from the particle where it is below unity. This lowers the calculated value of the compressibility even further. The values obtained for χ = 0.5 and ϕ = 0.13−0.03 are thus incorrect and therefore not shown in Figure 14b.
One of the experimental studies in literature to which we can compare our results is by Filali et al. 1 They investigated a system of swollen surfactant micelles to which they added PEO polymers with hydrophobic end groups. Under the conditions used, the Flory−Huggins parameter for PEO is between χ = 0.4 and χ = 0.5. 20,21 Although the polymers had about 120 Kuhn segments and were thus longer than the polymers we simulated, our results should show a fairly good match, as the effective pair potential is almost identical for 50 and 100 segments when the distance from the core is rescaled. 9 In addition, the core of the micelles is larger than our nodes. However, compared to the volume of the coronas, the cores are still relatively small. The experimental results should therefore be in between our results for χ = 0.4 and χ = 0.5.
Filali et al. 1 observed phase separation for f ⩾ 6, which is not much higher than the f = 5 for which we observed phase separation with χ = 0.5. In this respect, their results fit nicely with our findings.
The authors did not report the structure factor separately but did show the total scattering intensity. As the form factor goes to unity for small ξ, we should be able to make a qualitative comparison between our structure factor and their scattering intensities at small ξ. To get realistic values for our spatial frequency, we need to choose a value for the lattice size. We chose a value of 7.4 Å because this coincides with the Kuhn length of PEG. 22 On the basis of the Daoud–Cotton model, 23 the system of Filali et al. with an oil droplet volume fraction of 7% should best match our simulations with a polymer volume fraction of ϕ = 0.03. When going from large ξ to small ξ, there is in both cases a dip after the peak, indicating the average distance between the nearest neighbors, followed by a steep increase. These features are less pronounced in the experiments than in our simulations. This is probably because we used f = 5, while the experimental system had on average four polymers per micelle. For higher concentrations, the structure factors do not match because the surfactant micelles in the experimental system are charged and repel each other at these concentrations.
Francois et al. published several experimental articles on telechelic polymers with PEO middle blocks. 24,25 In contrast to us, they found cubic phases at high concentrations using X-ray scattering. 24 Because the number of polymers per micelle was not reported, a one-to-one comparison with our simulations is difficult. Probably the number of polymers per micelle is higher than in our simulations. This may explain why they observed a cubic phase; it is corroborated by the fact that for longer middle blocks, for which the number of polymers per micelle is lower, crystallization was not found.
Another factor that may have contributed is that at least 10% of their polymers had only one functionalized end, which increases the repulsion between the micelles. They also observed phase separation for systems with relatively short middle blocks (PEO M_w ⩽ 6000 g/mol) but not for polymers with long middle blocks (M_w ⩾ 10000 g/mol). As explained before, systems with longer polymers have fewer polymers per micelle and therefore less entropic attraction due to bridge formation.
Sprakel et al. studied the rheological and phase behavior of solutions of the same type of telechelic polymers, both experimentally 4 and with computer simulations. 8,26 In contrast to our simulation, they did not observe phase separation in their experimental study. 4 Although the number of polymers per micelle was not reported, it was probably larger than 6, which was estimated by Filali et al. to be the lower boundary for phase separation. A possible explanation for not observing phase separation is that about 10% of the polymers had only one associative end group. This increases the steric repulsion between the micelles, and the net attraction, which causes the phase separation, is thus reduced.
In a second paper, Sprakel et al. 26 addressed the phase behavior of the system with a SF-SCF model in which the micelle is modeled in a 1D spherically symmetric system with a reflecting boundary condition. The number of polymers per micelle was not fixed, but instead the grand potential was optimized to determine f. Phase separation was predicted for all the combinations of middle block and end-block lengths they used in their study. The minimum number of polymers per micelle they found was about eight, but their polymers were much longer than those in our study. As they used χ = 0.5, all their systems lie above the line from f = 10 with χ = 0.4 to f = 5 with χ = 0.5. Their predictions are thus in line with what we found here.
In a third study, 8 they coarse-grained the micelles to single particles. In this case, no phase separation was found. They do not mention the Flory−Huggins parameter, but they wanted to reproduce the above-described experimental system 4 and the value of χ should thus be between 0.4 and 0.5. Given that the simulated micelles have f = 25 polymers each, phase separation would be expected based on our results. However, the interaction potentials in the coarse-grained model do not take into account that the attraction will increase with a decreasing number of neighbors. 9 Moreover, a relatively small well depth of 0.38 k_BT was used, comparable to the well depth we found for f = 5 and χ = 0.4 (about 0.34 k_BT). Considering that the well depth for an isolated pair of micelles roughly scales with f^0.5, 9 the well depth expected for this system with f = 25 would be about twice as large. These combined factors explain why they did not find phase separation in the coarse-grained modeling.
■ CONCLUSION AND OUTLOOK
We successfully combined a Monte Carlo algorithm with the Scheutjens−Fleer self-consistent field theory. With it, we were able to calculate the radial distribution function, structure factor, and compressibilities for solutions and gels of ABA triblock copolymers with varying properties over a range of densities. For f ≤ 5 polymers per node, we found, somewhat counterintuitively, that as the polymer volume fraction ϕ increases from ϕ = 0.25 to ϕ = 0.5 the amount of ordering in the system is decreased. We argue that this is because at high volume fractions the background concentration of the polymers of the other nodes becomes more homogeneous. The amount of steric repulsion therefore depends less on the position of the node. We further discovered that for χ = 0.5 and f ≥ 5, phase separation occurs. We were, however, not able to determine the compositions of the coexisting phases as the number of simulated particles was small and there should thus be a considerable effect due to the interface. Simulating such a large volume that the effects of an interface would be negligible would take far too much computation time. To avoid the effect of the interface, the Gibbs ensemble 27 can be used. Because we used a lattice, we can, however, not change the volume by
arbitrarily small steps but only by one lattice layer at a time. The larger the simulation volume, the larger the change in volume and thus in free energy will be. The chance that an exchange in volume would be accepted would therefore become smaller for larger and larger systems. This limits the system size we can use in combination with the Gibbs ensemble. Instead, it may be possible to "simulate" a Gibbs ensemble by simulating two volumes which would be representative for the larger volumes of the simulated Gibbs ensemble. By moving particles in and out of the simulated volumes, the density could be adjusted to that of the simulated volumes of the Gibbs ensemble. To our knowledge, such an approach has not been described in the literature yet. Another approach would be to coarse-grain the micelles while maintaining the dependence of the interaction potential on the surroundings of the interacting micelles. This method would also enable us to study dynamic properties of the system.
The limited system size may have affected some of our simulations at high polymer volume fractions, where the radial distribution function had not completely flattened out by half the box size. The next generation of GPUs, however, promises to have 10 times more computation power than those we used. This allows larger simulation volumes and more Monte Carlo steps, making a Monte Carlo SCF hybrid model a feasible tool for future studies. By comparing the results of coarse-grained Monte Carlo simulations with those of the hybrid MC-SCF model, we have shown the shortcomings of using only one pair potential to describe the interactions between the nodes. The MC-SCF hybrid method is therefore a useful tool to model systems of flower-like micelles and telechelic networks over a wide range of concentrations.
A case of septicaemic anthrax in an intravenous drug user
Background In 2000, Ringertz et al described the first case of systemic anthrax caused by injecting heroin contaminated with anthrax. In 2008, there were 574 drug-related deaths in Scotland, of which 336 were associated with heroin and/or morphine. We report a rare case of septicaemic anthrax caused by injecting heroin contaminated with anthrax in Scotland. Case Presentation A 32 year old intravenous drug user (IVDU) presented with a 12 hour history of increasing purulent discharge from a chronic sinus in his left groin. He had a tachycardia, pyrexia, leukocytosis and an elevated C-reactive protein (CRP). He was treated with Vancomycin, Clindamycin, Ciprofloxacin, Gentamicin and Metronidazole. Blood cultures grew Bacillus anthracis within 24 hours of presentation. He had a computed tomography (CT) scan and magnetic resonance imaging (MRI) of his abdomen, pelvis and thighs performed. These showed inflammatory change relating to the iliopsoas and an area of necrosis in the adductor magnus. He underwent an exploration of his left thigh. This revealed chronically indurated subcutaneous tissues with no evidence of a collection or necrotic muscle. Treatment with Vancomycin, Ciprofloxacin and Clindamycin continued for 14 days. A Negative Pressure Wound Therapy (NPWT) device was applied utilising the Venturi™ wound sealing kit. Following 4 weeks of treatment, the wound dimensions had reduced by 77%. Conclusions Although systemic anthrax infection is rare, it should be considered when faced with severe cutaneous infection in IVDU patients. This case shows that patients with significant bacteraemia may present with no signs of haemodynamic compromise. Prompt recognition and treatment with high dose IV antimicrobial therapy increases the likelihood of survival. The use of simple wound therapy adjuncts such as NPWT can give excellent wound healing results.
Background
Anthrax is believed to be the cause of the 5th plague described in the book of Exodus (chap 9:3). It is a zoonosis affecting most herbivores; however, transmission from human to human has never been documented [1][2][3]. It is caused by Bacillus anthracis, a gram-positive spore-forming bacillus. Infection occurs when Bacillus anthracis endospores enter the body either through breaks in the skin, ingestion or inhalation. Anthrax characterisation is based upon its original mode of transmission: cutaneous, gastrointestinal and inhalational. Anthrax is derived from the Greek word for coal, after the black skin lesions seen in its cutaneous form [1].
Anthrax remains relatively rare with between 20,000 and 100,000 cases occurring in the world annually [4]. It is predominantly related to occupational exposure as seen in farmers, veterinarians and people handling wool [5]. Only a handful of anthrax cases have been seen in Britain over the last decade. In 2006, a fatal case of inhalation anthrax was reported in the Scottish borders [6]. This was the first case of anthrax seen in nearly 30 years. In 2000, Ringertz et al described the first case of systemic anthrax caused by injecting heroin contaminated with anthrax [7].
In 2008, there were 574 drug-related deaths in Scotland, of which 336 were associated with heroin and/or morphine [8]. Of the 574 deaths, 197 (34%) occurred in Greater Glasgow & Clyde. NHS Greater Glasgow and Clyde covers the city of Glasgow and a population of approximately 1.2 million [9]. In 2000, it was estimated that 6809 people in Glasgow were intravenous drug users (IVDU) [10]. Due to the risky behaviour associated with illicit drug use, mortality rates in IVDU patients have been estimated to be 12-22% higher than in an age-adjusted population [11][12][13].
In December 2009, the first case of fatal anthrax relating to intravenous drug abuse was documented in the British media [14]. In total there were 47 confirmed cases and 13 anthrax-related deaths during this outbreak [15,16]. Systemic anthrax infection is associated with high mortality rates, with 45% of patients with inhalation anthrax ultimately dying from the infection [17,18]. Cutaneous anthrax infection, however, is not associated with high mortality rates [19].
We present an interesting case of septicaemic anthrax, in a patient with no evidence of septic shock, caused by injecting heroin contaminated with anthrax in the UK. In our case report, we discuss the clinical and pathological aspects of the case, the use of wound therapy adjuncts in promoting wound healing and review the current literature.
Case Presentation
A 32-year-old male who was a known IVDU presented with a 12-hour history of increasing swelling in his left leg. He had a purulent discharge from a chronic sinus in his left groin.
On examination, he had warm erythematous sinuses in both groins. The left sinus was discharging foul-smelling pus. He had a tachycardia of 120 bpm, a pyrexia of 38.2°C and a blood pressure (BP) of 136/86 mmHg. He was alert and orientated to time, place and person. Blood tests revealed a white cell count (WCC) of 15.9 × 10⁹/L and a C-reactive protein (CRP) of 21 mg/dl, and blood cultures were taken. He was treated with the broad spectrum antibiotics Vancomycin, Clindamycin, Ciprofloxacin, Gentamicin and Metronidazole, in line with the latest advice from microbiology, because of a recent outbreak of anthrax in the local IVDU population. Within 24 hours of collection, all 6 blood culture bottles grew Gram-positive organisms suggestive of Bacillus anthracis, and he was referred for consideration of surgery. The cultured Bacillus anthracis organism was susceptible to all the above antibiotics. Clinically there was no abscess to drain, so he had a computed tomography (CT) scan of his abdomen, pelvis and thighs performed. This showed loculated fluid and inflammatory change anterior to the left psoas muscle extending down to the iliacus [Figure 1]. In the pelvis, the left pectineus and adductor magnus were oedematous, possibly reflecting muscle necrosis [Figure 2].
Given that he was clinically well, his conservative management and monitoring continued. Over the next five days there were signs of improvement, with a reduction in the inflammatory markers and no evidence of organ dysfunction. In consultation with colleagues in microbiology, infectious diseases and public health, a further CT scan was performed to ensure that there was no collection to drain, as the limited local experience suggested that drainage and debridement of collections and necrotic tissues resulted in optimal management. This scan showed that the retroperitoneal and thigh changes had improved slightly. Within a further 24 hours he had developed an area of cellulitis over the left thigh and lower abdomen, and magnetic resonance imaging (MRI) was carried out. This showed diffuse oedematous change in the muscles and subcutaneous fat of the left leg from the level of the inguinal region to below the knee. After contrast, there were areas which did not show evidence of contrast enhancement, with an irregular enhancing margin. The absence of discrete high signal on the STIR images was thought to represent areas of non-perfused musculature, possibly undergoing necrotic change [Figure 3].
He underwent an exploration of his left thigh. This revealed chronically indurated subcutaneous tissues in keeping with his long history of injecting into the area. There was some oedematous fluid but no evidence of a collection or necrotic muscle. Post-operatively he made a steady recovery.
Treatment with Vancomycin, Ciprofloxacin and Clindamycin continued for 14 days. A Negative Pressure Wound Therapy (NPWT) device was applied utilising the Venturi™ wound sealing kit. This followed the therapy application technique first described by Chariker et al, using saline-moistened gauze, a silicone drain and a clear, semi-permeable adhesive film, together with lower pressures, usually between 60 and 80 mmHg [15]. The dressings were changed three times per week and the wound exudate collection canisters were changed as required when full. He was discharged on a 4 week course of oral Ciprofloxacin 400 mg twice a day and continued with NPWT. Following 4 weeks of treatment, the wound dimensions had reduced from 300 cm³ [Figure 4] to 68 cm³ [Figure 5], a reduction in size of 77%. He remains well and attends regular review at the outpatient clinic.
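For clarity, the quoted percentage follows directly from the two measured wound volumes:

$$\frac{300 - 68}{300} \times 100\% \approx 77\%.$$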
Conclusions
We present an interesting case of septicaemic anthrax in an intravenous drug user who, despite having B. anthracis bacteraemia and severe cutaneous infection, displayed no evidence of septic shock and made an uneventful recovery following IV antibiotic therapy and surgical debridement. This case was part of a large outbreak of septicaemic anthrax infection in an IVDU population, comprising 47 patients with a mortality rate of 27% [15].
Mortality associated with systemic anthrax infection is well known to be high, with 40% of the patients in the 2001 bioterrorist attacks dying from the infection [17]. Of the 10 patients in that series, 7 presented with severe sepsis and grew B. anthracis on blood culture. Of the 7 patients with bacteraemia, 4 died of acute circulatory collapse. This suggests that the presence of positive blood cultures foretells a high risk of mortality.
Doganay and colleagues described a series of 22 patients presenting with cutaneous anthrax infection over a 7 year period [19]. Of these, 10 patients had severe infection and 2 patients suffered from septic shock. In contrast to our case, all patients with positive blood culture had evidence of severe infection and septic shock. Of interest, none of the patients in their series died from anthrax.
It is believed this outbreak resulted from exposure to a batch of contaminated heroin. This is suggested by the geographical distribution and the occurrence of contemporaneous outbreaks within the city. McGuigan et al reported a similar outbreak in 2000; however, the aetiological agent is thought to have been Clostridium novyi [13]. Street heroin in Britain is thought to originate from Afghanistan [13,20]. Non-sterile methods for its production, 'cutting' and transportation increase the risk of contamination.
The lethal outbreak of Clostridium novyi in 2000 prompted the production of consensus guidelines for treating severe cutaneous infection in IVDU patients. In the case series reported by McGuigan et al, the commonest organisms seen were Clostridium novyi, Staphylococcus aureus, group A beta-haemolytic streptococci and anaerobes. Based on this finding, they suggested using an antimicrobial regimen of Flucloxacillin, Benzylpenicillin, Gentamicin, Clindamycin and Metronidazole [13].
In 2001, Swartz published guidelines for the recognition and management of anthrax. He suggested using ciprofloxacin 400 mg IV combined with a penicillin for inhalation and severe cutaneous anthrax infection [21]. His recommendations were based on consensus guidelines published by Inglesby et al. The efficacy of ciprofloxacin in anthrax had been poorly studied in humans; however, in animal models excellent recovery was demonstrated despite the lack of an immune response [22]. This finding led to the recommendation that all patients surviving anthrax should continue antibiotic therapy for 60 days.
The evidence published by McGuigan et al suggested that approximately 20% of the patients who were classified as having a 'definite' diagnosis of Clostridium novyi grew an alternative organism. This suggests that, even during a recognised outbreak, the diagnosis of a particular infection based on clinical parameters alone is unreliable.
All IVDU patients presenting with severe cutaneous infection were treated with Benzylpenicillin, Flucloxacillin, Gentamicin, Clindamycin, Ciprofloxacin and Metronidazole. In the case of penicillin allergy, Vancomycin was used in place of Benzylpenicillin and Flucloxacillin. Once a causative agent was found, the antimicrobial therapy was rationalised. The diagnosis of anthrax was confirmed by isolation of Bacillus anthracis in early blood cultures in some patients. This was supported by PCR testing of blood or excised tissues at the Health Protection Agency (HPA) Special Pathogens Reference Unit (SPRU) at Porton Down.
NPWT is a non-invasive technology comprising a negative pressure pump connected by a tube to a dressing that occupies the wound cavity. The dressing is sealed to the peri-wound skin using an adhesive film. This provides a closed system, so that negative (subatmospheric) pressure is generated at the wound/dressing interface. NPWT is suitable for acute, chronic and traumatic wounds as an adjunct to surgery [23]. Despite NPWT appearing in the literature for over 50 years, the physiological and molecular biological mechanisms by which NPWT accelerates wound healing remain largely unknown [24].
Although systemic anthrax infection is rare, it should be considered when faced with severe cutaneous infection in IVDU patients. The presence of Bacillus anthracis on blood culture is normally associated with septic shock and high rates of mortality; however, our case suggests that these patients may not exhibit haemodynamic compromise despite having significant bacteraemia. Prompt recognition and treatment with high dose IV antimicrobial therapy increases the likelihood of survival. Adherence to antimicrobial prescribing guidelines reduces the possibility of missing other potentially lethal organisms. The use of simple wound therapy adjuncts such as NPWT can give excellent wound healing results over a relatively short period.
Consent
Written informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.
Hip Hemiarthroplasty: The Misnomer of a Narrow Femoral Canal and the Cost Implications
Objective: Hemiarthroplasty has been identified as the treatment of choice for displaced intracapsular femoral neck fractures. A modular prosthesis is sometimes preferred for its sizing options in narrow femoral canals, despite its higher cost and no advantage in clinical outcomes. Thus, in this study, we investigated the factors affecting surgeons' choice of prosthesis, hypothesizing that modular hemiarthroplasty is overused for narrow femoral canals compared to monoblock hip hemiarthroplasty.

Methods: A retrospective study at a regional level 1 trauma center was conducted. Patients who had sustained femoral neck fractures from March 2013 to December 2016 were included in this study. The inclusion criterion was modular hemiarthroplasty for a narrow femoral canal. A matched group of patients who underwent monoblock hemiarthroplasty was created through randomization. The main outcome measurements were sex, age, Dorr classification, and femoral head size. We measured the protrusion of the greater trochanter beyond the level of the lateral femoral cortex postoperatively. Modular hemiarthroplasty patients were templated on radiographs using TraumaCad for the Stryker Exeter Trauma Stem (ETS®).

Results: In total, 533 hemiarthroplasty procedures were performed, of which 27 were modular for a narrow femoral canal. The ratio of modular to monoblock was 1:18. Average head size was 46.7 mm ± 3.6 mm for monoblock and 44.07 mm ± 1.5 mm for modular (P = 0.001). There were four malaligned stems in the monoblock group versus 14 in the modular group (P = 0.008). Unsatisfactory lateralization was noted in 18 patients (7 mm ± 2.9 mm) in the modular group compared with 8 (4.7 mm ± 3.9 mm) in the monoblock group (P = 0.029). Dorr classification was A or B in 24 patients in the modular group and 18 in the monoblock group (P = 0.006). Templating revealed that a modular stem was not required in 25 patients.

Conclusions: Our findings suggest that patients perceived intraoperatively to have a narrow femoral canal should not automatically be converted to modular hemiarthroplasty. This is especially true for female patients with small femoral head and narrow femoral canal dimensions (Dorr A and B), who require careful preoperative planning. Surgical techniques should be explored through intraoperative education to achieve lateralization during femoral stem preparation. This may avoid prolonged anesthetic time and achieve potential cost savings.
Introduction
The National Hip Fracture Database (NHFD) reports that 66,313 patients presented with hip fractures in 2018 in the UK, with an estimated cost of £2 billion per year [1][2][3][4]. This is expected to rise with an aging population [4]. Hemiarthroplasty has been identified as the treatment of choice for elderly patients with displaced intracapsular femoral neck fracture, as outlined by the National Institute for Health and Clinical Excellence (NICE) [1][2][3][4]. NICE also recommended the use of a stem with a known track record [3,4]. Modular stems are sometimes favored over single-size monoblock hemiarthroplasty because of the different sizing options they offer to patients with a narrow femoral canal. However, the modular implant is more expensive [4], thus increasing the costs of both inventory and instrumentation sterilization. This has recently been highlighted by the NHFD as well as by Getting It Right First Time, as the implant cost alone amounts to £10.6 million per year.
Our unit uses the monoblock Exeter Trauma Stem (ETS®) (Stryker Corp.; Kalamazoo, MI, USA) femoral stem, size 1.5, with an offset of 40 mm. However, when patients are suspected during broaching to have a narrow femoral canal, the standard monoblock ETS® prosthesis is converted to a modular V40® stem (Stryker) with a bipolar head. Modular hemiarthroplasty has been shown to provide no added clinical benefit in patient-reported outcome measures, with studies suggesting that it functions biomechanically as a monoblock, with the inner bearing losing its mobility, and that there is no difference in acetabular wear resulting in reoperation [5][6][7][8][9]. The use of monoblock versus modular prostheses has long been debated. Recent trials have reported no differences in terms of health outcomes between another traditional design, the Thompson™ Hemi Hip Stem (DePuy Synthes Inc., Warsaw, IN, USA) monoblock hemiarthroplasty, and a modern cemented hemiarthroplasty, as demonstrated in the WHiTE 3:HEMI randomized controlled trial and various systematic reviews [9].
In this study, we aimed to investigate the factors affecting surgeons' choice of a modular monopolar hip hemiarthroplasty and to determine whether this choice was justified. We hypothesized that there is no difference between modular and monoblock hip hemiarthroplasty in relation to the size of the femoral canal. We believe femoral neck fractures in osteoporotic bone do not require smaller-sized stems and, therefore, do not require the use of a modular stem for sizing reasons. We also hypothesized that the variation in routine practice has implications for implant cost.
Materials and Methods
We retrospectively reviewed all patients who sustained a femoral neck fracture and were admitted to our major trauma center from March 2013 to December 2016. We identified all patients who received modular hemiarthroplasty and, for comparison, a patient-matched group who underwent monoblock hemiarthroplasty. The comparison patients were chosen at random from the monoblock cohort using a random number generator and were found to have similar demographics to the modular group; the study design was therefore a matched case-cohort study. We included patients who underwent modular hemiarthroplasty for the indication of a narrow femoral canal. We then retrospectively reviewed the operative notes for demographics, grade of surgeon, and femoral head size. We reviewed all radiographs, classifying them using the Dorr classification and measuring the thickness of the greater trochanter radiologically. On postoperative films, the thickness of the greater trochanter was defined as the bony prominence protruding beyond a line drawn at the medial border of the lateral femoral cortex, as illustrated in Figure 1. Finally, we templated all modular hemiarthroplasty cases using TraumaCad software (Brainlab AG, Munich, Germany) with the Stryker ETS® template on the contralateral hip using the preoperative pelvic X-ray, as illustrated in Figure 2, fitting an ETS® prosthesis incorporating a 2-mm cement mantle.
Statistical analysis
Data were collected using the Medway patient administration and electronic patient record (PAS/EPR) system (System C Healthcare; Maidstone, Kent, UK), and radiographs were reviewed on the Kodak CARESTREAM picture archiving and communication system (Eastman Kodak, Inc./Carestream Health; Gland, Switzerland). Data analysis was performed using GraphPad Prism version 7 for Windows (GraphPad Software, San Diego, CA, USA). The Shapiro-Wilk test was used to determine normality, with parametric data analyzed using Student's t-test and categorical data using Fisher's exact test. Differences with a p-value less than 0.05 were considered statistically significant. This study did not require ethical approval.
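As an illustration of this workflow, a minimal SciPy sketch is given below; the head-size arrays are hypothetical placeholders rather than the study data, while the 2x2 table uses the stem alignment counts reported in the Results.

```python
from scipy import stats

# Hypothetical head-size samples (mm); placeholders, not the study data
mono_heads = [48, 46, 44, 50, 47, 45, 49]
modular_heads = [44, 43, 45, 44, 46, 42, 44]

# Shapiro-Wilk test for normality of each sample
print(stats.shapiro(mono_heads), stats.shapiro(modular_heads))

# Student's t-test for the difference in mean head size
print(stats.ttest_ind(mono_heads, modular_heads))

# Fisher's exact test on a 2x2 table: malaligned vs aligned stems,
# using the counts reported in the Results (4/23 vs 14/13)
print(stats.fisher_exact([[4, 23], [14, 13]]))
```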
Results
Of the 553 monoblock hemiarthroplasties (female, n = 354, 64.0%) and 30 modular hemiarthroplasties (female, n = 27, 100%) (P = 0.0007) performed between March 2013 and December 2016, 3 modular cases were excluded because they were performed for a different indication. After applying the inclusion criterion, the indication of a narrow femoral canal, 27 patients were included. This group was then studied in more detail and compared to a randomly matched patient group. The results are shown in Table 1. The ratio of modular to monoblock hemiarthroplasty was 1:18 (or 5.6%). The average head sizes for the monoblock stem and modular stem were 46.7 mm ± 3.6 mm and 44.07 mm ± 1.5 mm (P = 0.001), respectively. There were four malaligned stems in the monoblock group versus 14 in the modular group (P = 0.008). In the monoblock group, 8 patients (4.7 mm ± 3.9 mm) had unsatisfactory lateralization, compared with 18 (7 mm ± 2.9 mm) in the modular group (P = 0.029). There were significantly more patients with Dorr A and B classification in the modular group (n = 24) than in the monoblock group (n = 18) (P = 0.006).
We further analyzed the modular hemiarthroplasty subgroup. We found no significant association (P = 0.237) between malalignment of the femoral stem and lack of rasping of the lateral wall. We also templated all radiographs with TraumaCad to identify whether these patients were suitable for an ETS® implant, as illustrated in Figure 2. We found that all patients had a sufficient femoral canal to accommodate an ETS® implant, with only two patients (7.4%) not achieving the minimum 2-mm cement mantle with an appropriately aligned insert.
Discussion
In total hip arthroplasty patients, implant survival at 10 years has been shown not to differ between patients treated with a thin cement mantle and those treated with the standard technique with a 2-mm mantle [10]. The thin-mantle technique involves reaming the canal to the same size as the prosthesis. Hence, we do not think that achieving a 2-mm cement mantle is crucial, particularly in this cohort of patients.
The retrospective radiologic templating we conducted gives insight into the misconception of requiring a smaller-sized stem in patients with femoral neck fracture. Although we do not advocate templating as a routine practice, our study demonstrates that Dorr A and B female patients are more likely to be mistakenly determined to require a smaller-sized femoral stem and converted to a modular stem.
In the UK, hemiarthroplasty for femoral neck fractures is commonly performed by junior surgeons in training under consultant supervision. Surgical technique in achieving good lateralization is often neglected, as illustrated by malaligned stems, often into varus (P = 0.008), as shown in Table 1. We, therefore, encourage the teaching of good surgical technique in femoral preparation, with sufficient lateralization, during young surgeons' early years of training.
In 2017, 9.5% of all hip fractures (both cemented and uncemented) in which the indications were not reported were managed with modular hemiarthroplasty, whereas 33.5% were managed with unipolar hemiarthroplasty, a ratio of 1:3.5. Despite similar clinical outcomes between monoblock and modular hemiarthroplasty reported in a recent meta-analysis and randomized controlled trials [4][5][6][7][8][9], with respect to monopolar or bipolar hemiarthroplasty, the NICE guidance on hip fracture management does not favor one over the other [6]. The four-year surveillance (2015) of the NICE hip fracture guideline CG124 (2011) [1] has, however, addressed the lower cost of the monopolar prosthesis, with no difference in outcomes between monopolar and modular. In the 2019 NHFD report, the costs of cemented hemiarthroplasty implants differed: £277 for the ETS® and £747 for the Exeter V40 (stem + head + bipolar head), a saving of £470.
Financial pressure on the NHS is increasing, and innovative service redesign projects are crucial to delivering the current service at a lower cost. Using a modular implant incurs extra implant, inventory, and instrumentation sterilization costs. For our surgical department, the price of a monopolar prosthesis is £318.50; that of a modular implant (the bipolar head and Exeter stem) is £809.00, a difference of £571.50 for the prosthesis alone. The modular implant also incurs an additional £57.28 for sterilization of instrument trays. Over the period of the retrospective review, 27 modular implants, and an associated cost of £16,977, could have been avoided within our unit. We recognize that within the UK practices can vary and indications for the modular implant differ, with some centers using it routinely. This information is currently unavailable in the NHFD [2].
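As a quick arithmetic check, the stated total is consistent with the per-case implant excess plus the per-case sterilization cost quoted above:

$$27 \times (\pounds 571.50 + \pounds 57.28) = \pounds 16{,}977.06 \approx \pounds 16{,}977.$$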
According to the NHFD 2017 Annual Report, 65,645 patients were admitted with hip fracture, of whom 9.5% (6236) underwent modular hip hemiarthroplasty. Because there are no clinical benefits of modular hip hemiarthroplasty in these patients, and monoblock hemiarthroplasty could be implemented instead, this represents a potential cost saving of £2.9 million per year.
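This projection follows from the £470 per-implant saving quoted above:

$$6236 \times \pounds 470 \approx \pounds 2.93\ \text{million per year}.$$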
The limitations of this present study include its retrospective nature and the small sample size for analysis. In our institute, only monopolar modular hemiarthroplasties were available for comparison with the monoblock system. However, we were able to show that all patients had a sufficient femoral canal size to accommodate an ETS® stem. Furthermore, the Dorr classification was assigned using only AP pelvis views. Often, postoperative films were used when preoperative films were inadequate for measuring the medullary canal at 10 cm, as suggested by Dorr et al. [11], because of the difficulty in standardizing films in this group of patients. This was compensated for by using any available AP pelvis films, both pre- and postoperatively, to achieve the 10 cm measurement.
Lateral femur X-rays are not routinely taken within our unit because of the risk of aggravating the patient's pain.
Conclusions
Female patients with intracapsular femoral neck fracture with small femoral head size, Dorr A and B, require careful preoperative consideration and templating to avoid any unnecessary change of plan and incurring extra cost. We believe the learning curve for stem canal preparation for both trauma patients and patients undergoing hip arthroplasty is steep. Better surgical techniques should be explored through intraoperative education and departmental teaching to achieve adequate lateralization during femoral stem preparation. This can prevent unnecessarily long operative durations and achieve potential cost savings.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Aintree University Hospital issued approval nil. No ethical approval needed. The data collected for this study was acquired as an audit registered in Aintree University Hospital. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Self Tuning Scalar Fields in Spherically Symmetric Spacetimes
We search for self tuning solutions to the Einstein-scalar field equations for the simplest class of `Fab-Four' models with constant potentials. We first review the conditions under which self tuning occurs in a cosmological spacetime, and by introducing a small modification to the original theory - introducing the second and third Galileon terms - show how one can obtain de Sitter states where the expansion rate is independent of the vacuum energy. We then consider whether the same self tuning mechanism can persist in a spherically symmetric inhomogeneous spacetime. We show that there are no asymptotically flat solutions to the field equations in which the vacuum energy is screened, other than the trivial one (Minkowski space). We then consider the possibility of constructing Schwarzschild de Sitter spacetimes for the modified Fab Four plus Galileon theory. We argue that the only model that can successfully screen the vacuum energy in both an FLRW and Schwarzschild de Sitter spacetime is one containing `John' $\sim G^{\mu}{}_{\nu} \partial_{\mu}\phi\partial^{\nu}\phi$ and a canonical kinetic term $\sim \partial_{\alpha}\phi \partial^{\alpha}\phi$. This behaviour was first observed in (Babichev&Charmousis,2013). The screening mechanism, which requires redundancy of the scalar field equation in the `vacuum', fails for the `Paul' term in an inhomogeneous spacetime.
I. INTRODUCTION
Regardless of whether one accepts the standard ΛCDM cosmological model or believes that the observed late time acceleration is due to some exotic and as yet unknown dynamical energy component [2], the cosmological constant problem remains an open issue. The magnitude of the vacuum energy $\rho_\Lambda$ is generically fixed by the UV cut-off of the underlying effective field theory in the matter sector. However, the vacuum energy will gravitate, and cosmological constraints on its magnitude can be placed. The $\sim \mathcal{O}(10^{60})$ order-of-magnitude discrepancy between the value of $\rho_\Lambda$ allowed by cosmological observations and its particle physics expectation value can only be ameliorated by fine tuning the 'bare' value of the cosmological constant appearing in the Einstein-Hilbert action. However, this fine tuning is not stable and must be repeated at each order when calculating loop contributions to the vacuum. Such an instability is an indicator that the actual value of $\rho_\Lambda$ is sensitive to the full UV completion of the effective field theory in the matter sector (see refs. [3,4] for a thorough review).
There have been a number of novel approaches to resolving this issue; see refs. [5][6][7][8][9][10][11][12] for a non-exhaustive list. In this work we focus on an interesting recent proposal in which an attempt is made to dynamically cancel the effect of the vacuum energy using a scalar field non-trivially coupled to the spacetime curvature [11,12]. This effectively renders the cosmological constant problem moot, as we no longer care about the value that $\rho_\Lambda$ takes, since it will not gravitate. The starting point is the Horndeski action, the most general covariant scalar tensor action that gives rise to second order field equations. This action is passed through a theoretical filter, in which three conditions are imposed on the underlying theory: that Minkowski space should be a solution to the field equations regardless of the value of the cosmological constant, that this solution should persist through a discontinuous change to the vacuum energy, and that the equations should admit non-trivial dynamics away from the Minkowski vacuum solution. After applying these conditions to the Horndeski Lagrangian, the authors arrived at the following 'Fab Four' action [11,12]:

$$S = \int d^4x \sqrt{-g}\left(\mathcal{L}_{\rm John} + \mathcal{L}_{\rm Paul} + \mathcal{L}_{\rm George} + \mathcal{L}_{\rm Ringo}\right),$$

with

$$\mathcal{L}_{\rm John} = V_{\rm John}(\phi)\, G^{\mu\nu}\nabla_\mu\phi\nabla_\nu\phi, \qquad \mathcal{L}_{\rm Paul} = V_{\rm Paul}(\phi)\, P^{\mu\nu\alpha\beta}\nabla_\mu\phi\nabla_\alpha\phi\nabla_\nu\nabla_\beta\phi,$$

$$\mathcal{L}_{\rm George} = V_{\rm George}(\phi)\, R, \qquad \mathcal{L}_{\rm Ringo} = V_{\rm Ringo}(\phi)\, \hat{G},$$

where $G^{\mu\nu}$ is the Einstein tensor, and $\hat{G} = R^2 - 4R_{\mu\nu}R^{\mu\nu} + R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta}$ and $P^{\mu\nu\alpha\beta}$ are the Gauss-Bonnet scalar and the double dual of the Riemann tensor, respectively. The four potentials $V_{J,P,G,R}$ (here abbreviated) are arbitrary functions of the scalar field $\phi$.
The theory was originally designed to screen the cosmological constant in an FLRW spacetime. An important property of any attempt to screen the vacuum energy is that the mechanism must be applicable in both a cosmological and an inhomogeneous spacetime, such as the Schwarzschild metric. The cosmological constant will gravitate not only cosmologically but also locally, introducing a modification to the Newtonian potentials leading to potentially observable effects on the motion of celestial bodies (for example, see ref. [4] for a nice review). The resulting upper bound on ρ Λ , while considerably weaker than the cosmological one, remains extremely small relative to particle physics scales.
The persistence of the self tuning mechanism in the 'Fab Four' model in non-vacuum spacetimes is not trivial, and in this work we investigate the conditions under which the metric potentials in a spherically symmetric spacetime remain independent of the magnitude of the vacuum energy, and how we have to modify the original theory to preserve the screening property. The work will proceed as follows. In section II we write the covariant Fab Four field equations and discuss the original self tuning mechanism expounded in [11,12]. In section II A we generalize the model, introducing Galileon terms and show that such an introduction can give rise to screened de Sitter vacuum states as opposed to Minkowski space. In section III we search for spherically symmetric solutions to the field equations in which the metric potentials are independent of the cosmological constant, considering two Fab Four terms individually. We summarize our results in section IV.
II. FIELD EQUATIONS
We restrict ourselves to the simplest version of the Fab Four that can give rise to self tuning solutions for an FLRW spacetime. Specifically, we take as our starting point the Fab Four action above with constant potentials; that is, we fix $V_{\rm George} = 1/16\pi G$, $V_{\rm John} = c_J/M^2$ and $V_{\rm Paul} = c_P/M^5$, where $c_{J,P}$ are dimensionless constants and M is the mass that fixes the strong coupling scale of the scalar field. Since we are considering the simplest case in which the four potentials are constant, the Gauss-Bonnet contribution (Ringo) simply reduces to a boundary term. Varying the action yields the scalar field and Einstein equations, in which we introduce $M_{\rm pl}^{-2} = 8\pi G$. The Fab Four scalar field equation (3) has a particular property that allows the theory to screen an arbitrary cosmological constant $\rho_\Lambda$. To observe the mechanism, let us write the scalar field and Friedmann equations, (7) and (8), for a spatially flat FLRW metric, $ds^2 = -dt^2 + a(t)^2\,\delta_{ij}\,dx^i dx^j$ (we assume spatial flatness throughout this work for simplicity). There are two important qualities associated with the scalar field equation (7). The first is that it trivially vanishes on approach to the Minkowski vacuum state, in which $\dot H, H^2 \to 0$. As the equation is redundant at the vacuum solution, we must use the Friedmann equation to determine the dynamics of $\phi$ at this point. The second is that it contains time derivatives of H, allowing for a dynamical approach to the vacuum. The final condition imposed on the Horndeski action, that the 'self tuning' solution persists during a piece-wise continuous change in the vacuum energy, ensures that the Minkowski vacuum solution is an attractor. If we define a vacuum solution as $\dot H = 0$, $\ddot\phi = 0$, then there are two vacuum states that solve equations (7, 8): $\dot\phi = 0$, $H = H_0$, and $H = 0$, $\dot\phi = \alpha_0$, where $\alpha_0$, $H_0$ are constants. The former case is the standard General Relativistic vacuum, where $H^2 = H_0^2 = 8\pi G\rho_\Lambda/3$ is fixed by the Friedmann equation and $\dot\phi = 0$ solves (7). The latter solution is a vacuum state regardless of the magnitude of $\rho_\Lambda$, with the scalar field $\dot\phi$ expectation value at the vacuum determined by the Friedmann equation. We note that on approach to the Minkowski fixed point, $\dot\phi$ approaches a constant value, and so $\phi$ remains dynamical.
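To make the claimed redundancy concrete, the following is a minimal symbolic sketch, assuming the standard minisuperspace reduction of the John term (the overall normalization is dropped); it is an illustration, not the paper's own derivation.

```python
import sympy as sp

t = sp.symbols('t')
a = sp.Function('a')(t)      # scale factor
phi = sp.Function('phi')(t)  # scalar field
H = a.diff(t) / a            # Hubble rate

# Assumed minisuperspace reduction of the John term: L ~ a^3 H^2 phidot^2
L = a**3 * H**2 * phi.diff(t)**2

# Euler-Lagrange equation for phi: d/dt(dL/dphidot) - dL/dphi = 0
eom = sp.diff(sp.diff(L, phi.diff(t)), t) - sp.diff(L, phi)
print(sp.simplify(eom / (2 * a**3)))
# Rewritten in terms of H = adot/a, this equals
#   3*H**3*phidot + 2*H*Hdot*phidot + H**2*phiddot,
# so every term vanishes as H, Hdot -> 0: the scalar field equation is
# redundant at the Minkowski vacuum, exactly as stated above.
```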
A. de Sitter Self Tuning Solutions
The original Fab Four model was designed to possess a Minkowski vacuum state. However, one can modify the theory slightly and search for different vacua, demanding only that the expansion rate of the spacetime is independent of the cosmological constant. To this end, let us write the scalar field and Friedmann equations for a slightly modified variant of the Fab Four, action (9), in which a canonical kinetic term $c_2\,\nabla_\mu\phi\nabla^\mu\phi$ is added to the John term. For an FLRW metric, the resulting system again has two vacuum solutions. Setting $\dot H = \ddot\phi = 0$, the two vacua are the standard GR case, in which $\dot\phi = 0$, and a second branch with $\dot\phi = \alpha_0$, where $\alpha_0$ is an unimportant constant obtained from the Friedmann equation. Now we have two de Sitter vacuum states: the standard General Relativistic one, and a second in which $H_0$ is completely independent of $\rho_\Lambda$. This de Sitter solution was discussed in [13]; see also [14,15].
One can perform the same trick with the $c_P$ term if we again modify the Fab Four action slightly. Taking as our starting point the Paul term supplemented by the cubic Galileon term $c_3$, action (12), we again derive the scalar field and Friedmann equations. This system of equations also has two de Sitter vacuum states: $H_0^2 = 8\pi G\rho_\Lambda/3$, $\dot\phi = 0$, and $H_0^2 = 2c_3 M^2/(3c_P)$, $\dot\phi = \alpha_0$. It is a curiosity that the Fab Four can yield de Sitter solutions when taken in conjunction with the $c_{2,3}$ Galileon terms. We consider the possibility of deriving similar solutions for the $c_{4,5}$ Galileon contributions in future work.
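Gathering the two screened expansion rates (the second is quoted above; the first is inferred from the tuning condition $c_2 = 3c_J\beta/M^2$ derived in section III B, with $\beta = H_0^2$):

$$H_0^2 = \frac{c_2 M^2}{3 c_J}\ \ \text{(John + canonical)}, \qquad H_0^2 = \frac{2 c_3 M^2}{3 c_P}\ \ \text{(Paul + cubic Galileon)}.$$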
III. SELF TUNING IN SPHERICALLY SYMMETRIC SPACETIMES
The original Fab Four model was concerned solely with self tuning within the context of cosmology. In this work we are interested in the possibility that the screening behaviour might also be present in a spherically symmetric spacetime. It is well known that the cosmological constant problem is not solely the concern of cosmologists. The vacuum energy will introduce a modification to the metric potentials on astronomical scales. Such a modification would, for example, affect observables such as the perihelion advance of Mercury. The resulting upper bound on $\rho_\Lambda$, while considerably weaker than the cosmological one, remains extremely low relative to typical particle physics scales [4]. Therefore the ability of the model to screen a cosmological constant must be applicable to spacetimes other than FLRW metrics if one wishes to avoid fine tuning problems. For existing work in this direction, we direct the reader to [1,[17][18][19][20].
In this section, we search for static, spherically symmetric solutions to the coupled scalar field and Einstein equations for the general metric

$$ds^2 = -h(r)\,dt^2 + \frac{dr^2}{f(r)} + r^2\, d\Omega^2.$$

We do not linearize in h(r), f(r). We try to keep our discussion as general as possible, discussing the conditions under which a screened solution can exist. At a later stage, we take an ansatz for f(r) and h(r) that is a known solution to the Einstein equations with $\rho_\Lambda = 0$, and attempt to recover the same solution if we switch on both a non-zero vacuum energy $\rho_\Lambda \neq 0$ and a scalar field $\phi(r, t)$. We do not assume that the scalar field is static, as we anticipate that any screening mechanism must be dynamical in nature.
A. Asymptotically flat solutions with John?
For a central mass M and vacuum exterior, the Schwarzschild metric provides a full non-linear and stable solution to the Einstein equations. We have $h(r) = f(r) = 1 - \mu/r$, with constant $\mu = 2GM$. We now show that no Schwarzschild (or indeed any asymptotically flat) self tuning solution exists for the 'John' Fab Four model. We stress that our statement only applies if we wish to eliminate the effect of the cosmological constant; if we relax this assumption then numerous solutions can be obtained [21][22][23][24][25][26][27][28][29][30][31][32] (but see also [33]).
Our starting point is the action (9) with $c_2 = 0$. Writing the scalar field equation (17) for general h(r), f(r), we recall the conditions for self tuning already discussed in a cosmological context; principal among them is that the scalar field equation should be redundant on the vacuum solution. If we insert a Schwarzschild ansatz into (17), the equation is trivially satisfied. So far so good; however, this is not a sufficient condition for self tuning to occur in this spacetime. We have two additional hurdles to overcome. The first is that the momentum constraint equation is no longer trivial, as it is for the FLRW metric. This will impose an extra condition on our solution, which our ansatz might not be able to satisfy. The second is that the scalar field acquires a radial profile which must solve the (r, r) Einstein equation as well as the energy constraint. Simply demanding that the scalar field equation is identically satisfied is not sufficient to guarantee that a consistent solution of the form we are imposing exists.
In this section we are searching for asymptotically flat solutions: to leading order as $r \to \infty$ we take f(r), h(r) → 1 and find that the Einstein equations reduce to the asymptotic system (18)-(21). This system of equations has a unique solution, (22), involving arbitrary constants $\lambda_{0,1}$. This is the form that our solution must approach asymptotically if we impose Minkowski boundary conditions. Without making any assumptions regarding h(r), f(r), we can solve the (tr) Einstein equation as $\phi(r,t) = A(t) + B(r)$, for arbitrary functions A(t) and B(r) of t and r respectively. This expression was first derived in [1]. For this solution to reduce to (22) as $r \to \infty$, we must fix $A(t) = -\beta t^2$ and $\lambda_1 = 0$. In addition we fix $\lambda_0 = 0$ without loss of generality. Using only the asymptotic flatness assumption and the momentum constraint equation, we have completely fixed the time dependence of $\phi(r, t)$. One free function remains, B(r), which must satisfy $B(r) \to \beta r^2$ at spatial infinity. If we insert $\phi = A(t) + B(r)$ into the Einstein equations, what remains is a system of ordinary differential equations for f(r), h(r), B(r). For our ansatz to be valid, all explicit time dependence must drop out of the (r, r), (t, t) and (θ, θ) Einstein and scalar field equations. Our procedure will be to collect powers of t in the Einstein equations, and by demanding that the coefficients of these terms are zero we place constraints on the functions h(r), f(r), B(r). A solution exists if we can completely eliminate the time dependence from the Einstein equations, and if the resulting ordinary differential equations admit a consistent solution for h(r), f(r), B(r).
To eliminate the highest power of t from the Einstein equations ($\sim t^4$), one finds that h(r) and f(r) must satisfy a fixed relation. Using this relation, we move to the next-to-leading order ($\sim t^2$) contributions to the (t, t) and (r, r) equations. Eliminating these terms requires us to fix f = 1. We are always free to define our time coordinate such that $h_0 = 1$. The condition that all time dependence drops out of the equations forces us to fix h(r) and f(r) to their Minkowski limits over the whole domain. In this case, the Einstein equations collapse to (18)-(21) for all r, and this solution is simply the same Minkowski self tuning solution first constructed in [11,12]. We conclude that no non-trivial self tuning solution exists if we demand that the spacetime is asymptotically flat.
Let us review what has gone wrong. The condition of asymptotic flatness, together with the momentum constraint equation, completely fixes the time dependence of the solution. What remains are the scalar field and Einstein equations, which become differential equations for h(r), f (r), B(r). For these functions to be independent of the time coordinate, all explicit time dependence must drop out of the equations. This can only be achieved by fixing the coefficients of the powers of t in the Einstein equations to zero separately. However, doing so forces us to fix h(r) = f (r) = 1, in which case our solution reduces to the known Minkowski screened solution.
It is clear that the self tuning conditions are more complicated for spacetimes with reduced symmetries: the momentum constraint places non-trivial conditions on the solution.
B. Schwarzschild de Sitter solutions with John and Canonical?
Inspired by section II A, let us now search for Schwarzschild de Sitter solutions for the modified action (9), which contains both 'John' and a canonical kinetic term for the scalar field. We search for solutions of the form

$$h(r) = f(r) = 1 - \frac{\mu}{r} - \beta r^2, \qquad (26)$$

for arbitrary constants µ, β. The metric potentials are no longer asymptotically flat, so we cannot use equations (18)-(21) to fix the form of $\phi(r, t)$ as in section III A. The scalar field equation (27) is identically satisfied if we fix $c_2 = 3c_J\beta/M^2$; this is the same condition as found in section II A. The momentum constraint equation, after fixing $c_2$, is given by $\dot\phi' = 0$; hence our solution must have the form $\phi = \kappa(t) + \omega(r)$. Consistency of the gravitational equations forces us to choose $\kappa(t) = \kappa_1 t$ for constant $\kappa_1$. This choice ensures that there is no explicit time dependence. The (r, r) equation (28) then admits a solution for ω(r), provided the constant $\kappa_1$ is also fixed appropriately. Somewhat surprisingly, this solution also satisfies the Hamiltonian constraint equation, and hence constitutes an exact solution to the Einstein and scalar field equations. This solution was first obtained in [1]. Let us summarize. The 'John' Fab Four term in isolation can give rise to self tuning Minkowski solutions cosmologically. If we include a canonical kinetic term, then we can also obtain de Sitter solutions starting from an FLRW metric. When we move to a spherically symmetric spacetime, the John term alone cannot give a self tuned Schwarzschild (or any asymptotically flat) solution. However, John with a non-zero $c_2$ can yield an exact Schwarzschild de Sitter solution, where the asymptotic de Sitter state is independent of the magnitude of $\rho_\Lambda$. 'John' with a canonical kinetic term can screen the vacuum energy in both homogeneous and inhomogeneous spacetimes.
C. Spherically Symmetric, Self Tuning Solutions with Paul?
Let us now move on to 'Paul', fixing $c_J = 0$ and $c_P \neq 0$. Initially we also set $c_3 = 0$. We begin by showing, as for the 'John' case, that no consistent, self tuned and asymptotically flat solution to the field equations exists (unless the spacetime is exactly Minkowski). We first note that for any metric in which h(r), f(r) → 1 in the far field limit, the 'Paul' Einstein equations asymptotically take a simple form, where we have used (34) to set $\dot\phi' = 0$ in (33). Aside from the trivial GR case ($\phi$ = constant, $\rho_\Lambda = 0$), this system of equations has a solution, (35). This result is no surprise: for an asymptotically flat spacetime any self tuning mechanism must simply reduce to the Minkowski one, in which $\phi = \phi(r^2 - t^2)$ as $r \to \infty$.
Returning to the full non-linear equations, the solution to the (tr) Einstein equation that satisfies (35) at spatial infinity is given by $\phi(r,t) = A(t) + B(r)$, for arbitrary functions A(t), B(r). To satisfy the asymptotic condition (35), we must set $A(t) = -\beta t^2$ (this function is independent of r, and so is completely fixed by the boundary condition). We have one remaining function, B(r), which approaches $B(r) \to \beta r^2$ as $r \to \infty$. The scalar field and (t, t), (r, r), (θ, θ) Einstein equations then reduce to a system of ordinary differential equations for B(r), h(r), f(r). As in the case of 'John', consistency of the solution requires that all time dependence drops out of these equations; by setting successive powers of t to zero we obtain a system of constraints that the ansatz must satisfy. The highest order time dependence appearing is now $\sim t^6$; the requirement that the coefficients of these terms vanish forces us to fix h′ = 0. This also removes all $\sim t^4$ terms. To eliminate the $\sim t^2$ dependence, we must further fix f = 1. We arrive at the same conclusion as for 'John': no asymptotically flat solution exists in which self tuning occurs, other than the original Minkowski case f = h = 1.
Let us now switch on the $c_3$ term and search for a solution of the form $h(r) = f(r) = 1 - \beta r^2$. We already know that such a solution should exist: it is simply the same de Sitter state obtained for an FLRW spacetime in section II A, recast in static coordinates. Let us derive it. If we impose the following relationship between $c_3$ and $c_P$,

$$c_3 = \frac{3\beta c_P}{2M^2}, \qquad (40)$$

then the scalar field equation is identically satisfied. Furthermore, the (t, r) Einstein equation reduces to $\dot\phi' = 0$, with solution $\phi = \kappa(t) + \omega(r)$. One can ensure that all time dependence drops out of the remaining Einstein equations by setting $\kappa(t) = \kappa_1 t$. What remains is a system of ordinary differential equations for ω(r). One can show that a consistent solution to the equations exists, in which ω′(r) satisfies a cubic polynomial, provided the remaining constants are fixed appropriately. This serves as a useful check on our equations.
Let us finally search for Schwarzschild de Sitter solutions of the form (26), with β and µ independent of $\rho_\Lambda$. Such a solution was found for 'John', and thus far our calculations involving 'Paul' have closely mimicked this case. However, one can show that for 'Paul' one of the key properties of self tuning (redundancy of the scalar field equation) breaks down as soon as we take a non-vacuum metric ansatz.
Let us write the scalar field equation for the action (12) in the schematic form (43). For a vacuum spacetime, $\hat G$ and all components of $P^{\mu\alpha}{}_{\nu\beta}$ and $R^\alpha{}_\beta$ are constant, and by virtue of their contraction with the symmetric tensor $\nabla_\beta\nabla_\alpha\phi$, the tensors $P^{\mu\alpha}{}_{\nu\beta}$ and $\delta^\mu_\nu\delta^\alpha_\beta - \delta^\alpha_\nu\delta^\mu_\beta$ have the same symmetry properties in (43). At a de Sitter point, equation (43) therefore collapses to a simple form for constant $H_0$: the coefficients of both the $(\nabla\nabla\phi)^2$ and $(\nabla\phi)^2$ terms are exactly zero for a de Sitter spacetime if we fix $c_3$ appropriately. However, for a Schwarzschild de Sitter metric, the function $\hat G$ and the components of $P^{\mu\alpha}{}_{\nu\beta}$ contain an explicit radial dependence, which cannot be canceled by the constant $c_3$ term. Furthermore, the components of $P^{\mu\alpha}{}_{\nu\beta}$ are not all equal, so we cannot write $P^{\mu\alpha}{}_{\nu\beta} \propto \delta^\mu_\nu\delta^\alpha_\beta - \delta^\alpha_\nu\delta^\mu_\beta$ in equation (43). The coefficients of the scalar field kinetic terms $(\nabla\nabla\phi)^2$ and $(\nabla\phi)^2$ are therefore generically non-zero over the whole r-domain (in fact they are singular in the limit $r \to 0$), and there is no redundancy in the equations. This destroys the self tuning condition, which requires the scalar field equation to be trivially satisfied on the screened solution ('on-shell' in the language of [11,12]). In fact, on approach to the central singularity $r \to 0$, the scalar field must be fixed by the scalar field equation, independently of both $c_3$ and $\rho_\Lambda$, due to the potentially divergent $\hat G(\partial\phi)^2$ and $P^{\mu\alpha}{}_{\nu\beta}\nabla^\nu\nabla_\mu\phi\,\nabla^\beta\nabla_\alpha\phi$ terms at this point. For comparison, let us write the covariant scalar field equation for 'John' plus canonical, discussed in section III B, for which screened Schwarzschild de Sitter solutions were obtained:

$$\nabla_\mu\left[\left(c_2\, g^{\mu\nu} + \frac{c_J}{M^2}\, G^{\mu\nu}\right)\nabla_\nu\phi\right] = 0.$$

Again, for a de Sitter state all components of $G^\mu{}_\nu$ are simply constant and $G^\mu{}_\nu \propto \delta^\mu_\nu$. The important difference between this case and 'Paul' is that for a Schwarzschild de Sitter spacetime the constancy of $G^\mu{}_\nu$ is preserved, and the field equation remains redundant. This allows screened solutions to be obtained.
IV. DISCUSSION
In this work we have searched for spherically symmetric solutions in the 'Fab-Four' class of scalar-tensor field theories. As the spacetimes that we consider are not necessarily vacuum states, it is not clear a priori that the self-tuning mechanism used within a cosmological context in [11,12] will persist. One must check whether solutions can be obtained in which the metric components are independent of the vacuum energy.
We focused on the simplest class of screening solutions in which the Fab Four potentials reduce to constants. We have argued that no asymptotically flat solutions exist for either 'Paul' or 'John' in isolation, in which the metric potentials are independent of the vacuum energy. The requirement of screening coupled to asymptotic flatness forced the equations to collapse to the original Minkowski space solution of [11,12].
When considering Schwarzschild de Sitter spacetimes, we reproduced the same solution as obtained in [1] for 'John' with a canonical term. We also found a new de Sitter solution, involving Paul and the third Galileon term. However we argued that this de Sitter state could not be promoted to a Schwarzschild de Sitter solution, as this spacetime destroys one of the key conditions for screening -that the scalar field equation is identically satisfied at the 'vacuum'.
It is clear that the condition that the screening mechanism must be applicable in an inhomogeneous spacetime further restricts the viable model space, tightening the noose on the ability of scalar-tensor theories to address the cosmological constant problem. However the Fab Four model survives, and it seems that 'John' plus canonical has a special place in the pantheon of scalar-tensor theories, being able to screen a cosmological constant in both an FLRW and Schwarzschild de Sitter spacetime.
A number of extensions to this work can be undertaken. To begin, it would be of considerable interest to test the stability properties of the screened Schwarzschild de Sitter solution. By introducing an explicit time dependence to the metric potentials, and evolving the combined system φ(r, t), f (r, t), h(r, t) from an initially perturbed state, one can deduce whether the solution is an attractor. In addition to the demand that perturbations do not grow, we should also check the boundedness of the Hamiltonian.
In addition, the author would like to introduce matter to the theory, and consider how one might construct a viable cosmology consistent with current observations [16]. As stated in [34], the simplest Fab Four action has the undesirable property that it screens not only the vacuum energy but also dark matter and baryons. Understanding how the scalar field should couple to matter remains an open question.
Finally, one would like to test the quantum stability of the screening mechanism. Although the existence of screened de Sitter solutions is of considerable interest, critics would argue that we have simply swapped one fine tuning for another, forced as we are to set the mass scale M associated with the scalar field to $M \sim \mathcal{O}(H_0)$, where $H_0$ is the observationally determined (small) expansion rate of the vacuum. Understanding the stability of the scalar field kinetic terms within the context of radiative corrections is really the crux of the issue, and will determine whether the Fab Four provides any improvement over the typical $\rho_\Lambda$ fine tuning. This is a direction of future study.
Relating instance hardness to classification performance in a dataset: a visual approach
Machine Learning studies often involve a series of computational experiments in which the predictive performance of multiple models are compared across one or more datasets. The results obtained are usually summarized through average statistics, either in numeric tables or simple plots. Such approaches fail to reveal interesting subtleties about algorithmic performance, including which observations an algorithm may find easy or hard to classify, and also which observations within a dataset may present unique challenges. Recently, a methodology known as Instance Space Analysis was proposed for visualizing algorithm performance across different datasets. This methodology relates predictive performance to estimated instance hardness measures extracted from the datasets. However, the analysis considered an instance as being an entire classification dataset and the algorithm performance was reported for each dataset as an average error across all observations in the dataset. In this paper, we developed a more fine-grained analysis by adapting the ISA methodology. The adapted version of ISA allows the analysis of an individual classification dataset by a 2-D hardness embedding, which provides a visualization of the data according to the difficulty level of its individual observations. This allows deeper analyses of the relationships between instance hardness and predictive performance of classifiers. We also provide an open-access Python package named PyHard, which encapsulates the adapted ISA and provides an interactive visualization interface. We illustrate through case studies how our tool can provide insights about data quality and algorithm performance in the presence of challenges such as noisy and biased data.
Introduction
A well known maxim from the Machine Learning (ML) literature is that each ML algorithm has a bias that makes it more suitable for some classes of problems than others. This is stated formally by the No-Free-Lunch theorem (Wolpert, 2002), which asserts that any two algorithms perform equally well on average when considering all classes of problems. Learning which classification technique should be used to tackle a particular test problem or instance can be modeled by Meta-Learning (MtL) (Vilalta & Drissi, 2002), which seeks to learn how to map problem characteristics in the performance of ML algorithms ( Vanschoren, 2019).
Despite discussions on the (in)feasibility of universal predictors and how MtL can help their construction (Giraud-Carrier & Provost, 2005), a common and advisable practice in ML studies is to compare multiple models in a controlled set of experiments, using the same datasets and data partitions. Usually, a summary of the results is reported and compared in the form of averages and standard deviation values across all the instances in a dataset, and for all datasets in the study. While traditional, this type of coarse-grained statistical analysis hinders a more fine-grained evaluation of the strengths and weaknesses of the predictive models obtained. Despite a good overall average performance on a dataset, a model may be inaccurate on important subsets of instances (observations within a dataset). This can lead to algorithmic biases (Hajian et al., 2016) and deceptive results when some ML models are put into production. In other words, examining performance on individual instances within a dataset can offer a better understanding of an algorithm's true effectiveness and also allows the identification of possible quality issues, such as inaccuracies and biases, in the dataset.
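One simple way to operationalize this per-instance view is to record, under cross-validation, which individual observations each model misclassifies; the sketch below is purely illustrative (the dataset and classifiers are arbitrary choices, not the ones studied in this paper).

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = load_breast_cancer(return_X_y=True)
models = {
    "logreg": LogisticRegression(max_iter=5000),
    "forest": RandomForestClassifier(random_state=0),
}

# Boolean error mask per model: True where the cross-validated
# prediction for an observation disagrees with its label.
errors = {name: cross_val_predict(m, X, y, cv=5) != y
          for name, m in models.items()}

# Fraction of models misclassifying each observation: a crude
# per-instance hardness score, rather than a single average accuracy.
hardness = np.mean(np.vstack(list(errors.values())), axis=0)
print("hardest observations:", np.argsort(hardness)[-10:])
```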
Recently, a methodology known as Instance Space Analysis (ISA) has been developed for the analysis of algorithms at the instance level, and has demonstrated success when analyzing the performance of classification techniques in ML across multiple datasets (Muñoz et al., 2018). There, popular classification datasets from public repositories are described by a set of meta-features and have their predictive performance assessed for multiple ML classification techniques. A 2-D projection relating these characteristics to the performances of the algorithms is then created, presenting linear trends that reveal pockets of hard and easy datasets, that is, datasets that are harder or easier for the algorithms to classify, and how each classifier performs on them. This provides valuable knowledge towards understanding the domains of competence of each classification technique and for aiding automated algorithm selection. It also reveals the lack of diversity of these common benchmarks and the need for generating more challenging datasets. In that previous work, however, the analysis considered an instance to be an entire classification dataset, and the algorithm performance was reported for each dataset as an average error across all observations in the dataset.
This paper considers a more fine-grained analysis by adapting the ISA framework to the analysis of a single classification dataset, with an instance defined as an observation within the dataset. The idea is to project the original data into a 2-D hardness embedding which can be scrutinized to inspect data quality and to more deeply understand classifier behaviors in a single dataset. This enables closer inspection of the characteristics of the observations (instances) that each classifier struggles with the most. To this end, we revisit the concept of instance hardness, introduced by Smith et al. (2014) for assessing the level of difficulty, or probability of misclassification, of each instance in a classification dataset. By relating meta-features that describe instance hardness to the predictive performance of multiple classifiers, the ISA projections provide valuable information on each classifier's strengths and weaknesses. Furthermore, an analysis of data quality issues in a dataset becomes possible. The main contributions of our paper can therefore be summarized as:

- We propose the analysis of a classification dataset and algorithms by a 2-D hardness embedding, which allows the visualization of the data according to the difficulty level of its individual instances;
- We adapt the ISA framework to obtain this projection, by relating instance hardness meta-features to the predictive performance of multiple classifiers;
- We present and analyze the hardness profile of a few illustrative datasets, including a real dataset of COVID-19 patients with symptoms and comorbidities;
- We analyze how the hardness profile of some datasets changes when subject to interventions such as the introduction of label noise;
- We provide an open-access Python package named "PyHard", which encapsulates the adapted ISA and provides an interactive visualization interface for relating instance hardness to classification performance.
With our open source software contribution PyHard, we expect to leverage the concept of instance hardness and provide users with the possibility of inspecting their data and algorithmic performance beyond simple descriptive summaries and plots. As shown in the experiments presented in this paper, the developed tool allows a better understanding of which characteristics of the training dataset most affect the predictive performance of different ML classification algorithms. The tool also allows the analysis of the effects of typical data quality issues faced in ML. Specifically, the experiments performed seek to answer the following questions:

1. How can we use ISA and instance hardness metrics to understand a dataset at the level of its individual observations?
2. Is it possible to identify and explain any data quality issues by visually inspecting hard instances and their feature values?
3. How robust are the instance hardness metrics and conclusions regarding algorithm strengths and weaknesses in the presence of data quality issues such as label noise?
We show that ISA can help provide evidence and understanding of common issues data scientists and Machine Learning practitioners face when applying classification models to datasets, and how the biases of a dataset or algorithm can become apparent.
The remainder of this paper is organized as follows. Section 2 summarizes the ISA framework and its main methodological steps. Section 3 reformulates the ISA framework for its application on a single dataset. Section 4 presents how the ISA projections can be used for inspecting data and highlighting classifier strengths and weaknesses in a dataset. Real, benchmark and synthetic datasets are analyzed to this end. Section 5 concludes this work, and provides recommendations for future research.
Instance Space Analysis

As originally proposed and described in this section, ISA builds upon the Algorithm Selection Problem (ASP) (Rice, 1976; Smith-Miles, 2009), highlighted as the shaded blue area in Fig. 1. The objective in ASP is to automate the process of selecting good candidate algorithms and their hyperparameters for solving new problems, based on knowledge gathered from similar problems they solved in the past. The following sets from Fig. 1 compose the core of ASP:

- Problem space P: all instances from the problem/domain under consideration;
- Instance sub-space I: contains a sub-set of instances sampled from P for which the characteristics and solutions are available or can be easily computed;
- Feature space F: set of descriptive characteristics, also known as meta-features in ML, extracted from the instances belonging to I;
- Algorithm space A: contains a portfolio of algorithms that can be used to solve the instances in I;
- Performance space Y: evaluating the performance of the algorithms from the set A on the instances from I yields the performance space Y.
The combination of tuples (x, f(x), α, y(α, x)), where x ∈ I is an instance described by meta-features f(x) ∈ F, α ∈ A is an algorithm and y(α, x) ∈ Y gives the performance of α when applied to x, for all instances in I and all algorithms in A, composes a meta-dataset M. A meta-learner S can then be trained in order to select the best algorithm (or a ranking of algorithms) to be recommended for a new instance x based on its meta-features, that is, α* = S(f(x)) = arg max_{α ∈ A} y(α, x). α* is the algorithm (or a set of algorithms) with maximum predictive performance for x as measured by y. The Instance Space Analysis (ISA) framework goes further and extends the ASP analysis to give insights into why some instances are harder to solve than others, combining the information of meta-features and algorithm performance in a new embedding that can be visually inspected. To this end, an optimization problem is solved to find the mapping g(f(x)) from the meta-features' multidimensional space into a 2-D space, such that the distribution of algorithm performance metrics and meta-feature values across instances in the 2-D space displays as much of a linear trend as possible, to assist the interpretation of hardness directions. The 2-D Instance Space (IS) can then be inspected for regions of good and bad algorithmic performance, with ML techniques used to predict algorithms to be recommended for each instance, α* = S′(g(f(x))), providing an alternative approach for ASP as well as the insights permitted by the visualization.
Within the instance space, it is also possible to define areas of strength for each algorithm α, known as the algorithm footprint φ(y(α, x)), that is, areas of the IS where the algorithm performs well. A set of objective measures can be extracted from an algorithm's footprint for evaluating algorithmic power in the IS, such as the area of coverage, purity and density, which will be discussed next. Such meta-knowledge can also support the inference of algorithmic performance for other instances z ∈ P which were not in the sub-space I used to build the IS.
Finally, it is possible to examine the diversity of the projected instances and, when applicable, to enrich the IS with carefully designed new instances (Smith-Miles and Bowly, 2015; Muñoz et al., 2018; Smith-Miles et al., 2021). With such a procedure, one might be able to produce more challenging problem instances, expanding the boundaries of the ISA.
Summarizing, the application of the ISA methodology requires (Muñoz et al., 2018):

i. Building the meta-dataset M;
ii. Reducing the set of meta-features in M by keeping only those able to best discriminate the algorithms' performances;
iii. Creating a 2-D IS from the meta-dataset M;
iv. Building the algorithms' footprints in the IS for measuring algorithmic performance across the IS.
Step (i) is dependent on the problem domain, involving the choice of the problem's instances, meta-features, algorithms and performance measures (the sets I, F, A and Y). The choice of a subset of the meta-features in step (ii) can be done by employing any suitable feature selection algorithm. Here we use a rank aggregation approach, described in Sect. 3. Steps (iii) and (iv) are implemented in the MATLAB language and freely available for use in the MATILDA (Melbourne Algorithm Test Instance Library with Data Analytics) tool. MATILDA also includes a feature selection procedure for performing step (ii), although the user is encouraged to explore independent methods to arrive at a strong feature selection. Our work has re-implemented steps (iii) and (iv) in the Python language; they are gathered in a public package named "PyISpace".
Instance Space construction
We now consider the problem of finding an optimum mapping from the metadata domain to the 2-D instance space. We follow the Prediction Based Linear Dimensionality Reduction (PBLDR) method proposed in Muñoz et al. (2018). Given a meta-dataset M with n instances and m meta-features, let F ∈ ℝ^{m×n} be a matrix containing the meta-feature values for all instances and Y ∈ ℝ^{n×a} be a matrix containing the performance measures of a algorithms on the same n instances. An ideal 2-D projection of the instances for this group of algorithms is achieved by finding the matrices A_r ∈ ℝ^{2×m}, B_r ∈ ℝ^{m×2} and C_r ∈ ℝ^{a×2} which minimize the approximation error

‖F − F̂‖²_F + ‖Y − Ŷ‖²_F    (1)

such that

Z = A_r F    (2)
F̂ = B_r Z    (3)
Ŷᵀ = C_r Z    (4)

where Z ∈ ℝ^{2×n} is the matrix of instance coordinates in the 2-D space and A_r is the projection matrix. Essentially, this optimization problem seeks to find the optimal linear transformation matrix A_r, such that the mapping of all instances from ℝ^m to ℝ^2 results in the strongest possible linear trends across the instance space when inspecting the distribution of each algorithm's performance metric and each feature. The maximization of linear trends for both meta-features and algorithmic performances in the new space is guaranteed by the matrices B_r and C_r in Eqs. (3) and (4).
Assuming that m < n and that F is full row rank (or considering the problem in a subspace spanned by F), an alternative optimization problem over A_r alone is obtained, in which B_r and C_r are replaced by their least-squares optima for a given Z = A_r F:

min_{A_r} ‖F − B_r* A_r F‖²_F + ‖Y − (C_r* A_r F)ᵀ‖²_F    (5)

where B_r* and C_r* denote the least-squares optimal solutions of Eqs. (3) and (4) given Z. Muñoz et al. (2021) solve this problem numerically using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm as the numerical solver, which is also used here. From multiple runs, the solution that achieves maximum topological preservation, as proposed in Yarrow et al. (2014), is chosen, that is, the solution with the maximum Pearson correlation between the distances in the feature space and the distances in the instance space.
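To make the optimization concrete, the following sketch shows how the PBLDR objective of Eqs. (1)-(4) could be evaluated and minimized with BFGS using NumPy and SciPy. This is a minimal illustration, not the MATILDA/PyISpace implementation; the joint optimization over A_r, B_r and C_r (rather than the profiled problem of Eq. (5)) and the toy dimensions are assumptions made for brevity.

```python
import numpy as np
from scipy.optimize import minimize

def pbldr_objective(theta, F, Y):
    """Evaluate the PBLDR approximation error for a flattened
    parameter vector theta = [A_r, B_r, C_r]."""
    m, n = F.shape          # m meta-features, n instances
    a = Y.shape[1]          # a algorithms
    A_r = theta[:2 * m].reshape(2, m)
    B_r = theta[2 * m:4 * m].reshape(m, 2)
    C_r = theta[4 * m:].reshape(a, 2)
    Z = A_r @ F                 # 2-D coordinates of the instances (Eq. 2)
    F_hat = B_r @ Z             # linear reconstruction of features (Eq. 3)
    Y_hat = (C_r @ Z).T         # linear reconstruction of performances (Eq. 4)
    return (np.linalg.norm(F - F_hat, "fro") ** 2
            + np.linalg.norm(Y - Y_hat, "fro") ** 2)

# Toy data: 8 meta-features, 200 instances, 7 algorithms.
rng = np.random.default_rng(0)
F = rng.standard_normal((8, 200))
Y = rng.random((200, 7))

theta0 = rng.standard_normal(2 * 8 + 8 * 2 + 7 * 2) * 0.1
res = minimize(pbldr_objective, theta0, args=(F, Y), method="BFGS")
A_r = res.x[:16].reshape(2, 8)
Z = A_r @ F   # final 2-D instance-space coordinates
```

In practice, multiple restarts from different initializations would be run, keeping the solution with the highest topological preservation, as described above.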
Footprint analysis
A footprint is a region in the instance space where an algorithm is expected to perform well based on inference from empirical performance analysis (Muñoz et al., 2018). Two types of footprints are currently output in the ISA analysis. The first indicates regions of the IS where the algorithm shows a good performance according to a given threshold on algorithmic performance. The second corresponds to regions where the algorithm performs better compared to all others contained in the portfolio.
In order to construct an algorithm's footprint of good performance, first the performance measure values contained in Y must be binarized, so that the performance label for each algorithm on an instance is either easy (also named good in ISA) or hard (or bad in the ISA terminology) based on a user-defined threshold. This is done for each algorithm in the portfolio A, resulting in a binary matrix Y_bin with instances as rows and algorithms as columns. For each algorithm in A, the DBSCAN algorithm (Khan et al., 2014) is then used to identify high-density clusters of easy instances. Next, α-shapes are used to construct hulls which enclose all the points within the clusters (Edelsbrunner, 2010). For each cluster hull, a Delaunay triangulation creates a partition, and those triangles that do not satisfy a minimum purity (the percentage of good instances enclosed within it) requirement are removed. The union of the remaining triangles gives the footprint of the algorithm where good performance is expected based on statistical evidence.
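A simplified sketch of this construction is shown below, using scikit-learn's DBSCAN and SciPy's Delaunay triangulation. The α-shape hulling step is omitted for brevity, and the function name and the eps, min_samples and min_purity parameter values are assumptions, not the settings used by the ISA tooling.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.spatial import Delaunay

def good_footprint(Z, good_mask, eps=0.3, min_samples=5, min_purity=0.75):
    """Approximate an algorithm's good-performance footprint.
    Z: (n, 2) instance-space coordinates; good_mask: boolean per instance."""
    triangles = []
    good_pts = Z[good_mask]
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(good_pts)
    for cluster in set(labels) - {-1}:          # -1 marks DBSCAN noise points
        pts = good_pts[labels == cluster]
        if len(pts) < 3:
            continue
        tri = Delaunay(pts)
        for simplex in tri.simplices:
            verts = pts[simplex]
            # Keep triangles whose enclosed instances are mostly "good".
            inside = Delaunay(verts).find_simplex(Z) >= 0
            purity = good_mask[inside].mean() if inside.any() else 0.0
            if purity >= min_purity:
                triangles.append(verts)
    return triangles   # the union of these triangles approximates the footprint
```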
The footprint of best performance is built similarly, but taking into account the relative performance of the algorithms in the IS. That is, a Delaunay triangulation is formed for dense regions containing instances where the algorithm performs better than the other algorithms in the pool. The best footprints of multiple algorithms are also compared in order to remove contradicting areas due to overlaps. These footprints are generally smaller and may be absent if there is not a region of the IS where the algorithm performs consistently better when compared to the others.
It is also possible to define some objective measures of algorithmic power for each algorithm across the IS by computing:

i. The area of the footprint (A), which can be normalized across multiple algorithms for ease of comparison;
ii. The density of the footprint (ρ), computed as the ratio between the number of instances enclosed by the footprint and its area;
iii. The purity of the footprint (p), which corresponds to the percentage of good instances enclosed by the footprint.
Larger values for these measures provide evidence of a better performance of an algorithm across the IS. A large A implies the algorithm shows a good performance for a large portion of the IS. A large ρ means such an area is dense and contains a large number of instances. Finally, p is large when most of the instances enclosed in A are good, and will be maximum when all instances in A are good. A strong algorithm in the IS is expected to present a large normalized footprint area, with density close to one and purity as close to 100% as the chosen feature set will permit.
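Given the triangles returned by the earlier sketch, these three measures could be computed as follows (a hypothetical helper continuing that example, with the triangle-area determinant written out explicitly):

```python
import numpy as np
from scipy.spatial import Delaunay

def footprint_metrics(triangles, Z, good_mask):
    """Area (A), density (rho) and purity (p) of a triangle-based footprint."""
    def tri_area(v):
        # Half the absolute value of the 2-D cross product of two edges.
        return 0.5 * abs((v[1, 0] - v[0, 0]) * (v[2, 1] - v[0, 1])
                         - (v[1, 1] - v[0, 1]) * (v[2, 0] - v[0, 0]))

    area = sum(tri_area(v) for v in triangles)
    inside = np.zeros(len(Z), dtype=bool)
    for v in triangles:
        inside |= Delaunay(v).find_simplex(Z) >= 0   # instances in any triangle
    density = inside.sum() / area if area > 0 else 0.0
    purity = good_mask[inside].mean() if inside.any() else 0.0
    return area, density, purity
```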
ISA for a single dataset
ISA has been used in the analysis of public benchmark repositories in ML and popular classification algorithms in Muñoz et al. (2018) and, more recently, in the analysis of regression datasets and algorithms (Muñoz et al., 2021). In this section we present how we recast the framework for the analysis of a single classification dataset. Our main interest is in the insights which can be obtained by relating data characteristics and meta-features to algorithmic performance. Therefore, some steps of the original ISA framework are not included, such as the algorithmic recommendation module and the generation of new instances.
Given a classification dataset D containing n_D instances x_i ∈ X, each with m_D input features and labeled with a class y_i ∈ Y, we have:

- Problem space P: is reduced to the dataset D;
- Instance space I: contains all individual instances x_i ∈ D;
- Feature space F: contains a set of meta-features describing instance hardness, also known as hardness measures (HM);
- Algorithm space A: comprises a portfolio of classification algorithms of distinct biases;
- Performance space Y: records the performance obtained by each algorithm in A for each instance x_i ∈ D.
The following subsections present the components of our framework, which are summarized in Fig. 2. Accordingly, for each instance x_i in the dataset D, a set of hardness measures is extracted, and each algorithm in the set A has its performance measured on x_i, as represented by y(α, x_i). This is done in a cross-validation step, where the log-loss error obtained in the prediction of the instance label is recorded for each classification technique. This cross-validated log-loss error is the performance metric stored in Y. A feature selection step, considering the power of the meta-features to describe the performances of the algorithms in A, is performed, resulting in a reduced meta-feature subset f_s(f(x_i), y). Combining the sub-set of selected meta-features with the predictive performances of multiple classification algorithms allows the construction of the meta-dataset M, from which the 2-D Instance Space projection with coordinates z_1 and z_2 is extracted. These steps are further explained in Sects. 3.1 to 3.4.
Hardness measures
One important aspect when performing ISA is using a set of informative meta-features that are able to reveal the capabilities of the algorithms and the level of difficulty each individual instance poses. Here we revisit the definition of Instance Hardness (IH) proposed by Smith et al. (2014) as a property that indicates the likelihood that an instance will be misclassified. Namely, the hardness of the instance x_i with respect to a classification hypothesis h is

IH_h(x_i, y_i) = 1 − p(y_i | x_i, h)    (6)

where h : X → Y is a hypothesis or function mapping input features in an input space X to output labels in an output space Y. In practice, h is induced by a learning algorithm l trained on a dataset D = {(x_i, y_i) | x_i ∈ X ∧ y_i ∈ Y} with hyper-parameters ω, that is, h = l(D, ω). The authors also derive the instance hardness with respect to a set of representative learning algorithms L, obtained by averaging over the hypotheses induced by the algorithms in L:

IH_L(x_i, y_i) = 1 − (1/|L|) Σ_{j=1}^{|L|} p(y_i | x_i, l_j(D, ω_j))    (7)

We adopt this expression throughout the work, instantiating the set L to the pool of classifiers A in ISA.
The idea is that instances that are frequently misclassified by a pool of diverse learning algorithms can be considered hard. On the other hand, easy instances are likely to be correctly classified by any of the considered algorithms.
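Concretely, IH as in Eq. (7) can be estimated from cross-validated class probabilities of a pool of classifiers. The sketch below is a minimal illustration using scikit-learn's cross_val_predict; it assumes each classifier in the pool exposes predict_proba and that y holds integer labels 0..C−1 matching the probability columns.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict

def instance_hardness(X, y, pool, cv=5):
    """IH(x_i) = 1 - mean over the pool of p(y_i | x_i, classifier),
    with probabilities estimated out-of-fold (Eq. (7))."""
    probs = []
    for clf in pool:
        P = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")
        # Probability each classifier assigns to the true class of each instance.
        probs.append(P[np.arange(len(y)), y])
    return 1.0 - np.mean(probs, axis=0)
```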
An additional interest of this paper is assessing which characteristics of the data items make them hard to classify. Smith et al. (2014) define a set of hardness measures (HM) intended to explain why some instances are often misclassified. These are the measures employed as meta-features in F. Since their objective is to characterize the level of difficulty in the classification of each instance in a dataset, they are natural candidates for describing the algorithms' performances on the same data. Table 1 summarizes the HM employed in this work, with their names, acronyms, minimum and maximum achievable values, and references from where they are extracted. We introduced modifications into some of the measures in order to limit and standardize their values. Consequently, all measures are constructed so that higher values are registered for instances that are harder to classify.
For each instance x_i ∈ D, the hardness measures extracted are:

k-Disagreeing Neighbors kDN(x_i): gives the percentage of the k nearest neighbors of x_i which do not share its label, as described by Eq. (8). As in Smith et al. (2014), the value of k is set to 5.

kDN(x_i) = #{x_j | x_j ∈ kNN(x_i) ∧ y_j ≠ y_i} / k    (8)

where kNN(x_i) represents the set of k-nearest neighbors of the instance x_i in the dataset D. The higher the value of kDN(x_i), the harder its classification tends to be, since it is surrounded by examples from a different class. This measure can be computed at a O(n_D ⋅ m_D) asymptotic computational cost for a dataset D with n_D instances and m_D input features.

Disjunct Class Percentage DCP(x_i): builds a decision tree using D and considers the percentage of instances in the disjunct of x_i which share the same label as x_i. The disjunct of an example corresponds to the leaf node where it is classified by the decision tree.

DCP(x_i) = 1 − #{x_j | x_j ∈ Disjunct(x_i) ∧ y_j = y_i} / #Disjunct(x_i)    (9)

where Disjunct(x_i) represents the set of instances contained in the disjunct where x_i is placed. Easier instances will have a larger percentage of examples sharing the same label as them in their disjunct; therefore, we output the complement of this percentage. Building the DT dominates the asymptotic computational cost of this measure and can be performed in O(m_D ⋅ n_D ⋅ log₂ n_D) steps (Sani et al., 2018).

Tree Depth TD(x_i): returns the depth of the leaf node that classifies x_i in a decision tree DT, normalized by the maximum depth of the tree built from D:

TD(x_i) = depth_DT(x_i) / max_depth(DT)    (10)

where depth_DT(x_i) gives the depth at which the instance x_i is placed in the decision tree. There are two versions of this measure, using pruned (TD_P(x_i)) and unpruned (TD_U(x_i)) decision trees. Harder to classify instances tend to be placed at deeper levels of the trees and present higher TD values. This measure also requires building a decision tree, with an asymptotic computational cost of O(m_D ⋅ n_D ⋅ log₂ n_D).

Class Likelihood CL(x_i): measures the likelihood of x_i belonging to its class:

CL(x_i) = 1 − P(x_i | y_i) P(y_i)    (11)

where P(x_i | y_i) represents the likelihood of x_i belonging to class y_i, measured in D, and P(y_i) is the prior of class y_i, which we set as 1/n for all data instances. For ease of computation, the conditional probability P(x_i | y_i) can be estimated considering each of the input features independent from each other, as done in Naive Bayes classification. Larger class likelihood values are expected for easier instances, so we output the complement of this value. The asymptotic computational cost of obtaining the required probabilities from the dataset is O(m_D ⋅ n_D).

Class Likelihood Difference CLD(x_i): takes the difference between the likelihood of x_i in relation to its class and the maximum likelihood it has to any other class:

CLD(x_i) = (1 − [ P(x_i | y_i) P(y_i) − max_{y_j ≠ y_i} P(x_i | y_j) P(y_j) ]) / 2    (12)

The difference in the class likelihood is larger for easier instances, because the confidence that the instance belongs to its class is larger than that of any other class. We take the complement of the measure as indicated in Eq. (12). The probabilities can be calculated as in CL, resulting in an asymptotic computational cost of O(m_D ⋅ n_D).

Fraction of features in overlapping areas F1(x_i): this measure takes the percentage of features of the instance x_i whose values lie in an overlapping region of the classes as:

F1(x_i) = Σ_{j=1}^{m_D} I(x_ij > maxmin(f_j) ∧ x_ij < minmax(f_j)) / m_D    (13)

where I is the indicator function, which returns 1 if its argument is true and 0 otherwise, f_j is the j-th feature vector in D and:

minmax(f_j) = min(max(f_j^{c_1}), max(f_j^{c_2})),
maxmin(f_j) = max(min(f_j^{c_1}), min(f_j^{c_2}))    (14)

The values max(f_j^{y_i}) and min(f_j^{y_i}) are the maximum and minimum values of f_j in a class y_i ∈ {c_1, c_2}. According to the previous definition, the overlap for a feature f_j is measured according to the maximum and minimum values it assumes in the different classes, and one may regard a feature as having overlap if it is not possible to separate the classes using a threshold on that feature's values. F1 defines instance hardness according to whether the example is in one or more of the feature overlapping regions in a dataset. Larger values of F1 are obtained for data instances which lie in overlapping regions for most of the features, implying they are harder according to the F1 interpretation. Multiclass classification problems are first decomposed into multiple pairwise binary classification problems, whose results are averaged. The asymptotic computational cost of this measure is O(m_D ⋅ n_D) for binary classification problems and O(m_D ⋅ n_D ⋅ C) for multiclass problems with C > 2 classes, supposing that each of the classes has the same number of observations, that is, n_D / C.

Fraction of nearby instances of different classes N1(x_i): in this measure, first a minimum spanning tree (MST) is built from D. In this tree, each instance of the dataset D corresponds to one vertex and nearby instances are connected according to their distances in the input space, in order to obtain a tree of minimal cost concerning the sum of the edges' weights. N1 gives the percentage of instances of different classes x_i is connected to in the MST. Larger values of N1(x_i) indicate that x_i is close to examples of different classes, either because it is borderline or noisy, making it hard to classify. This measure requires first computing the distance matrix between all pairs of elements in D, which requires O(m_D ⋅ n_D²) operations and dominates the computational cost of this measure.

Ratio of the intra-class and extra-class distances N2(x_i): first the ratio of the distance of x_i to the nearest example from its class to the distance it has to the nearest instance from a different class (aka nearest enemy) is computed:

IntraInter(x_i) = d(x_i, NN(x_i) ∈ y_i) / d(x_i, ne(x_i))    (15)

where NN(x_i) represents a nearest neighbor of x_i and ne(x_i) is the nearest enemy of x_i:

ne(x_i) = NN(x_i) ∈ y_j ≠ y_i    (16)

Then N2 is taken as:

N2(x_i) = 1 − 1 / (IntraInter(x_i) + 1)    (17)

Larger values of N2(x_i) indicate that the instance x_i is closer to an example from another class than to an example from its own class and is, therefore, harder to classify. As in N1, the larger computational cost involved in obtaining N2 is to compute a distance matrix between all pairs of elements in D, requiring O(m_D ⋅ n_D²) operations.

Local Set Cardinality LSC(x_i): the Local-Set (LS) of an instance x_i is the set of points from D whose distances to x_i are smaller than the distance between x_i and x_i's nearest enemy, as defined in Eq. (19) (Leyva et al., 2014). LSC outputs the relative cardinality of such set:

LSC(x_i) = 1 − |LS(x_i)| / n_D    (18)

LS(x_i) = { x_j | d(x_i, x_j) < d(x_i, ne(x_i)) }    (19)

where ne(x_i) is the nearest enemy of x_i (Eq. (16)), that is, the example from another class that is closest to x_i. Larger local sets are obtained for easier examples, which are in dense regions surrounded by instances from their own classes. Therefore, in Eq. (18) we output a complement of the relative local set cardinality. The asymptotic cost of LSC is dominated by the computation of pairwise distances between all instances in D, resulting in O(m_D ⋅ n_D²) operations.

Local Set Radius LSR(x_i): takes the normalized radius of the local set of x_i:

LSR(x_i) = 1 − min( 1, d(x_i, ne(x_i)) / max_{y_j = y_i} d(x_i, x_j) )    (20)

Larger radiuses are expected for easier instances, so we take the complement of such measure. As in LSC, the asymptotic cost of LSR is O(m_D ⋅ n_D²).

Usefulness U(x_i): corresponds to the fraction of instances having x_i in their local sets (Leyva et al., 2015):

U(x_i) = 1 − #{ x_j | x_i ∈ LS(x_j) } / n_D    (21)

If x_i is easy to classify, it will be close to many examples from its class and therefore will be more useful. We take the complement of this measure as output. The asymptotic cost of U is O(m_D ⋅ n_D²), since the cost of computing the distance between all pairs of elements in the dataset is dominant.

Harmfulness H(x_i): the number of instances having x_i as their nearest enemy, normalized by n_D (Leyva et al., 2015):

H(x_i) = #{ x_j | ne(x_j) = x_i } / n_D    (22)

If x_i is the nearest enemy of many instances, this indicates it is harder to classify and its harmfulness will be higher. The asymptotic computational cost of H is also O(m_D ⋅ n_D²).
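As an illustration of how these measures translate into code, a minimal kDN implementation using scikit-learn's NearestNeighbors could look like this (the function name is ours, and X and y are assumed to be NumPy arrays):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def kdn(X, y, k=5):
    """k-Disagreeing Neighbors: fraction of the k nearest neighbors
    of each instance that carry a different label (Eq. (8))."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    # The first neighbor of each instance is the instance itself, so drop it.
    neighbor_labels = y[idx[:, 1:]]
    return (neighbor_labels != y[:, None]).mean(axis=1)
```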
All measures are computed using the entire dataset. Concerning the computational cost of the measures, all of them are polynomial in the number of features and observations. Although the distance-based measures are the most costly, one must observe that the matrix of pairwise distances between all elements must be computed only once and can be reused afterwards to compute all of these measures (namely N1, N2, LSC, LSR, U and H). The same reasoning applies to the measures that require building a decision tree model, which can be induced only once and have its information extracted for computing DCP and TD, and to the measures based on the Naive Bayes classification rule (CL and CLD).
Algorithms and performance assessment
The candidate classification algorithms considered in this work are: Bagging (Bag), Gradient Boosting (GB), Support Vector Machines (SVM, with both linear and RBF kernels), Multilayer Perceptron (MLP), Logistic Regression (LR) and Random Forest (RF), yielding a pool of seven classifiers. They are representative of different learning paradigms commonly employed in the ML classification literature, but alternative algorithms can easily be added to the pool.
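In scikit-learn terms, such a pool could be declared as below. The hyper-parameter values shown are library defaults rather than the tuned settings used in our experiments; probability=True is needed for SVC to expose class probabilities (via internal Platt scaling).

```python
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

pool = {
    "Bagging": BaggingClassifier(),
    "GradientBoosting": GradientBoostingClassifier(),
    "SVM_linear": SVC(kernel="linear", probability=True),
    "SVM_RBF": SVC(kernel="rbf", probability=True),
    "MLP": MLPClassifier(max_iter=1000),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(),
}
```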
For assessing the classifiers' performances on a dataset D, we first split D into r folds (r = 5 by default) according to the cross-validation (CV) strategy, such that each instance belongs to only one of the r test sets. At each round, r − 1 folds are used for training the classifiers and the left-out fold is used for testing. Therefore, for each instance and algorithm combination we have one performance estimate. Repeating this process yields an interval estimate, which may be more reliable; the number of repetitions can be set by the user.
At first, we could simply record whether the classification algorithms classify the instances correctly or not. But a more fine-grained evaluation can be obtained if the confidences the classifiers have in their predictions are considered. Therefore, we opted for a measure which takes into account the probabilities associated with each class, namely the log-loss or cross-entropy performance measure:

logloss(x_i) = − Σ_{c=1}^{C} y_{i,c} log(p_{i,c})    (23)

where C is the number of classes the problem has, y_{i,c} is a binary indicator of whether the class c is the actual label of x_i (1) or not (0), and p_{i,c} is the calibrated probability the classifier attributes x_i to class c. Platt scaling is employed for calibrating the probability values (Platt, 1999; Böken, 2021).
A hyper-parameter optimization step was added in our setting, acting as an inner loop for each of the training sets of the outer CV. Within this inner loop, a candidate set of hyper-parameter values is evaluated through cross-validation upon the training data from the outer loop. We employ a Bayesian optimization algorithm (Bergstra et al., 2011; Snoek et al., 2012; Bergstra et al., 2013) to search a range of hyper-parameter values for each classifier in the pool. The objective is to get closer to the best predictive performance achievable for the given data instances and classification algorithms. Optimizing hyper-parameters is not so common in meta-learning studies, due to the high computational cost involved when many datasets and algorithms are used, but we consider that it can bring a significant improvement in classification performance. Nonetheless, one may opt to disable the hyper-parameter optimization when convenient. Figure 3 shows a schematic representation of the complete process described previously.
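The per-instance log-loss bookkeeping can be sketched as follows; this minimal version reuses the pool dictionary from the earlier sketch, assumes integer labels 0..C−1 aligned with the predict_proba columns, and omits the Bayesian hyper-parameter optimization inner loop and the Platt calibration step for brevity.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def per_instance_logloss(X, y, pool, r=5, eps=1e-15):
    """Cross-validated log-loss of each classifier for each instance (Eq. (23))."""
    losses = {name: np.empty(len(y)) for name in pool}
    skf = StratifiedKFold(n_splits=r, shuffle=True, random_state=0)
    for train_idx, test_idx in skf.split(X, y):
        for name, clf in pool.items():
            clf.fit(X[train_idx], y[train_idx])
            P = clf.predict_proba(X[test_idx])
            # Probability assigned to the true class, clipped to avoid log(0).
            p_true = np.clip(P[np.arange(len(test_idx)), y[test_idx]], eps, 1)
            losses[name][test_idx] = -np.log(p_true)
    return losses  # dict: classifier name -> log-loss per instance
```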
Feature selection
According to Muñoz et al. (2018), it is advisable to keep just the most informative meta-features in the meta-dataset M before the IS projection is generated. We performed a supervised meta-feature selection in M, based on the continuous response value y(α_j, x_i), that is, the log-loss performance of the j-th classifier α_j in the pool for the instances in D. Since there are seven classification algorithms in the pool, a ranking of meta-features is obtained for each one of them. Next, a rank aggregation method is employed to merge these subsets, as suggested in Prati (2012).
Taking the hypothesis that no previous knowledge about the data domain is available, a more general criterion for feature ranking is preferred. Information-theoretic methods offer this domain-agnostic characteristic, being independent of any learning algorithm and capable of capturing linear and non-linear relationships present in the data (Li et al., 2017; Gao et al., 2015). A general formulation is presented in Eq. (24):

J(f_k) = MI(f_k; y_j) + eval_{f_i ∈ S_j} [ MI(f_i; f_k), MI(f_i; f_k | y_j) ]    (24)

where f_k represents the k-th feature vector, y_j is the response variable for the j-th algorithm and S_j is the set of features already selected for the j-th algorithm. The term MI(f_k; y_j) is the mutual information, and MI(f_i; f_k | y_j) is the conditional mutual information. eval is an arbitrary function, and different options for it lead to different methods. The feature set S_j is initially empty, and the first feature chosen is the one showing maximum mutual information with the response variable y_j. When a next feature is selected, according to its score J(f_k), it is added to S_j, and the process continues until |S_j| = n_f, a desired number of selected features, is reached, in a forward feature selection process. By default, we set n_f = 10, which is the maximum recommended number of meta-features for the ISA projection tool.
The method employed for evaluating the meta-features is the Minimum Redundancy Maximum Relevance (MRMR) criterion, described in Eq. (25). It gradually reduces the effect of feature redundancy as more features are selected (Li et al., 2017):

J_MRMR(f_k) = MI(f_k; y_j) − (1/|S_j|) Σ_{f_i ∈ S_j} MI(f_i; f_k)    (25)

The rationale for this choice is that some meta-features may be redundant, since they are built on similar assumptions about the source of difficulty of the instances. There is a tradeoff between minimizing the number of redundant meta-features in the selected set, as an attempt to diversify it, and keeping the most relevant features. The MRMR method is an interesting choice, since it rejects redundant features at first, but tolerates the redundancy as it becomes more difficult to select informative features.
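A minimal forward MRMR selection over the meta-features could be written as below, using scikit-learn's mutual_info_regression as a stand-in estimator for MI; the function name and the recomputation of pairwise MI inside the loop are simplifications of our own, not the exact implementation used in PyHard.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mrmr(F, y, n_f=10):
    """Forward MRMR selection (Eq. (25)). F is (n_instances, n_features);
    y is the continuous log-loss response of one algorithm."""
    n_feat = F.shape[1]
    relevance = mutual_info_regression(F, y)        # MI(f_k; y_j) per feature
    selected, remaining = [], list(range(n_feat))
    while len(selected) < min(n_f, n_feat):
        scores = []
        for k in remaining:
            if selected:
                # Average redundancy of candidate k w.r.t. already selected features.
                redundancy = np.mean(
                    [mutual_info_regression(F[:, [k]], F[:, s])[0] for s in selected]
                )
            else:
                redundancy = 0.0
            scores.append(relevance[k] - redundancy)
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected
```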
Since we have seven different classification algorithms, there will be seven feature sets S_j (j = 1, …, 7), ranked according to their importance as measured by Eq. (25). They are joined by an Instant Runoff Voting rank aggregation method (Hillinger, 2004). The top n_f meta-features in this aggregated ranking are kept, whilst the remaining ones are discarded.
Instance space representation and footprints
Given the meta-dataset M with a selected subset of meta-features and the log-loss classification performances of the seven algorithms considered in this paper, further steps of the work employ the ISA functionalities of generating the 2-D IS and the footprints of the algorithms using the PyISpace implementation. We included in the former package a rotation step in order to standardize the interpretation of the IS projections. The rotation is performed so that the hard instances are always placed towards the upper left corner of the space, whilst the easier instances are placed towards the bottom right corner of the space. To achieve such a transformation in the instance space, a standard rotation by an angle θ, as presented in Eq. (26), is applied, since the original IS is always centered at the origin:

R(θ) = [ cos θ   −sin θ
         sin θ    cos θ ]    (26)

This rotation preserves the distances between instances as in the original instance space, so that there are no topological changes.
To proceed with this rotation step, we first need to find the angle of the original IS relative to the abscissa. For this, we consider the vector pointing to the centroid of the hard instances. In order to locate this centroid, we use the same binarized performance matrix Y_bin used for building the footprints of the algorithms (Sect. 2.2). This matrix indicates, for each instance and algorithm combination, whether a good or bad predictive performance was attained when compared to a threshold. Next, we calculate the mode of the categorization each instance has, as either good or bad, relative to the algorithms' performances. An instance for which the majority of the algorithms achieve a good predictive performance is categorized as easy. In contrast, the bad instances are those for which the majority of the algorithms do not attain a good predictive performance, corresponding to the hard instances. Once we know the location of the bad (hard) instances in the original IS, it is straightforward to find their centroid in this space. Lastly, the rotation angle is θ = 135° − θ_bad, where θ_bad is the angle of the vector pointing to the centroid of the instances for which a bad predictive performance was achieved most of the time. This angle assures that hard instances are placed towards the upper left of the IS and the easy instances are placed in the bottom right.
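The rotation step can be written compactly. The sketch below assumes Z holds the (n, 2) IS coordinates centered at the origin and hard_mask flags the instances with a bad majority vote; the function name is ours.

```python
import numpy as np

def rotate_instance_space(Z, hard_mask):
    """Rotate the IS so hard instances point to the upper left (Eq. (26))."""
    centroid = Z[hard_mask].mean(axis=0)
    theta_bad = np.arctan2(centroid[1], centroid[0])
    theta = np.deg2rad(135.0) - theta_bad
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return Z @ R.T   # pairwise distances between instances are preserved
```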
The definition of what constitutes a good and a bad predictive performance of the classification algorithms according to the log-loss metric for each individual observation, required for building the binarized matrix Y_bin, is based on the results of the following proposition, whose proof is enclosed in Appendix A:

Proposition 1 (cross-entropy bounds) For any classification problem with C classes there is a lower bound L_lower and an upper bound L_upper for the cross-entropy loss (aka log-loss) such that: if logloss(x_i) < L_lower, the prediction was correct; if logloss(x_i) > L_upper, the prediction was incorrect; and if L_lower ≤ logloss(x_i) ≤ L_upper, the prediction can be either correct or incorrect, where logloss(x_i) is the log-loss of instance x_i. Specifically, these bounds can be set as L_lower = −log(1/2) = log 2 and L_upper = −log(1/C) = log C.
Therefore, if for an instance x_i the measured log-loss value is lower than log 2, one can be certain this instance was correctly classified. On the other hand, a measured log-loss value larger than log C implies the instance was certainly misclassified. However, if the value of the log-loss metric falls in the interval between log 2 and log C, nothing can be said about whether the prediction was correct or incorrect. Based on the previous proposition, as a heuristic, the log-loss performance of an algorithm for a given instance x_i is considered good if its value is lower than the harmonic average of log 2 and log C. The idea is to include as many correctly classified instances as possible, while avoiding the inclusion of too many misclassified instances; stricter or looser threshold values can be set if desired. Computationally, as defined in Eq. (23), the log-loss calculation requires the actual label of the instance and the probabilities of classification assigned by a predictor to each of the classes.
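Using Proposition 1, the binarization of the per-instance log-loss values reduces to a single threshold check; a minimal sketch, assuming a losses array shaped (instances × algorithms) as produced earlier:

```python
import numpy as np
from scipy.stats import hmean

def binarize_performance(losses, n_classes):
    """Label each (instance, algorithm) log-loss as good (True) or bad (False),
    using the harmonic mean of the bounds log 2 and log C as threshold."""
    threshold = hmean([np.log(2), np.log(n_classes)])
    return losses < threshold   # entries of the binarized matrix Y_bin
```

Note that for a binary problem (C = 2) the two bounds coincide and the threshold is simply log 2.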
Finally, we also introduce in this paper the concept of "instance easiness footprint", by taking the values from Eq. (7) as input and defining the easiness of an instance according to a threshold on IH, set by default to 0.4, implying that on average the probability assigned to the correct class of an instance by the pool of classifiers is 0.6; this value can be made stricter or smoother if desired. With such an approach, it is possible to obtain an indication of regions of the instance space in which the instances consistently receive a good classification score and are, therefore, easier to classify. As a footprint, objective measures of area, density and purity can be extracted from these regions. Therefore, larger areas are expected for datasets for which a larger portion of the instances across the instance space are considered easier.
Experiments
One of the main contributions of our work, along with instantiating the IH-ISA framework as presented previously, is PyHard, a Python implementation of the framework, which is hosted on PyPI. PyHard is a self-contained solution, which encapsulates the ISA framework and runs an application to explore the results visually and interactively. This visualization app allows the user to interact with the IS of a dataset and select regions to be further inspected by interactive plots or saved. Through the configuration file, it is possible to choose the set of meta-features and the algorithm portfolio, and to enable and configure the hyper-parameter optimization and feature selection steps. In this section we show the potential insights obtained by constructing the ISA of a dataset through case studies.
Case Study: ISA for inspecting a COVID prognosis dataset
Whenever an individual is tested for COVID-19 diagnosis in the Brazilian territory, some information must be entered into specialized government systems. They include the presence or absence of symptoms commonly associated with COVID-19 and comorbidities which can impact the severity of the cases. The São José dos Campos municipal health department gathers this information and joins outputs from multiple governmental systems in order to follow up on cases and formulate public health strategies. This is a large city from the São Paulo state with a population around 750,000 inhabitants, an industrial economy and a high human development index according to Brazilian standards. In a partnership with the health department of this city, part of this data was formatted for predictive analysis to support public health decision making.
Here we present an analysis of a dataset containing anonymized data from citizens diagnosed with COVID-19, collected from March 1st, 2020 to April 15th, 2021. The task is to predict whether a citizen will require hospitalization or not, taking as input the following attributes: age, sex, initial symptoms (fever, cough, sore throat, dyspnea, respiratory distress, low saturation, diarrhea, vomit and other symptoms) and comorbidities (chronic cardiovascular disease, immunodeficiency-immunodepression, chronic kidney disease, diabetes mellitus, obesity, chronic respiratory diseases and other risks). The idea is to take information routinely supplied during COVID testing for estimating the amount of resources from the city's health system that may be required, supporting public health management policies.
The "hospitalization" dataset used here has data from 5,156 citizens, half of which were hospitalized. Our objective is to analyze the hardness profile of this dataset and to extract some insights from the visualization and interaction with its IS. Figure 4 presents the ISA of the hospitalization dataset. Each point corresponds to an observation of the dataset, that is, a confirmed COVID case. In Fig. 4a, the observations are colored according to their IH values, with harder observations colored in red and easier observations colored in blue. Using our rotation step, the hard instances are concentrated in the upper left of the plot and easier instances are placed towards the bottom right. In Fig. 4b the same observations are colored according to their original labels, where red points correspond to hospitalized The combined analysis of these plots already provides us with some interesting insights: most of the observations are easy to classify correctly by most of the algorithms, but the group of hospitalized citizens in most of the cases has either an easy profile, being placed in the bottom of the IS, or a hard profile, being colored in red regarding IH. In contrast, patients who were not hospitalized had mostly an intermediate hardness level, which contains observations of low and medium IH values. Nonetheless, there is an specific cluster of non-hospitalized subjects that were very hard to classify correctly (with z 1 coordinates lower than -2 and z 2 coordinates between 1 and 2), which will be referred as "anomalous non-hospitalized group" (acronym ANH) hereafter. The hospitalized individuals placed near the instances of the non-hospitalized class in the bottom left of the intermediate cluster of points in the IS (with z 1 coordinates lower than -1 and z 2 coordinates between 0 and 1) are also worth investigating, since their hardness profile is more similar to that of observations of the opposite class. They will be referred as "anomalous hospitalized group 1" (AH1) from here on. A second group of interest from the hospitalized class is composed of the hard instances in the top of the IS (with z 1 coordinates between -2.5 and -1 and z 2 coordinates between 2 and 3) and will be named "anomalous hospitalized group 2" (AH2), since most of the data from this class is regarded as easy. Note that the anomalous term refers to the expected hardness profile of the instances of both classes, as evidenced by the ISA projection. Figure 5a to f show the IS projection of the hospitalization dataset colored according to the values of some of the meta-features used, those which were more explanatory of the hardness profile of the data. The following interesting aspects can be highlighted: -Many of the observations with high IH values at the top of the IS have also a low likelihood of belonging to their own classes (as measured by CL in Fig. 5a) and are placed in disjuncts with elements which do not share their labels (measured by DCP in Fig. 5b). They include mostly instances from the ANH and AH2 groups, although the DCP measure also highlights the instances from the AH1 group as hard. The high values for these measures provide evidence that the observations from these groups have characteristics which overlap with those from the other classes; -Most of the instances with high IH values at the top of the IS are also close to elements from the opposite class (measured by N1 in Fig. 5c), have lower inter class distance compared to their intra class distance (measured by N2 in Fig. 
5d) and have a high proportion of nearest neighbors with labels which differ from their own (measured by kDN in Fig. 5e). But there are also other hard observations within the left borders of the different groups of instances in the ISA according to these measures. They can be borderline cases, that is, observations near the decision frontier required for separating the classes. The groups ANH and AH1 are highlighted as hard according to these measures too, but the group AH2 has mixed results. N2 in particular indicates that some of the observations from AH2 are closer to an instance of their class than to their nearest neighbors from another class; -The TD P values are high for instances from the non-hospitalized class with exception of those in the ANH group. They are also high for a subset of the instances from the hospitalized class, mostly those in the AH2 group. Interestingly, the cluster ANH has a low TD P value. This measure involves building a pruned decision tree from the data and we can see that the observations from this group are classified at depths similar to those of the hospitalized observations. The same happens for the AH1 and AH2 instances, which are placed at depths similar to those of the observations of the non-hospitalized class. Combining these results to those of the previous meta-features, we can infer these groups probably contain noisy or outlier instances, which have input data characteristics similar to that of the opposite class.
In Fig. 6 we present the good (in green) and best (in purple) footprints of the classifiers which attained the largest and the smallest footprint areas in the IS built for the hospitalization dataset. The easiness footprint, encompassing the instances which are easier to classify by most of the algorithms of the portfolio, is also shown in Fig. 6c. Bagging (Fig. 6a) had a normalized footprint area of 0.856, with a density of 1.055 and a purity of 0.972. The MLP (Fig. 6b) had a normalized footprint area of 0.926, with a density of 1.007 and a purity of 0.992. Therefore, MLP showed a good predictive performance in a larger area of the IS, which was also slightly purer and encompasses good instances in 99% of the cases. Indeed, the MLP showed a good predictive performance for most of the instances except for those of the ANH and AH2 groups, while Bagging was not so consistent for instances in the non-hospitalized class. But it is interesting to notice that Bagging had some areas of best performance for some hard instances in the top of the IS when compared to other algorithms. In fact, Bagging was the algorithm with the largest normalized best-footprint area in our portfolio. The easiness footprint area for this dataset is 0.859, with a density of 1.06 and a purity of 0.998, and encompasses only the easiest instances from both the hospitalized and non-hospitalized classes. This excludes all instances from groups ANH, AH1 and AH2. Therefore, most of the observations from the hospitalization dataset are easy to classify, except for those in the former groups, which are more challenging. Recommending a particular algorithm for new observations is beyond the scope of this paper, but one might expect an algorithm to perform well for instances with characteristics similar to those encompassed in its footprint, given its high purity level. The PyHard tool allows users to save and inspect the characteristics of the instances from a selected footprint to support such studies.
The PyHard tool also allows plotting the values of the raw input attributes along the IS. Figure 7 shows the distributions of some of the attributes with interesting patterns in the IS, which can help to understand the hardness profile of the data. Age is a well-known feature influencing COVID severity and, consequently, hospitalization. According to Fig. 7a, older people (in warmer colors) are predominant in the easy group of hospitalized individuals, but age also imposes a higher level of difficulty in classifying some of the non-hospitalized individuals. Indeed, the easiest cases of non-hospitalized citizens tend to be younger. Some hospitalized people from the AH2 group, which are hard to classify, are also younger, while those from the groups AH1 and ANH are older. According to the medical literature (Barek et al., 2020), older people tend to evolve to worse cases of COVID, therefore one might expect cases requiring hospitalization to be more frequent among elderly people. This can partially explain why some instances from the ANH and AH2 groups are hard to classify, since their age patterns conflict with what is commonly expected.
All other raw attributes analyzed are binary, indicating whether a citizen reported the presence of some symptom (colored in red) or did not report such a symptom (colored in blue), such as low saturation (Fig. 7b) and respiratory distress (Fig. 7c), or some comorbidity, namely obesity (Fig. 7d), diabetes (Fig. 7e) and other risks (Fig. 7f). All patients who did not require hospitalization, except for some from the ANH group, had no saturation or respiratory distress issues and did not report obesity or other risks. The ANH group had some patients with low saturation, respiratory distress and other risks reported (there is also one case of obesity in this group), which can influence why they are harder to classify, since these are patterns of symptoms and comorbidities observed more commonly in hospitalized cases. In contrast, the individuals from the AH1 and AH2 groups have a good saturation, no respiratory distress, no obesity and no other risks. Therefore, they also have contradicting patterns regarding these aspects, while still requiring hospitalization. Diabetes did not have much influence on the difficulty level of the instances, and individuals with and without diabetes are evenly distributed in the IS. The same happens for other features, which are omitted here.
Our analysis allows us to highlight some groups of observations from the dataset with raw attribute values that conflict with their expected outcomes. These are cases worthy of closer examination by a domain expert and may either correspond to outliers or have been wrongly labeled.
Based on these insights, a follow-up investigation of the raw data from the observations in ANH identified three groups of individuals: people who were cured and did not require hospitalization, despite their symptoms and comorbidities; people who quickly progressed to death and did not seek hospitalization in time; and people who were actually hospitalized, but for whom missing information on their hospitalization date led to erroneous data labeling. While the first two groups were correctly labeled and can be considered outliers, the latter are incorrectly labeled and should be discarded from the dataset or corrected in order to build a more reliable prediction model. The group AH1 has many individuals with few and mild symptoms who were hospitalized. Whilst they are anomalous regarding the expected characteristics of the hospitalized class, it is possible that their forms have missing information on some of the symptoms, bringing noise to the input attribute values. AH2 can also present some noise in the input attributes, but its members are correct yet atypical observations considering the general patterns of the input attributes from the other observations of the class they belong to.
Summarizing, the prediction of hospitalization risk from standard information collected in forms filled by the population when they are tested is practicable and can be used to support public health decision making. Nonetheless, some of the observations may have been wrongly filled in or are incomplete, which impacts the hardness level of their classification. There is a general need for more careful data collection in Brazil, which is often neglected but can be of great value in fighting the pandemic in one of the most affected countries worldwide. Our tool allows us to highlight such problematic observations and to explore the main reasons why they are hard to classify, whether due to labeling errors, noisy inputs or genuine anomalies. Such insights offer a richer analysis as the foundation for building trusted ML predictive models in practical and critical contexts.
Case Study: ISA for detecting algorithmic bias using the COMPAS dataset
This section builds an IS for the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) dataset. It presents data regarding crime recidivism and is commonly employed in the literature for analyzing sample and algorithmic biases (Khademi & Honavar, 2020). The dataset has 5,278 instances and 13 input attributes; 47% of the instances are from the two-year recidivist class, so the dataset is quite balanced. Some of the attributes are nominal, but since they are binary, we treat them as numerical flags with either 0 or 1 values.
Algorithmic bias can occur whenever sensitive input attributes from a dataset influence the predictive results when they should not. In the case of recidivism data, aspects such as race and gender are protected attributes and should not be decisive in determining if an offender is likely to commit new crimes (Corbett-Davies & Goel, 2018). In this case study we show how ISA can be used to identify potential biases in the errors of the predictors induced from the COMPAS dataset. For this, we compare how low IH and, mainly, high IH instances differ when running the analysis with race as an input attribute and when removing it for being a protected attribute. Our reasoning is that even though a protected attribute is not used to train a model, it may still influence the model indirectly due to an inherent data collection bias. If we train a model without protected attributes, but find that the error rates differ across races or genders, then we can suppose that bias is present.
In one analysis, we employed all 13 attributes of the COMPAS dataset as input to the ISA, while in the other we removed race_African-American and race_Caucasian from the attribute list used as input to the classifiers. As a result, two instance spaces were generated. Figure 8 presents the instance spaces of the COMPAS dataset with (in Fig. 8a) and without (in Fig. 8b) race as an attribute, where the points are colored according to their IH values. The IH values are quite linearly distributed in each IS, and they clearly delimit areas of good and poor predictive performance. The IS projection of Fig. 8a is similar to that of Fig. 8b, offering first evidence that the hardness profile of the dataset is maintained regardless of the usage of the race attribute. It took 20 minutes and 35 seconds to generate the IS of the original COMPAS dataset and 19 minutes and 26 seconds to obtain the IS of the dataset counterpart which eliminates the race attribute. The experiments were run on a laptop with a 2.40 GHz Intel i7-5500U processor and 8 GB of RAM, running Ubuntu version 20.04.
Using PyHard's Lasso tool, we manually selected regions of good/easy observations (blue points in the bottom of the IS) and bad/hard observations (red points in the top of the IS) from both instance spaces and compared their distributions. These selections are presented in Appendix B. The interpretation of these selections is as follows: an area with points predominantly in red represents instances that were most likely misclassified, whereas an area with points predominantly in blue represents instances which were most likely classified correctly. Therefore, in the case of binary classification, a selection of good points is a surrogate for true positives plus true negatives (TP + TN), and a selection of bad points is a surrogate for false positives plus false negatives (FP + FN). Those sums can be further broken down using class information, if necessary. The focus of our analysis will be on the most relevant attributes of the dataset considering potential algorithmic bias: race_African-American and race_Caucasian, and in addition priors_count, age and sex.
First, we show in Fig. 9 the distribution of race values considering the entire dataset and the selections with low IH and high IH values, respectively, with and without race as an attribute. Comparing the distributions, what stands out the most is the inversion in the proportion of the classes (recidivism) when comparing low IH and high IH values. That is, observations from the no-recidivism class are more frequent in the low IH selections and observations of the recidivism class are more frequent in the high IH selections. This provides evidence of greater difficulty in classifying repeat offenders. The tendencies when race is and is not used as an attribute are also similar in the plots, with an exception seen for the low IH instances of the race_African-American group, where the proportion of non-recidivists is lower than that of recidivists.

(Fig. 9: Distributions of the attributes race_African-American and race_Caucasian for the entire dataset (left) and in different data selections on the ISA built with (center) and without (right) race as an attribute.)
Interestingly, the proportions of low IH instances per class and per race in Fig. 9 are very similar despite the usage or not of race as an input attribute. High IH observations have some minor variations, with an increased proportion of instances from the recidivism class for African American individuals when the race attribute is disregarded. This is an additional indication that the hardness profile is very similar for both scenarios, that is, with and without taking the race attribute as input to the classification models.
In Table 2 we focus on the subset of hard to classify instances and show the summary statistics of recidivist and non-recidivist instances with high IH values, for both classification scenarios, namely with and without using race as an input attribute. Plots with the distributions of the same attributes per class are presented in Appendix B. Among the instances with high IH values, the actual recidivists (class label Yes) have a lower average number of prior offenses, which leads them to be wrongly predicted as non-recidivists by the classification models. However, it is actually more interesting to check the non-recidivists (class label No) with high IH values, because they are a surrogate for FP. In practice, this can lead to the conviction of a person who is in fact innocent if the classification models are used. In this group, the instances represent, on average, younger male African-Americans with a higher average number of prior offenses than the average of the non-recidivists across the entire dataset, which also explains why they are harder to classify.
We notice that the classification algorithms continue to be biased for non-recidivist high IH instances even when the race attribute is omitted from the set of input attributes. The average profile of the hard instances among recidivists and non-recidivists is maintained, despite a reduction in the percentage of instances with race_African-American = 1. Therefore, whilst FN is most likely for individuals of the Caucasian race, FP has occurred more frequently for the African American individuals. Algorithmic fairness occurs when groups of instances pertaining to the same protected attribute have the same chance of being misclassified, that is, both groups have the same FP and FN rates (Corbett-Davies & Goel, 2018). Therefore, even when disregarding the race attribute in the classification models, there is no fairness concerning this sensitive attribute.
As an additional validation of our previous analysis, we have evaluated the FP and FN rates using an aggregated confusion matrix, taking into account the outputs of the multiple classifiers in our pool A. To build this matrix, we count, for each instance, how many algorithms in our pool achieved a good log-loss predictive performance according to the threshold specified in Sect. 3.4. This information can be obtained by summing up the rows of the binarized performance matrix. If the majority of the algorithms achieved a bad log-loss predictive performance for an instance, an incorrect prediction is recorded. By relating this information with the true class of the instance, we can identify whether it was a FP or a FN. Next, we decomposed the FP and FN errors by race, as shown in Table 3. Confirming what was observed in our previous lasso selections, FP is more common among the African American individuals, even when the race attribute is explicitly disregarded. The FN rates are slightly higher for the Caucasian individuals, but this difference is smaller than that observed for the FP rate.
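The aggregated confusion matrix described above can be sketched as follows. The variable names and the one-hot race columns are assumptions, and the majority rule mirrors the description in the text rather than the exact PyHard implementation.

```python
# A sketch of the aggregated confusion matrix, assuming a binary performance
# matrix `bin_mat` (instances x algorithms) where 1 means the algorithm reached
# a good log-loss for that instance. Names are illustrative, not PyHard's.
import numpy as np
import pandas as pd

def aggregated_errors(bin_mat: np.ndarray, y_true: np.ndarray) -> pd.Series:
    """y_true: 1 = recidivist, 0 = non-recidivist."""
    n_algos = bin_mat.shape[1]
    # An instance counts as misclassified if the majority of algorithms
    # failed to reach a good log-loss for it.
    wrong = bin_mat.sum(axis=1) < n_algos / 2
    fp = wrong & (y_true == 0)   # predicted recidivism, actually no recidivism
    fn = wrong & (y_true == 1)   # predicted no recidivism, actually recidivism
    return pd.Series({"FP": int(fp.sum()), "FN": int(fn.sum())})

# Decomposition by race group (race columns assumed one-hot encoded):
# for group, mask in {"African-American": df["race_African-American"] == 1,
#                     "Caucasian": df["race_Caucasian"] == 1}.items():
#     idx = mask.to_numpy()
#     print(group, aggregated_errors(bin_mat[idx], y_true[idx]).to_dict())
```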
We now further refine our analysis by focusing the Lasso selection tool on two specific regions of the ISA of the COMPAS dataset without race as an input attribute (projection shown in Fig. 8b). The reason for analyzing this projection only is that the recommended procedure for avoiding undesirable biases is to delete protected attributes from the input set of the ML techniques. Two selections of the given ISA are examined: (i) the easiest to classify instances (with z_1 coordinates between 1 and 2 and z_2 coordinates between -3 and -2); and (ii) the hardest to classify instances (with z_1 coordinates between -4 and -3 and z_2 coordinates between 1 and 2). These areas were chosen based on the fact that the more difficult instances are placed towards the upper left corner of the IS, whilst the easier instances are in the bottom right corner.
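A minimal sketch of these coordinate-based selections, assuming the IS coordinates are stored in columns named z_1 and z_2 (the column names are assumptions):

```python
# Coordinate-based selections over the 2-D instance space; `is_df` is assumed
# to hold one row per instance with its IS coordinates.
import pandas as pd

def select_regions(is_df: pd.DataFrame) -> tuple[pd.DataFrame, pd.DataFrame]:
    """Return (easiest, hardest) selections by IS coordinates."""
    easiest = is_df[is_df["z_1"].between(1, 2) & is_df["z_2"].between(-3, -2)]
    hardest = is_df[is_df["z_1"].between(-4, -3) & is_df["z_2"].between(1, 2)]
    return easiest, hardest
```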
Fig. 10 Boxplots of the raw feature values referring to prior and juvenile offenses for three sets of instances of the COMPAS dataset without race as an input attribute: the easiest to classify (in blue), the hardest to classify (in red) and all the other instances (in green)
The easiest data excerpt contains 27 instances, all of whom are male recidivists (class label Yes) under 45 years of age. 88.9% of the individuals in this group are African Americans. They also have a quite high number of prior offences, which varies between 12 and 28 (median of 18). Therefore, the ML algorithms in our pool found it easier to correctly classify young male recidivist individuals with a high number of prior offences, the majority of whom are also African Americans.
The hardest selection has 25 male non-recidivist individuals (class label No). Most of them are also under 45 years of age, except for one individual of 51 years of age (who has a high number of prior offenses and two juvenile convictions). The number of prior offences varies between 1 and 21 (median of 8) and 76.0% are African American individuals. All individuals in the hardest set also had prior juvenile convictions registered, while this characteristic was not observed for the majority of the easiest to classify instances (21 out of the 27 easiest instances had no juvenile offenses registered). Therefore, in general the algorithms faced difficulties in classifying young male individuals who were not labeled as recidivists but have prior offences registered in their adult or juvenile criminal records. The race attribute had a lower prominence in this group than that verified for the easiest set. What stands out the most in the hardest to classify observations are the unexpected raw feature values which contradict the overall patterns of the non-recidivist class, especially regarding the number of prior crimes committed, the number of juvenile felonies, the number of juvenile misdemeanors and the number of other prior juvenile convictions. Figure 10 contrasts the raw feature values referring to prior and juvenile offenses of the easiest (in blue), hardest (in red) and all the other instances (in green) of the COMPAS (without race) dataset. The plot in Fig. 10b sums up the counts of the juvenile felony, juvenile misdemeanor and other juvenile conviction attributes as a total count of juvenile offenses registered for each individual. As shown in Fig. 10a, the easiest data excerpt contains individuals with high numbers of prior crimes (all recidivists) and the "others" data excerpt has in general smaller numbers of prior offenses (although there are many outliers, since this set contains individuals from both recidivist and non-recidivist classes). The hardest to classify data excerpt has an intermediate number of prior offenses registered, despite being originally labeled as non-recidivists. Taking the total number of juvenile offenses (Fig. 10b), the contrast is more evident. Whilst the "easiest" and "others" data excerpts have in general a low number of juvenile offences (with the first, second and third quartiles all at zero), the hardest to classify individuals clearly show a pattern of more juvenile offences registered.
Fig. 11 Boxplots of some of the meta-feature values for three sets of instances of the COMPAS dataset without race as an input attribute: the easiest to classify (in blue), the hardest to classify (in red) and all the other instances (in green)
The values of some of the meta-features (boxplots in Fig. 11) for the easiest to classify (in blue), hardest to classify (in red) and other (in green) sets of instances also evidence the contradictory patterns of the hardest to classify instances. The hardest set shows: high CL values (Fig. 11a), indicating the instances in this set have a low likelihood of belonging to their class; high DCP values (Fig. 11b), indicating that such observations tend to be placed in disjuncts with a majority of examples from the other class; and very high kDN values (median at the maximum of 1.0, Fig. 11c), indicating that most of these observations are close to and surrounded by instances from the other class. In contrast, the easiest to classify instances show low hardness measure values, whilst the other instances have intermediate hardness measure values. All of these results corroborate that the hardest to classify instances are either noisy or very atypical. As observed in our refined analysis, Rudin et al. (2020) also report data inconsistencies in COMPAS and the risk of dangerous individuals being released to society.
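For concreteness, the sketch below implements the standard definition of the kDN measure discussed above: the fraction of an instance's k nearest neighbours that belong to a different class. The choice k = 5 is an assumption and may differ from the value used by PyHard.

```python
# k-Disagreeing Neighbors (kDN) hardness measure: for each instance, the share
# of its k nearest neighbours whose class label differs from its own.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def k_disagreeing_neighbors(X: np.ndarray, y: np.ndarray, k: int = 5) -> np.ndarray:
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)          # first neighbour is the point itself
    neigh_labels = y[idx[:, 1:]]       # shape (n_samples, k)
    return (neigh_labels != y[:, None]).mean(axis=1)
```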
The analysis performed on the COMPAS dataset shows how ISA can be used for analyzing algorithmic bias, through the inspection of the predominant characteristics of groups of instances which are hard to classify and act as surrogates for FN and FP. In particular, for the COMPAS dataset we notice that even when the sensitive attribute is omitted from the input set, the hardness profile of the instances is similar to that observed when the whole set of attributes is used. In both cases, the FN and FP rates differ for different populations, pointing to an intrinsic bias regarding the race attribute. But a closer inspection of the hardest to classify instances, when contrasted with the easiest to classify instances, also evidences that other types of problems can be present in this dataset, such as data inconsistencies. It would be worth investigating how the usage of fairness-enhancing strategies (Friedler et al., 2019) and data cleansing strategies can change the hardness profiles identified in the COMPAS dataset when using ISA.
Case study: ISA of datasets with label noise
Real world datasets are subject to data quality issues, such as the presence of noise due to errors in data collection, storage and transmission. According to Zhu & Wu (2004), a classification dataset has two possible types of noise: in the predictive input features or in the target attribute. The latter type of noise, also known as label/class noise, usually implies more severe problems for supervised ML techniques, since they rely on optimizing some loss function based on reproducing the labels of the training data (Garcia et al., 2015). Here we study the effects of the presence of different levels of label noise on the ISA of a dataset. For this, we use a controlled synthetic dataset for which the absence of label noise can be guaranteed and progressively introduce random label noise to it, at increasing rates. Figure 12a shows the base dataset used in these experiments. It is generated using the mlbench package from the R language and consists of two classes described by Gaussians with a spread of 0.8, with 250 data items each. The classes are fairly linearly separable, with a small degree of overlap. Figure 12b presents the same dataset with data items colored by IH values. We can notice that the hardest instances in the original space are those in the boundary and overlap of the classes.
Taking the dataset from Fig. 12a as base, we randomly flip the class labels at the following rates: 5%, 10%, 20%, 30%, 40%, 50%, 70%, 90% and 100%. Since the choice of the examples to be corrupted is random, this process is repeated 10 times for each noise rate. Although high noise rates are quite unrealistic, studies suggest that even controlled real datasets have a noise rate of at least 5% (Maletic & Marcus, 2000).
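A minimal sketch of this label-corruption procedure, assuming binary labels encoded as 0/1:

```python
# Randomly flip the class labels of a chosen fraction of instances, repeated
# for several seeds, as in the experiments described above.
import numpy as np

def flip_labels(y: np.ndarray, noise_rate: float, seed: int) -> np.ndarray:
    rng = np.random.default_rng(seed)
    y_noisy = y.copy()
    n_flip = int(round(noise_rate * len(y)))
    flip_idx = rng.choice(len(y), size=n_flip, replace=False)
    y_noisy[flip_idx] = 1 - y_noisy[flip_idx]
    return y_noisy

# Ten repetitions per noise rate, mirroring the protocol above.
# noisy_sets = {rate: [flip_labels(y, rate, s) for s in range(10)]
#               for rate in [0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 0.9, 1.0]}
```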
The ISA is run for the original dataset and for the corrupted versions, using default hyperparameter values for the classifiers in order to reduce the computational burden. Next, we computed the average areas of the easiness footprints and of the footprints of each of the seven classification techniques considered in this work. Figure 13a shows the average easiness footprint area registered in the instance spaces produced for the different noise levels. Recalling that the easiness footprint corresponds to regions of the IS containing instances that are easier to classify, as expected the easiness footprint areas decrease as more noise is introduced, until reaching the 40-50% noise levels. For the 40 and 50% noise levels, the footprint area of instances for which a consistently good performance is reached is null. This clearly shows how the IH values and footprint areas are able to reflect the harmful effects of label noise on classification performance. From this point on, the classification problems become complementary to those of the lower noise levels. In the extreme case of a 100% noise level, the labels of all data points are flipped, so that the classes are inverted and the classification problem is therefore equivalent to that of the noiseless version.
Fig. 13 Instance easiness and classifiers' footprint areas for different noise levels
The same effect can be verified in the footprint areas of the classification techniques (Fig. 13b). Indeed, the regions of the IS where the techniques have a good classification performance are reduced for increasing noise levels until the 50% level is reached, when the areas begin to increase again. But it is also interesting to notice that some of the classification techniques are more robust than others to the presence of the different noise levels. For the original dataset, all classification techniques are quite effective, as we have a simple classification problem. This includes the linear models (Logistic Regression and linear SVM), since this is a fairly linearly separable problem. For increasing noise levels, the performance of the ensemble techniques (Bagging, Gradient Boosting and Random Forest) degrades to a large extent, whilst the linear predictors remain quite robust. Boosting algorithms focus on hard instances in their iterations, which can justify the loss of performance, since the hard instances will correspond to the noisy ones. Interestingly, though, Boosting was far more affected by a noise level of 40% than by 50% of label noise. One must observe, however, that all algorithms have a large decrease in footprint area for high noise levels, indicating that they perform well on very few instances of the IS. The linear predictors remained quite robust compared to the other algorithms for most of the noise levels, despite their simplicity. Bagging, on the other hand, was in general the most affected algorithm across all noise levels as far as the footprint area is concerned.
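As a rough illustration of what a footprint area measures, the sketch below computes the area of the convex hull of the points where performance is good. The actual MATILDA/PyISpace footprint construction additionally applies density and purity constraints, so this convex-hull area is only a crude proxy.

```python
# A crude proxy for a footprint area, assuming `z` holds the 2-D IS coordinates
# of the instances and `good` flags those on which performance is good.
import numpy as np
from scipy.spatial import ConvexHull

def footprint_area(z: np.ndarray, good: np.ndarray) -> float:
    pts = z[good]
    if len(pts) < 3:                 # a hull needs at least three points
        return 0.0
    return ConvexHull(pts).volume    # in 2-D, .volume is the enclosed area
```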
It would be worth evaluating in the future how the employment of noise cleansing techniques affects the ISA of a noisy dataset, by monitoring the differences in footprint areas between the noisy and clean versions of the same dataset.
Conclusion
In this paper we have presented an approach to build and analyze an embedded space of instance hardness for a given classification dataset. We have also launched PyHard, a new analytical tool intended to address a gap in meta-learning studies regarding instance hardness for a single dataset. The problem was introduced by recalling the ISA framework and linking its formulation to our problem setting. ISA plays a central role here, finding a transformation that reduces the dimensionality of a meta-dataset into a space with linear trends with respect to the difficulty of the individual instances and traces regions of good performance for different classification algorithms.
We have shown projections of sample datasets, including a real COVID prognosis dataset, and insights that can be obtained through their ISA visualization and inspection, such as highlighting observations with potential quality issues and making the strengths of different classification techniques more explicit. The tool also includes functionalities to better support the end-user in the analysis of their datasets, including visualizing feature distributions for different selections of the dataset. Although a visual inspection does not offer irrefutable proof, it can give valuable insights and guide some descriptive analysis, as demonstrated.
Regarding the computational cost of our analyses, the step which demands the most time is tuning the hyperparameter values of the classification techniques. This hyperparameter tuning step can easily be withdrawn from the analysis, although it is more interesting to evaluate instance hardness considering the best performance achievable by each classification model for a particular dataset. One may also run the predictive evaluation beforehand using another set of desired classifiers and meta-features and use PyISpace to obtain the IS projections from their own meta-dataset, while the PyHard visualization application can be used to inspect and interact with the obtained IS afterwards.
As future work, we plan to consolidate the visualization tool and validate its usage in the analysis of datasets of increasing complexity levels and scale. We can also study how the ISA reflects the effectiveness of data pre-processing approaches for improving data quality. This is the case for data cleaning approaches for dealing with label noise, proper missing data imputation and removing possible sample biases. Dealing with other types of problems, such as learning with imbalanced datasets, is also of interest. We also intend to assist the user in relating classification performance to the meta-feature values by obtaining rules aimed to describe situations where each algorithm shows good predictive performance.
More hardness meta-features can be devised and added to the tool too. In fact, most of the meta-features (hardness measures) used in this work rely on class overlap as the main source of difficulty for classification. But other perspectives can also be considered, such as the density and structure of the input space. One promising approach is to model the data as a proximity graph from which centrality-based features can be extracted.
Another worthwhile strategy in order to validate the hardness embedding for a given dataset is to separate a validation set and build the IS projection only on the remaining data points. By projecting the left-out validation instances in the IS built, we can assess whether hard/easy instances are placed in proper regions of the IS and how the included ML algorithms are expected to behave in their classification.
While we focused on 2-D projections for obtaining visual insights and delineating the footprints of the algorithms more easily, it is also possible to generate hardness embeddings of higher dimensions. The higher the dimension, the less information on the original meta-features values is lost, but the visualization appeal is also hindered. The usefulness of such higher-dimensional embeddings needs to be characterized and understood in future work.
Finally, we have not fully explored all the functionalities included in the MATILDA tool in this work. For instance, there is a module for automatic algorithm selection for different regions of the instance space that was not included in our analyses. This information is interesting to better characterize the domains of competence of the algorithms and will be explored in the future. We can also consider ways to generate new data instances at targeted regions of the instance space, which can be useful for data augmentation, including new instances with increased measures of difficulty to drive algorithmic advances.
Appendix A Proof of proposition
Proposition 1 (cross-entropy bounds). For any classification problem with C classes there is a lower bound L_lower and an upper bound L_upper for the cross-entropy loss (aka log-loss) such that: if logloss(i) < L_lower, the prediction was correct; if logloss(i) > L_upper, the prediction was incorrect; and if L_lower ≤ logloss(i) ≤ L_upper, the prediction can be either correct or incorrect, where logloss(i) is the log-loss of instance i. Specifically, these bounds can be set as L_lower = −log(1/2) = log 2 and L_upper = −log(1/C) = log C.
Proof. Given a multiclass setting consisting of C classes, the outcome of a classifier is the predicted probability vector [p_1, p_2, …, p_C], and the predicted class is defined as argmax_j p_j. The log-loss of an instance i with true class c is

$$\mathrm{logloss}(i) = -\sum_{j=1}^{C} y_{j,c}\,\log p_j,$$

where y_{j,c} := I_{j=c} indicates the true class c.

We first prove that if the classifier succeeds in correctly predicting the class of instance i, then logloss(i) ≤ L_upper, and that the value of L_upper is −log(1/C). Without loss of generality, suppose j = 1 is the correct class, since the classes can always be reordered so that each one of them becomes the first in the set. In that case, argmax_j p_j = 1, which implies that

$$p_1 \geq p_2,\quad p_1 \geq p_3,\quad \ldots,\quad p_1 \geq p_C.$$

Summing all those inequalities results in

$$(C-1)\,p_1 \geq \sum_{j=2}^{C} p_j.$$

On the other hand, ∑_j p_j = 1, so that ∑_{j=2}^{C} p_j = 1 − p_1. Therefore,

$$(C-1)\,p_1 \geq 1 - p_1 \;\Rightarrow\; p_1 \geq \frac{1}{C} \;\Rightarrow\; \mathrm{logloss}(i) = -\log p_1 \leq -\log\frac{1}{C} = L_{upper}.$$

However, if logloss(i) < L_upper it does not necessarily imply that the instance i was correctly classified. We show this by counterexample: take the particular predicted probability vector [1/2 − ε, 1/2 + ε, 0, …, 0], and define the right class as c = 1. For this vector, the classifier predicts class 2, since it has the highest probability. The log-loss value is

$$\mathrm{logloss}(i) = -\sum_j y_{j,c}\log p_j = -y_{1,c}\log p_1 = -\log\!\left(\tfrac{1}{2} - \varepsilon\right).$$

If we choose 0 < ε < 1/2 − 1/C, which is always possible for C ≥ 3, then −log(1/2 − ε) < −log(1/C) = L_upper, even though the prediction is incorrect.

Specifically for binary problems with only two classes, if logloss(i) < −log(1/2), then the probability of the true class exceeds 1/2 and the true class necessarily has the highest predicted probability, so the prediction is correct. So, in the particular case of binary classification problems, L_lower = L_upper = log 2.

To find L_lower in the general case, we first show that a multiclass problem can be reduced to a binary classification problem in the sense of the log-loss metric. Herewith, the vector [p_1, p_2, …, p_C] is equivalent to [p_1, p′_2], with p′_2 = ∑_{j=2}^{C} p_j. In both cases, the log-loss value is the same. Thus, we assume L_lower = −log(1/2) and prove it by contradiction. Assume that logloss(i) < −log(1/2), that the correct class is c = 1, and that ∃ p_k : p_k > p_1 (classification error). Then

$$-\log p_1 < -\log\tfrac{1}{2} \;\Rightarrow\; p_1 > \tfrac{1}{2} \;\Rightarrow\; p_1 + p_k > 1,$$

which contradicts ∑_j p_j = 1. Therefore, L_lower = −log(1/2). ◻
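The bounds can also be checked numerically; the snippet below reproduces the counterexample from the proof for C = 4.

```python
# Numerical illustration of Proposition 1, computing the log-loss of an
# instance from the predicted probability of its true class.
import numpy as np

def log_loss_instance(proba: np.ndarray, true_class: int) -> float:
    return float(-np.log(np.clip(proba[true_class], 1e-15, 1.0)))

C = 4
L_lower, L_upper = np.log(2), np.log(C)

# Correct prediction: the true class has the largest probability.
p_correct = np.array([0.4, 0.3, 0.2, 0.1])
assert log_loss_instance(p_correct, true_class=0) <= L_upper

# Counterexample from the proof: log-loss below L_upper, prediction wrong.
eps = 0.05                                 # any 0 < eps < 1/2 - 1/C
p_wrong = np.array([0.5 - eps, 0.5 + eps, 0.0, 0.0])
assert log_loss_instance(p_wrong, true_class=0) < L_upper
assert p_wrong.argmax() != 0               # predicted class (index 1) != true class (index 0)
```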
Appendix B Additional figures
Additional figures from the analysis of the COMPAS dataset are presented here. They show the Lasso selections of easy and hard instances (Fig. 14) and distributions of some other attributes besides race, namely number of priors (Fig. 15a), age (Fig. 15b) and sex (Fig. 15c). Median values are represented by vertical dashed lines. A summary of these results is presented and discussed in Sect. 4.2 using Table 2.
Fig. 15 Distribution of different feature values for the entire COMPAS dataset, low IH and high IH data points, with race as an input attribute
Recent Advances in Surface Plasmon Resonance Optical Sensors for Potential Application in Environmental Monitoring
Surface plasmon resonance (SPR) optical sensors are among the most promising sensors for use in various fields of sensing. Owing to their advantages, SPR optical sensors have attracted tremendous attention, especially for monitoring environmental pollutants, e.g., heavy metal ions, phenol, and pesticides. To further enhance their sensitivity and selectivity towards a specific target pollutant, the development of various active layers on top of a metal surface has been explored over the past two decades. This paper provides up-to-date information on advances in SPR optical sensors for detecting heavy metals, phenol, and pesticides, which have been discussed and summarized in chronological order. The systematic information on the detection of these pollutants using SPR optical sensors will give researchers guidelines for future developments in this area.
Introduction
Optical sensors are known to be among the most versatile sensing tools, with the ability to detect a wide range of targets such as temperature, pressure, radiation level, force, electric field, pH, strain, chemical concentration, displacement, liquid level, humidity, magnetic field, acoustic field, and many more. An optical sensor works by measuring the changes in a light beam due to the alteration of the intensity of light, which may change its optical properties of phase, wavelength, polarization, and spectral distribution. In short, it is a device that measures the quantity of light and translates it into a form that is readable by the instrument. Surface plasmon resonance (SPR) sensors are versatile optical sensors that have received considerable attention since the early 1990s. The SPR phenomenon was first observed by Wood in 1902, when he observed a pattern of white and dark bands as he shone a monochromatic polarized light on a mirror with a diffraction grating on its surface. (1) A complete physical interpretation of this phenomenon was not available until 1968, when Otto and Kretschmann independently reported the excitation of surface plasmons. Since then, SPR has slowly made its way into the limelight owing to its practical applications in sensitive sensors, first emerging in a paper on gas sensing and biosensing in 1983. (2) The Kretschmann configuration is the most common setup used for SPR optical sensors. In this configuration, monochromatic and p-polarized light is used to excite a surface plasmon that propagates along a metal surface. At a certain angle known as the resonance angle, the intensity of the reflected light decreases owing to resonance, which occurs when the momentum of the surface plasmon wave is equivalent to that of the incident light. An SPR optical sensor works by measuring the refractive index near the metal surface. Any change in refractive index will also change the resonance angle. Hence, SPR can be used as an optical sensor. However, SPR cannot be used to distinguish solutions with the same refractive index. Over the past two decades, research has been carried out to improve the sensitivity of SPR sensors for sensing applications either by the modification of the SPR system or by combination with various sensing methods. (3)(4)(5)(6)(7)(8) Another approach has been to modify the metal thin film surface with an active layer or a sensing element. The development of an improved active layer is crucial as it determines the sensitivity, selectivity, and other parameters of the optical sensor. SPR sensors with different active layers have been studied for various sensing applications including environment monitoring and clinical diagnosis. (9)(10)(11) This paper reviews the development of active layers for SPR optical sensors as potential optical sensors for sensing environmental pollutants including heavy metal ions, phenol, and pesticides, as illustrated in Fig. 1. The detection limit, sensitivity, and selectivity of these active layers in combination with SPR sensing for environmental pollutants have been reviewed and summarized.
Advantages of Using SPR for Sensing
The most practical application and commercial use of SPR are based on the Kretschmann configuration, where a gold thin film is placed at the interface of two dielectric media, i.e., a prism and air or the solution of interest. The gold thin film with a thickness of 50 nm is deposited on a 24 × 24 mm glass cover slip. The SPR setup (Fig. 1) consists of a He-Ne laser with a wavelength of 632.8 nm, a prism with a high refractive index (greater than 1.60), an optical stage driven by a stepper motor (Newport MM 3000), a photodetector, and a lock-in amplifier (SR 530). The SPR optical sensor measures the resonance angle of the reflected light from the gold thin film. Any change in the refractive index at the thin film surface will also change the resonance angle. By modifying the gold thin film surface with an active layer, any binding interaction between target analytes and the active layer can be detected rapidly in real time. This also gives the SPR optical sensor many advantages that include cost-effectiveness, simple sample preparation, no need for a reference solution, label-free sensing, excellent sensitivity, and high selectivity towards target analytes. Moreover, information on the concentration of the analytes, the kinetics, and the affinity of the interaction can be obtained from the binding rates and levels of the SPR sensor. Other existing optical sensors, such as colorimetric, electrochemiluminescence, fluorescence, and optical fiber sensors, have great performance in detecting low concentrations of environmental pollutants, (12,13) yet have one or more disadvantages among high cost, cumbersome operation, and time-consuming sensing. SPR also offers more advantages than conventional methods, such as atomic absorption spectroscopy (AAS), anodic stripping voltammetry (ASV), inductively coupled plasma-mass spectrometry (ICPMS), and X-ray fluorescence (XRF) spectroscopy, which have limitations including time-consuming sensing, complicated processing, and high-cost instruments. (14) It is thus believed that SPR is more practical for the in situ remote sensing of environmental pollutants and chemical and biological analyses.
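For readers who want to reproduce the resonance condition numerically, the sketch below evaluates the standard two-interface matching condition for the Kretschmann configuration. The gold permittivity at 632.8 nm and the dielectric constants are assumed literature-style values rather than values taken from the reviewed papers, and the finite thickness of the 50 nm film is ignored.

```python
# Resonance condition in the Kretschmann configuration: the in-plane wavevector
# of the incident light matches the real part of the surface-plasmon wavevector.
import numpy as np

def spr_angle_deg(n_prism: float, eps_metal: complex, eps_dielectric: float) -> float:
    """Resonance angle (degrees) for a prism/metal/dielectric stack."""
    k_spp = np.sqrt(eps_metal * eps_dielectric / (eps_metal + eps_dielectric))
    sin_theta = np.real(k_spp) / n_prism
    return float(np.degrees(np.arcsin(sin_theta)))

eps_gold_633nm = -11.6 + 1.2j                        # assumed value for gold at 632.8 nm
print(spr_angle_deg(1.60, eps_gold_633nm, 1.0))      # dielectric = air
print(spr_angle_deg(1.60, eps_gold_633nm, 1.77))     # dielectric = water (n ~ 1.33)
```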
Heavy metal ion detection
Heavy metal ions are very harmful to biological systems and can lead to short-term and long-term diseases in humans even in trace amounts. Therefore, it is crucial to detect trace amounts of heavy metal ions in environmental water through continuous monitoring. Heavy metal ions are present in many industries such as the plating, machine, and chemical industries. SPR sensors are optical sensors with many advantages such as simple sample preparation, rapid measurement, and cost-effectiveness. The most important finding in the development of SPR sensors has been the development of an active layer on top of a gold thin film surface to improve the sensitivity of sensors for heavy metal ion detection. Work on the development of the active layer for metal ion sensing from 2001 to 2017 was previously discussed in detail. (15) A wide range of materials with positive-ion adsorption properties such as chitosan, carbon, graphene oxide, metal oxide, conducting polymers (e.g., polypyrrole and polyaniline), and ionophores have been exploited as base materials for active layer development. As a result, the developed active layers have improved the sensitivity of SPR optical sensors for metal ion detection down to the ppb level. Nevertheless, studies on novel active layers for SPR sensing are still ongoing to further improve the sensitivity and selectivity of SPR sensors. For instance, Saleviter et al. (2018) deposited a 4-(2-pyridylazo)resorcinol-chitosan-graphene oxide composite on top of a gold thin film as an active layer for an SPR optical sensor. This material has high sensitivity to Co2+. Their optical sensor was able to detect Co2+ at a concentration as low as 0.01 ppm with a sensitivity of 0.00069° ppm−1. (49) In the same year, Saleviter et al. employed a different active layer, namely, a cadmium sulfide quantum dot-graphene oxide-chitosan composite, on a gold thin film in a further attempt to detect low Co2+ concentrations. (50) The optical sensor was also able to detect Co2+ at a low concentration of 0.01 ppm. In another interesting work, Daniyal and coworkers (2018) developed an SPR optical sensor by incorporating a nanocrystalline cellulose-graphene oxide nanocomposite into an SPR system. (51,52) This optical sensor was able to detect Cu2+ and Ni2+ at a concentration of 0.01 ppm and had sensitivities of 3.271 and 1.509° ppm−1 for Cu2+ and Ni2+, respectively.
In 2019, Ramdzan et al. attempted to sense one of the most toxic metal ions, Hg 2+ . (53) They developed a chitosan/carboxyl-functionalized graphene quantum dot thin film and combined it with an SPR optical sensor. The sensor has high potential for sensing Hg 2+ with a detection limit of 0.5 ppm. During the same year, Roshidi et al. also investigated Hg 2+ detection (54) using a graphene oxide/poly(amidoamine) dendrimer as an active layer, which was very good for sensing Hg 2+ , with a reported detection limit of 1 ppm. They then developed another material, namely, a chitosan-poly(amidoamine) dendrimer, to detect Pb 2+ using an SPR optical sensor. (55) The incorporation of the chitosan-poly(amidoamine) dendrimer in the SPR optical sensor improved its performance in sensing Pb 2+ , where the detection limit was 0.1 ppm. On the other hand, the use of a hydrous ferric oxide-magnetite-reduced graphene oxide active layer was reported by Al-rekabi et al. (56) The thin film they developed was used to detect As 3+ and As 5+ at a very low concentration of 0.1 ppb. Their SPR optical sensor also had a higher sensitivity to As 3+ than to As 5+ . They also investigated the selectivity of the developed active layer. The sensor had high selectivity towards As 3+ when tested with a mixture of other heavy metal ions that included Cr 2+ , Ni 2+ , Zn 2+ , and Mn 2+ . Also in 2019, Daniyal et al. broadened their research from that in the previous year to detect other metal ions. (57) Their SPR optical sensor using nanocrystalline cellulose-graphene oxide as an active layer was found to also detect Zn 2+ at a concentration of as low as 0.01 ppm in addition to Cu 2+ and Ni 2+ .
In the following year, Zhao et al. deposited a germanium selenide (GeSe)-chitosan composite on top of a gold thin film. (58) Their SPR optical sensor was able to detect Pb 2+ at a concentration of 0.097 nM. Soon after that, Wu et al. demonstrated the sensing of Pb 2+ at a much lower concentration. (59) Using DNAzyme-gold nanoparticles as an active layer, their SPR optical sensor was used to detect Pb 2+ at a concentration of 80 pM and also had high selectivity; Pb 2+ was distinguished from other metal ions that included Co 2+ , Cd 2+ , Cu 2+ , Fe 2+ , Ba 2+ , and Ni 2+ . More recently, Anas et al. investigated Fe 3+ detection using SPR. They modified a gold thin film with CTAB/hydroxylated graphene quantum dots to improve the SPR sensitivity, (60) and their SPR optical sensor was able to detect Fe 3+ at a concentration of 0.18 µM. In an additional work, they reported that their sensor can also be used for sensing Zn 2+ and Ni 2+ with a detection limit of 1.8 µM. (61) Bakhshpour and Denizli (2020) deposited a Cd(II) ion-imprinted (IIP) thin film on three different thin films comprising poly(hydroxyethylmethacrylate) (pHEMA), pHEMA-based nanoparticles (poly-NPs), and gold nanoparticles (AuNPs) for the detection of Cd 2+ . (62) The IIP pHEMA, poly-NPs, and AuNPs were able to detect Cd 2+ with detection limits of 4.45, 0.89, and 0.089 nM, respectively. Moreover, all three thin films had high selectivity towards Cd 2+ in the presence of other metal ions (Cr 2+ , Pb 2+ , and Zn 2+ ). Later that year, Sadrolhosseini et al. improved the sensitivity of the SPR optical sensor by depositing polypyrrole-graphene quantum dots on a metal thin film surface. (63) Owing to the excellent properties of polypyrrole-graphene quantum dots, the SPR sensitivity was enhanced and their sensor detected As 3+ , Hg 2+ , and Pb 2+ with detection limits of 0.67, 0.25, and 0.24 nM, respectively. In summary, SPR optical sensors have encouraged the development of new materials as the active layer for metal ion sensing. The active layer is very important as it can enhance the sensitivity and selectivity of the sensors. Table 1 shows all the recent active layers that have been developed for sensing heavy metal ions using SPR from 2018 to 2020.
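As an illustration of how the sensitivities and detection limits quoted above are typically extracted, the sketch below fits a linear calibration curve of resonance-angle shift versus concentration. The data points are arbitrary placeholders, and the 3σ detection-limit convention used here is one common choice, not necessarily the one adopted in the cited studies.

```python
# Sensitivity = slope of angle shift vs. concentration; LOD from the 3*sigma
# convention applied to the residual scatter. Placeholder values only.
import numpy as np

conc = np.array([0.01, 0.1, 0.5, 1.0, 5.0])          # ppm (placeholder values)
shift = np.array([0.001, 0.01, 0.05, 0.10, 0.48])    # degrees (placeholder values)

slope, intercept = np.polyfit(conc, shift, 1)          # sensitivity in deg/ppm
residual_sd = np.std(shift - (slope * conc + intercept), ddof=2)
lod = 3 * residual_sd / slope                          # detection limit in ppm
print(f"sensitivity = {slope:.4f} deg/ppm, LOD ~ {lod:.3f} ppm")
```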
Phenol detection
Phenol is one of the toxic phenolic compounds that can be harmful to plants, animals, and humans. (64)(65)(66)(67)(68)(69) Exposure to phenol over its permissible level can lead to symptoms such as vomiting and diarrhea, and further exposure can cause kidney, lung, and liver malfunction. (70)(71)(72)(73)(74)(75) Consequently, it has drawn researchers' attention, leading to the rapid development of sensors for its detection. SPR sensors have emerged as novel sensors with high potential for detecting phenol.
The first work on the use of SPR for phenol detection was introduced by Singh et al. in 2013, who incorporated tyrosinase entrapped in polyacrylamide gel coated on a silver thin film as the sensing layer with a fiber-optic-based SPR optical sensor. A wide linear response was observed from 0 to 1000 µM and a detection limit of 38 µM was obtained. (76) Next, in 2020, Hashim et al. coated a layer of tyrosinase mixed with graphene oxide on a gold thin film and used it in a prism-based SPR sensor for sensing phenol. The shift in resonance angle had a linear relationship with the phenol concentration in the range of 0-100 µM with a detection limit of 1 µM. (77) Table 2 shows the findings on phenol detection using an SPR sensor.
Pesticide detection
Over the past few years, pesticides such as synthetic insecticides have been widely used in agriculture, medicine, and industry and by consumers. The release of untreated effluent from these industries into the environment can lead to the accumulation of toxic pesticides, endangering both humans and the environment. (78) Among the pesticides, insecticide residue analysis is a particularly important requirement to ensure food quality and safety, and protect ecosystems and humans from potential hazards. (79) SPR optical sensors have emerged as effective sensors for sensing insecticides since 2006, when sensors were first used to detect carbaryl, DDT, and profenofos. (80)(81)(82) Since then, the detection of other widely used insecticides such as chlorpyrifos, dimethoate, carbofuran, carbendazim, and fenitrothion has attracted the attention of researchers. For instance, Thepudom et al. (2018) demonstrated the detection of chlorpyrifos using SPR enhanced by photoelectrochemical sensing. AuNPs were deposited with a poly(3-hexylthiophene)-titanium dioxide (P3HT-TiO2)-functionalized gold grating layer then used to generate an SPR signal. Using the hybrid SPR enhancement system, chlorpyrifos was detected at a concentration as low as 7.5 nM. (83) In 2019, Li et al. reported the use of an SPR biosensor with an antibody-oriented assembly as an active layer to rapidly detect residues of chlorpyrifos in agricultural samples. They used a covalent-oriented strategy in which staphylococcal protein A (SPA) was covalently bonded to the surface of a gold thin film for the monitoring of chlorpyrifos residues. The SPA-modified biosensor had a low detection limit of 15.973 nM for chlorpyrifos. It also exhibited excellent specificity for chlorpyrifos in cross-reactivity studies on a series of structural and functional analogues. A later study reported an SPR biosensor for carbendazim that had enhanced performance due to the use of a Au/Fe3O4 nanocomposite as an amplifying label on the surface of a carboxymethyldextran-coated gold layer of the sensor. To realize a sensor for the real-time detection of carbendazim, the surface was further modified with a monoclonal antibody. According to their report, the sensor had good specificity in carbendazim determination when tested with benzimidazole, 2-(2-aminoethyl)benzimidazole, 2-benzimidazole propionic acid, and 2-mercaptobenzimidazole. The limit of detection obtained for carbendazim was 2.301 nM. (86) Lastly, Kant (2020) conducted a study on fenitrothion detection using SPR. They used tantalum(V) oxide nanoparticles sequestered in a nanoscale matrix of reduced graphene oxide as the sensing layer. Instead of a gold thin film, the sensing layer was deposited on top of a thin layer of silver. The limit of detection obtained for fenitrothion was 0.038 µM. The selectivity for fenitrothion was assessed by comparing the shift attained at the resonance wavelength corresponding to the minimum (generally the blank) and maximum concentrations of the target analyte with other interferents. The results showed that there was no appreciable influence on selectivity even when the interferents had a 10-fold higher concentration. (87) Table 3 shows all the active layers of SPR optical sensors used for the detection of insecticides such as chlorpyrifos, dimethoate, carbofuran, carbendazim, and fenitrothion in chronological order.
Conclusion
This paper has reviewed recent trends in SPR optical sensors for the potential sensing of environmental pollutants. Various modifications of a metal thin film surface with an active layer to improve the sensitivity and selectivity of the optical sensors have been discussed in detail. To conclude, SPR optical sensors have attracted interest and encouraged the development of new materials as active layers for detecting environmental pollutants owing to the advantages of SPR. The development of new active layers is very important as they determine the sensitivity, selectivity, and other parameters of optical sensors. SPR optical sensors have high potential in sensing environmental pollutants at concentrations as low as pM to µM. Moreover, the concentration of pollutants can be measured in real time with small samples. We expect that further research on SPR optical sensors will improve their sensing capabilities, enabling cost-effective, rapid, sensitive, and selective analysis to be widely used in environmental monitoring.
Vibrios from the Norwegian marine environment: Characterization of associated antibiotic resistance and virulence genes
Abstract A total of 116 Vibrio isolates comprising V. alginolyticus (n = 53), V. metschnikovii (n = 38), V. anguillarum (n = 21), V. antiquarius (n = 2), and V. fujianensis (n = 2) were obtained from seawater, fish, or bivalve molluscs from temperate Oceanic and Polar Oceanic area around Norway. Antibiotic sensitivity testing revealed resistance or reduced susceptibility to ampicillin (74%), oxolinic acid (33%), imipenem (21%), aztreonam (19%), and tobramycin (17%). Whole‐genome sequence analysis of eighteen drug‐resistant isolates revealed the presence of genes like β‐lactamases, chloramphenicol‐acetyltransferases, and genes conferring tetracycline and quinolone resistance. The strains also carried virulence genes like hlyA, tlh, rtxA to D and aceA, E and F. The genes for cholerae toxin (ctx), thermostable direct hemolysin (tdh), or zonula occludens toxin (zot) were not detected in any of the isolates. The present study shows low prevalence of multidrug resistance and absence of virulence genes of high global concern among environmental vibrios in Norway. However, in the light of climate change, and projected rising sea surface temperatures, even in the cold temperate areas, there is a need for frequent monitoring of resistance and virulence in vibrios to be prepared for future public health challenges.
non-O139 V. cholerae can also cause infections. The virulence factors of non-O1 and non-O139 include a heat-stable enterotoxin, repeat in toxin (rtx) and El Tor hemolysin (hlyA) (Kumar, Peter, & Thomas, 2010). In contrast, the pathogenicity of V. parahaemolyticus is linked to their ability to produce a thermostable direct hemolysin (TDH), or a TDH-related hemolysin (TRH), encoded by tdh and trh genes (Raghunath, 2015). For V. vulnificus, virulence is related to the production of a polysaccharide capsule and lipopolysaccharide (LPS), flagellum, hemolysin, and proteases (Roig et al., 2018). The genetic basis for human virulence is only partially known, although several studies suggest that all strains of V. vulnificus, regardless of their origin, may be able to cause infections in humans (Roig et al., 2018). Several other Vibrio spp., such as V. alginolyticus, V. fluvialis, V. mimicus, V. metschnikovii, V. furnissii, V. hollisae, and V. damsela, can occasionally cause infections in humans (Austin, 2010;Baker-Austin et al., 2018).
Vibrio infections in humans typically occur as a result of ingestion of contaminated seafood, through the handling of raw seafood or by exposure of wounds to seawater during recreation (Iwamoto, Ayers, Mahon, & Swerdlow, 2010). The human pathogenic vibrios show strong seasonality and are more abundant when the water temperature exceeds 18°C and the salinity drops below 25 ‰ (Vezzulli et al., 2013). In the last decades, an increase in infections caused by Vibrio spp. has been reported, also in colder regions of South America and Northern Europe, including Norway, where this was previously rare (Baker- Austin et al., 2016). One of the primary effects of climate change is increased sea surface temperatures (SSTs), and this may facilitate the spread of seawater associated diseases (EEA, 2017). The temperature is predicted to increase further in northern temperate waters (EEA, 2017), and new areas may become more favorable for the pathogenic vibrios.
The role of the marine environment in the development and dissemination of antimicrobial resistance is largely unknown. Vibrios are indigenous to the sea (Banerjee & Farber, 2018), and in recent years, the occurrence of resistance genes in Vibrio spp. has been examined. Genes encoding resistance to β-lactams like penA, bla (Letchumanan, Chan, & Lee, 2015), and bla VCC-1 (Hammerl et al., 2017;Mangat et al., 2016), chloramphenicol resistance genes, such as floR, catI, and catII, and several tet genes encoding resistance to tetracycline (Letchumanan et al., 2015), have been detected in Vibrio spp. Clinically important mobile resistance genes like qnrVC and qnrS have originated in Vibrio spp. (Fonseca, Dos Santos Freitas, Vieira, & Vicente, 2008). This makes Vibrio spp. a good model organism for the studying antibiotic resistance in the marine environment.
Although V. parahaemolyticus, V. cholerae, and V. vulnificus have previously been isolated from Norway (Bauer, Ostensvik, Florvag, Ormen, & Rorvik, 2006), there is limited knowledge on the prevalence of different Vibrio spp. and associated resistance and virulence markers in the Norwegian marine environment. This study aimed to examine the prevalence of different Vibrio spp. in the Norwegian marine environment and to characterize associated virulence and antibiotic resistance genes among these. We here present a detailed account of taxonomy, resistance, and virulence genes detected based on phenotypic culture-based methods and whole-genome sequence (WGS) analysis.
| Sampling
Water samples were collected from four different locations (A-D) at the west coast of Norway (Oceanic temperate zones) at five different depths (0, 2, 5, 7, and 10 m).
| Isolation of Vibrio spp.
From each water sample, three aliquots of 100-250 ml were filtered through 0.45 µm filters (Merck Millipore, Germany) using the EZ-fit Manifold 3-place system (Merck Millipore, Germany) connected to a vacuum pump. Each filter was transferred to thiosulfate-citrate-bile-sucrose (TCBS) agar (Oxoid, UK) plates and incubated at 37°C for 24-48 hrs. In addition, an enrichment step was performed in duplicate on 500 ml water by adding 50 ml concentrated (360 mg/ml) alkaline peptone water (APW) with 2% sodium chloride (NaCl). The enrichment cultures were incubated at 42°C for 18 hr. After incubation, 100 µl of the enrichment cultures was streaked on TCBS agar and incubated at 37°C for 24-48 hr. Typical colonies were picked from the plates and restreaked to obtain pure cultures.
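For reference, the counts reported later as cfu/100 ml (Figure 2c) follow from a simple normalization of the colony count by the filtered volume; the numbers below are illustrative only.

```python
# Convert a membrane-filtration colony count to cfu per 100 ml of seawater.
def cfu_per_100ml(colonies: int, filtered_volume_ml: float) -> float:
    return colonies * 100.0 / filtered_volume_ml

print(cfu_per_100ml(colonies=42, filtered_volume_ml=250))   # -> 16.8 cfu/100 ml
```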
Isolation of Vibrio spp. from fish and bivalve molluscs followed a method based on NMKL method no. 156 (NMKL, 1997). The method takes advantage of the vibrios' alkaline tolerance and halophilic properties (Vezzulli et al., 2013) and applies APW supplemented with 2% NaCl and an incubation temperature of 42°C for selective enrichment of human pathogenic species (NMKL, 1997). For isolation of Vibrio spp., TCBS is a widely used medium. The alkaline pH (8.6), bile salts, and NaCl concentration in the agar inhibit the growth of Enterobacteriaceae and Gram-positive organisms (Donovan & van Netten, 1995). From herring collected in June 2018, samples were taken from the skin with muscle, gills, and intestine. From each tissue type, 20 g was homogenized for 30 s with a stomacher in 180 ml APW with 2% NaCl and in APW with 2% NaCl supplemented with polymyxin B (250 IU/ml). The homogenate was incubated at 42 ± 1°C for 18 ± 2 hrs. After incubation, 10 µl of the enrichment cultures was streaked on TCBS agar and incubated at 37 ± 1°C for 24 ± 3 hrs.
From mackerel collected in September, samples were taken from the skin with muscle following the same protocol as described previously. Samples were also collected from gut content and homogenized in phosphate-buffered saline (PBS) (Sigma-Aldrich), and tenfold dilution series were made. From each sample, 100 µl was spread on TCBS and incubated at 37 ± 1°C for 24 ± 3 hrs. From herring collected in November, samples were collected from the skin with muscle and prepared following the same method as described previously.
From bivalve molluscs, 100 g soft tissue and intravalvular fluid from at least 10 individual bivalves were homogenized in sterile plastic bags and 20 g was transferred to new sterile bags. Enrichment followed the same protocol as for fish samples. Additionally, from the homogenate tenfold dilution series were made using peptone water (bioMerièux, France). From dilutions and undiluted samples, 100 µl was spread on TCBS and Vibrio ChromoSelect agar (VCS; Sigma-Aldrich) and incubated at 37°C for 24-48 hrs followed by a selection of typical colonies.
| Biochemical identification
Isolates were grown overnight on plate count agar (PCA) (Oxoid, UK) supplemented with 2% NaCl and characterized biochemically.
| Identification by MALDI-TOF-MS
All isolates were grown overnight on PCA supplemented with 2% NaCl and sent to the Norwegian Veterinary Institute (NVI) in Bergen for identification by matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF-MS) (Bruker, Germany). The obtained peptide mass fingerprints (PMFs) were compared to spectra in the commercial MALDI-TOF-MS database (MALDI Biotyper, Bruker, Germany) and to spectra in an in-house generated database containing spectra from Vibrio spp. known to be associated with marine fish.
| Whole-genome sequencing and sequence analysis
Eighteen isolates were subjected to whole-genome sequencing (WGS). DNA was extracted from isolates using the DNeasy Blood & Tissue kit (Qiagen, Germany). An additional lysis step was performed by resuspending the samples in 180 µl lysis buffer and incubating them at 37°C overnight. After incubation, DNA extraction was done as described by the manufacturer (Qiagen, 2006). The purity (260/280 and 260/230 ratios) and concentration of the DNA were measured using a Nanodrop ND-1000 (NanoDrop Technologies, USA) and the Qubit 2.0 broad range dsDNA kit (Invitrogen, USA).
| Species identification of WGS
Raw forward and reverse reads in the FastQ format were uploaded to The Microbial Genomes Atlas (MiGA) (Rodriguez et al., 2018) web server in the TypeMat mode. In this mode, the sequences are trimmed, assembled, and aligned to give the closest relatives found in the MiGA Reference database.
| Antimicrobial susceptibility testing
Antimicrobial susceptibility testing of isolated Vibrio spp. was conducted using a disk diffusion method.
| CarbaNP test
Isolates showing reduced susceptibility to imipenem by the disk diffusion method were grown overnight on tryptic soy agar (TSA; Merck, Germany) at 37°C and examined for carbapenemase production by the CarbaNP test as described by Dortet, Poirel, Errera, and Nordmann (2014).
| Prevalence and identification of Vibrio spp
Among the species considered to be opportunistic human pathogens (Austin, 2010; Baker-Austin et al., 2018), V. alginolyticus was isolated from water, herring, and bivalves, while V. metschnikovii was isolated from herring and water samples. On the other hand, species harboring virulence genes but not known to cause human disease, like V. antiquarius (Dahanayake, De Silva, Hossain, Shin, & Heo, 2018; Nur et al., 2015) and V. fujianensis (Fang et al., 2018), were isolated from water only. V. anguillarum is a well-known fish pathogen.
FIGURE 2 Physical parameters in seawater samples collected during herring fisheries, locations A, B, C, and D (Figure 1). (a) Measured temperature (°C). (b) Measured salinity (‰); note: missing measurement at 10 m from location B. (c) Number of colony-forming units (cfu)/100 ml water on TCBS plates incubated at 37°C for 24-48 hrs.
Global mapping of the sequenced isolates of V. alginolyticus and V. anguillarum (Figures A1 and A2) showed that Vibrio isolates from Norway had high similarity to strains from other countries and continents, including the United States and China, indicating a global presence of these strains.
| Hemolytic activity on blood agar
None of the 53 V. alginolyticus isolates displayed hemolysis on blood agar. All 38 V. metschnikovii isolates were hemolytic on both sheep and human blood. On sheep blood, five V. metschnikovii isolates were β-hemolytic, while the remaining isolates were α-hemolytic on both media.
The most prominent virulence genes detected in this study were related to hemolysins. All Vibrio species examined had genes coding for the Aeromonas-related hemolysin type III (Table 2).
| Examination of carbapenemase production
Among the 116 Vibrio isolates examined, resistance to imipenem was observed in all V. anguillarum isolates, while two V. alginolyticus isolates and one V. fujianensis isolate were intermediately susceptible to the agent. These imipenem-resistant isolates were also resistant to ampicillin but susceptible to meropenem. All but one V. anguillarum isolate (B4-12) was susceptible to cefotaxime. CarbaNP test was negative for all isolates, suggesting the absence of carbapenemase with high hydrolytic activity.
| Genetic characterization of resistance determinants
The sequenced genomes revealed the presence of β-lactamase genes, including bla CARB-like, ampC-like, and VarG-like genes, as detailed in the Discussion.
| Discussion
To the best of our knowledge, this study is the most comprehensive assessment of vibrios from the Norwegian marine environment describing the prevalence of Vibrio spp. in Norwegian pelagic fish, bivalves, and seawater, and their characteristics concerning antimicrobial resistance and virulence.
| Prevalence of Vibrio spp. in the Norwegian marine environment
The highest plate count of aquatic bacteria was observed in the water samples collected closest to the shore, where the measured temperature was highest and the salinity lowest (Location A). A total of 67% of the V. alginolyticus isolates were obtained from these samples, where the temperature was measured at above 15°C and the salinity at ≤25‰, close to the preferred conditions for vibrios (Vezzulli et al., 2013; Vezzulli, Pezzati, Brettar, Höfle, & Pruzzo, 2015). The other sampling areas lie where the seas are influenced by the North Sea and the Atlantic Ocean. As a result, the sea temperature in these areas is normally low and the salinity is high. It is well known that the human pathogenic vibrios are most abundant at elevated sea temperatures, >18°C, and at lower salinity levels, <25‰ (Vezzulli et al., 2013). This may explain the absence of the major human pathogenic Vibrio spp. in this study. The risk of increased numbers of vibrios due to elevated temperatures is greater on the east coast of Norway and closer toward the Baltic Sea (Escobar et al., 2015), where the seas are less affected by the open oceans. All Vibrio spp. isolated during this study were phenotypically susceptible to tetracycline, doxycycline, meropenem, sulfamethoxazole/trimethoprim, ciprofloxacin, florfenicol, mecillinam, and azithromycin.
| Antimicrobial susceptibility
Consistent with previous reports, a high prevalence of resistance to ampicillin was observed in all Vibrio spp. isolates in our study (Banerjee & Farber, 2018;Chiou, Li, & Chen, 2015;Hernández-Robles et al., 2016;Li et al., 1999;Pan et al., 2013), and this resistance is usually due to the presence of a bla CARB gene (Chiou et al., 2015;Li et al., 2016). The bla CARB -like genes have been found in V. cholerae predating the introduction of penicillins (Dorman et al., 2019).
In this study, the bla CARB genes were detected in V. alginolyticus, V. metschnikovii, and V. antiquarius. Genes encoding ampC β-lactamase were found in V. alginolyticus, V. anguillarum, and V. fujianensis, which conflicts with the results from phenotypic susceptibility testing, as all these isolates were susceptible to cephalosporins. This may indicate that the breakpoints used in this study are insufficient for detection of these enzymes by a phenotypic method. This also highlights the need for establishing breakpoints for environmental Vibrio species. However, differences between phenotype and genotype may also be caused by variable expression of genes in the tested isolates (Sundsfjord et al., 2004). Although all isolates in our study were susceptible to both tetracycline and doxycycline, the tetracycline enzymatic inactivation gene tet34 (Akinbowale, Peng, & Barton, 2007) and the efflux-encoding gene tet35 were frequently detected within the examined genomes in the current study.
Resistance to oxolinic acid has been reported in V. alginolyticus (Scarano et al., 2014), and the prevalence of reduced susceptibility was quite high in this study. All examined isolates of V. alginolyticus carried the qnr gene. It has been suggested that marine bacteria may constitute the origin of plasmid-mediated quinolone resistance (PMQR) genes (Poirel, Cattoir, & Nordmann, 2012) and that vibrios might act as a reservoir for these genes (Poirel, Liard, Rodriguez-Martinez, & Nordmann, 2005).
Genes encoding chloramphenicol resistance are frequently found in examined Vibrio spp. (Letchumanan et al., 2015), and in the current study, V. metschnikovii and V. anguillarum harbored the catB-like acetyltransferase able to inactivate chloramphenicol. This gene, however, does not confer resistance to florfenicol (Schwarz, Kehrenberg, Doublet, & Cloeckaert, 2004), which was the only amphenicol tested in our study. As noted above, resistance to imipenem was observed in all V. anguillarum isolates. However, none of these isolates produced positive results in the CarbaNP test, indicating a resistance mechanism other than the production of a carbapenemase, or an imipenem-hydrolyzing enzyme with a slow turnover rate (Verma et al., 2011). The observed resistance is likely caused by an alteration in porins, the presence of low-affinity penicillin-binding proteins, or overexpression of ampC (El Amin et al., 2001; Nordmann, Dortet, & Poirel, 2012; Zapun, Contreras-Martel, & Vernet, 2008). One V. anguillarum isolate carried a gene encoding a VarG subclass B1-like β-lactamase, an enzyme with the ability to hydrolyze most β-lactam antibiotics, including cephalosporins and carbapenems (Lin et al., 2017). This isolate was, however, susceptible to both meropenem and cephalosporins.
| Virulence
The hemolysins produced by V. metschnikovii are known to lyse cells from several animal species, including humans, sheep, and horses (Miyake, Honda, & Miwatani, 1988). All the V. metschnikovii isolates were α-hemolytic on tryptic soy agar (TSA) with 5% human blood and on TSA with sheep blood, except five isolates that were β-hemolytic on TSA with sheep blood. The results indicate that sheep erythrocytes are more susceptible to these hemolysins, even though a previous study showed the opposite, with human cells being more susceptible to the hemolysins produced by V. metschnikovii (Matté et al., 2007).
RTX is a pore-forming toxin found in several pathogenic Gram-negative bacteria (Lee, Choi, & Kim, 2008), while HlyA, also known as V. cholerae cytolysin (VCC), is a hemolysin and cytolysin with activity against a range of eukaryotic cells (Ruenchit, Reamtong, Siripanichgon, Chaicumpa, & Diraphat, 2017) and is found in both V. cholerae O1 and non-O1/non-O139. Cytotoxic activity has previously been described in V. metschnikovii isolated from a leg wound (Linde et al., 2004). Even though V. metschnikovii has caused infections in humans, its virulence factors are poorly described, and the presence of these genes may indicate a pathogenic potential.
Horizontal gene transfer can mediate the transfer not only of antibiotic resistance genes but also of virulence factors. V. cholerae virulence-encoding genes, for example, the zonula occludens toxin (zot), are encoded by prophages, and it has been suggested that the transfer of zot-encoding phages occurs frequently in the Vibrio community (Castillo et al., 2018). Similarly, fragments of V. cholerae pathogenicity islands have been detected in V. alginolyticus, V. anguillarum, and V. metschnikovii, indicating that important virulence genes can be present in environmental Vibrio spp. (Gennari, Ghidini, Caburlotto, & Lleo, 2012).
The API20E system relies on biochemical reactions, whereas genome-based identification compares the average nucleotide identity between two genomes (Kim, Oh, Park, & Chun, 2014). MiGA can discriminate between closely related species (Rodriguez et al., 2018), and the reference database includes a large number of genomes, including the Vibrio spp. proposed by MALDI-TOF-MS (http://microbial-genomes.org/projects/20). Hence, the results from identification by MiGA should be considered most reliable.
| CONCLUSION
To the best of our knowledge, this study presents the most comprehensive assessment of vibrios from the Norwegian marine environment, in which potentially human pathogenic species like V. alginolyticus and V. metschnikovii were detected. Although only a low frequency of multidrug-resistant isolates was observed, several clinically important resistance genes were detected in the Vibrio spp.
isolates. These environmental vibrios could act as a reservoir of resistance genes in the marine environment.
ETHICS STATEMENT
None required.
ACKNOWLEDGEMENTS
We are grateful for the samples provided for this study by the Norwegian Food Safety Agency and by the research cruises monitoring pelagic fisheries organized by Dr Arne Levsen. We thank Tone Galluzzi and Hui Shan Tung for help during the processing of samples and analyses. We also want to acknowledge Hanne Nilsen at the Norwegian Veterinary Institute for help with identification of Vibrio spp. by MALDI-TOF-MS.
CONFLICT OF INTERESTS
None declared.
FIGURE A1 ML phylogenetic inference of Vibrio anguillarum strains included in this study. The genome used as reference is shaded red, while the genomes from this study are in green. Blue dots show nodes with bootstrap values above 85%.
Modified Differential Renal Function Measurement Revised by Renal Cross Sectional Area in Children with Ureteropelvic Junction Obstruction
Purpose Diuretic 99mTc-diethylenetriaminepentaacetic acid (99mTc-DTPA) renal scans may show false-negative or false-positive results in children with ureteropelvic junction obstruction (UPJO). We evaluated whether modified differential renal function (DRF) revised by the renal cross-sectional area on imaging studies may be a more valuable predictor than conventional DRF on a renal scan for deciding on a proper time for intervention. Materials and Methods Between September 2001 and January 2008, we reviewed the diuretic renal scan results of 29 pediatric patients who underwent pyeloplasty due to unilateral UPJO. Diuretic renal scans using the standard 99mTc-DTPA protocol and imaging studies for measurement of the renal unit area were performed. Conventional DRF measurement and a modified calculation of DRF per unit area were done. Conventional DRF was classified into group I (below 40%) and group II (above 40%). Results The mean age of all patients was 42.6±52.6 months (range, 3-198 months). The mean cross-sectional areas of the UPJO kidney and of the normal contralateral kidney were 62.1±29.2 cm² and 41.3±22.5 cm², respectively (p<0.01). The conventional and modified DRF of the UPJO kidney were 45.2±9.2% and 35.2±9.5%, respectively (p<0.01). Thirteen children (62%) in group II (n=21) were reclassified into group I by the modified DRF measurement. Conclusions The modified DRF measurement calculated according to cross-sectional area showed fewer false-negative results and may be a valuable method for deciding on pyeloplasty under equivocal circumstances.
INTRODUCTION
There are controversies surrounding the role of diuretic renal scans when deciding on conservative therapy or surgery in children with ureteropelvic junction obstruction (UPJO). It remains difficult to choose an optimal time for surgery as a result of the high variability in renal function, the degree of obstruction, the extent of damage, and the potential of regeneration in a growing kidney [1,2]. In addition, the relative paucity of collagen in the neonatal renal pelvis helps to alleviate the effect of high obstructing pressure [1]. However, an unrecognized obstruction may result in renal damage and renal failure. To date, diuretic renal scans provide a reliable diagnostic tool for guiding patient management. However, the value of this investigation in children has been questioned, because of its inherently high false-positive and false-negative rates [3]. Specifically, false-negative results are clinically important because they can result in missed optimal surgical opportunities. More reliable assessment tools are therefore required to aid in decision making regarding the optimal surgical time.
We retrospectively compared a conventional differential renal function (DRF) measurement with a new DRF measurement that assesses the renal parenchymal areas from imaging studies in children who underwent pyeloplasty due to unilateral UPJO.

MATERIALS AND METHODS

From September 2001 to January 2008, 29 children underwent pyeloplasty due to unilateral UPJO, and 99mTc-diethylenetriaminepentaacetic acid (99mTc-DTPA) renal scans and other imaging studies, such as magnetic resonance imaging (MRI), computed tomography (CT), and renal ultrasonography, were performed. Diuretic renal scans were performed by using the standardized 99mTc-DTPA protocol that resulted from a discussion between the Society for Fetal Urology (SFU) and the Pediatric Nuclear Medicine Council. On the morning of the study, oral fluids were encouraged, followed by intravenous administration of 15 ml/kg of a 0.9% sodium chloride solution 30 min preceding the scan. A renal scan using 99mTc-DTPA was then performed under urinary bladder catheterization. The dosage administered was scaled for body weight and was based on an adult dose of 600 MBq. Intravenous furosemide (1 mg/kg) was given when maximum pelvicaliceal distention was observed, which usually occurred between 20 and 30 min after administration of 99mTc-DTPA [2,4]. We proposed a novel method to calculate the renal parenchymal area and correlate it with DRF as measured by renal scans. All imaging studies, such as MRI, CT, and renal ultrasonography, were viewed on the Picture Archiving & Communications System (PACS), and all area measurements were conducted with the electronic drawing tool provided by the PACS radiographic software. A dedicated urologist conducted all measurements of the renal parenchymal areas (unit areas). The unit areas of both kidneys were measured by manual tracing of the renal system, excluding the extrarenal-pelvic area (region of interest), on the PACS workstation (Fig. 1). In order to obtain reliable results, we rechecked the images three times in a magnified view (x2), and an average was taken for each set of results.

We applied the equation

DRF per unit area of UPJO (or NCK) = DRF of UPJO (or NCK) / renal parenchymal area of UPJO (or NCK),

where NCK denotes the normal contralateral kidney. The modified DRF was then calculated as

modified DRF of UPJO (or NCK) = DRF per unit area of UPJO (or NCK) × 100 / (DRF per unit area of UPJO + DRF per unit area of NCK) (Fig. 2).

The data were further analyzed with the McNemar chi-square test and a generalized estimating equation for comparison of the modified DRF group and the conventional DRF group. All statistical tests were evaluated at a 0.05 significance level. The statistical analyses were performed by using SPSS (version 12.0; SPSS Inc, Chicago, IL, USA) computer software.
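To make the calculation concrete, the following is a minimal sketch of the modified DRF formula in Python. The function name is ours, the numeric inputs are the mean values reported in this paper used purely for illustration, and the original analysis was performed in SPSS rather than in code.

```python
def modified_drf(drf_upjo, area_upjo, drf_nck, area_nck):
    """Modified DRF (%) of the UPJO kidney.

    drf_*  : conventional DRF from the diuretic renal scan (%)
    area_* : renal parenchymal cross-sectional area (cm2)
    """
    per_area_upjo = drf_upjo / area_upjo  # DRF per unit area, UPJO kidney
    per_area_nck = drf_nck / area_nck     # DRF per unit area, contralateral kidney
    return 100.0 * per_area_upjo / (per_area_upjo + per_area_nck)

# Illustration with the reported mean values: conventional DRF 45.2% and
# area 62.1 cm2 for the UPJO kidney; DRF 54.8% and area 41.3 cm2 for the
# normal contralateral kidney (NCK).
print(round(modified_drf(45.2, 62.1, 54.8, 41.3), 1))
# ~35.4%, close to the reported mean modified DRF of 35.2% (a calculation
# from group means need not reproduce the mean of per-patient values exactly)
```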
RESULTS
We reviewed the diuretic renal scan results of 29 pediatric patients (26 males and 3 females) who underwent pyeloplasty due to unilateral UPJO (23 left kidneys and 6 right kidneys). The mean patient age was 42.6±52.6 months (range, 3-198 months). Indications for pyeloplasty were recurrent urinary tract infection or flank pain (11 children), DRF of less than 40% on the affected side (8 children), progressive dilatation of hydronephrosis (7 children), and a greater than 10% decrease in DRF on serial renal scans (3 children). Children were divided into 2 groups on the basis of the results of the initial DRF: group I (n=8) had DRF less than 40%, and group II (n=21) had DRF greater than 40%. Thirteen children (62%) who initially belonged to group II (n=21) were reclassified into group I by the modified DRF measurement. In group II, 7 of 11 children (63.6%) whose modified DRF value was less than 40% had recurrent urinary tract infection or flank pain. A total of 6 of 7 children (86%) showed progressive dilatation on the serial ultrasound (Table 2). Table 3 compares the conventional DRF and modified DRF of the affected kidneys. The modified DRF measurement demonstrated higher accuracy than the conventional method in DRF assessment, with respect to signs and symptoms, reduction in renal function, and hydronephrosis. Modified DRF was statistically significantly different from conventional DRF (p<0.05). The false-negative rates of conventional DRF and modified DRF were 72.4% and 27.6%, respectively.
DISCUSSION
In the pediatric population, congenital urinary tract obstruction is the most common fetal anomaly identified in prenatal screening of pregnant women. It is one of the major causes of renal damage in young children [2,5]. Koff proposed that ureteral obstruction be defined as a functional or anatomical obstruction of urine flow from the renal pelvis to the ureter that results in renal damage or manifests as clinical symptoms such as recurrent urinary tract infection and flank pain when left untreated [5]. It is well known that the glomerular filtration rate (GFR) is lower in newborns than in older children, and the GFR increases several times during the initial 6 months of life. In this period, untreated obstruction can lead to early renal atrophy and permanent loss of renal function [6][7][8]. In addition, renal immaturity may lead to misinterpretations during preoperative and postoperative evaluations. Diuretic renal scans have become a popular method for differentiating between obstructive and nonobstructive hydronephrosis [3,9,10]. However, the value of this investigation in children has been questioned as a result of the inaccurate results it entails [3]. To obtain maximum benefit from diuretic renal scans, intravenous hydration should be combined with diuretic administration in order to maximize urine output. Factors such as adequate hydration and diuretic use are crucial in overcoming the reservoir or 'mixing chamber' effect, which may simulate obstruction in dilated but otherwise unobstructed systems. Consequently, standardized investigation protocols are required with the diuretic renal scan [2,4]. Adequate hydration must be ensured, and there must be sufficient residual renal function to enable a diuretic response in order to define the distensibility and volume of the collecting system. Urinary bladder volume and drainage can also affect the response pattern and the clinician's ability to interpret lower ureteric drainage, which explains the use of bladder catheter drainage during the study [2,4]. Nam and Lee emphasized that the factors that help to determine true obstruction, such as renogram curves, diuretic half-lives, serial renal imaging scans, and DRFs, should be taken into account when determining the optimal surgical time in children with UPJO [11].
Another problem with DRF is the so-called supranormal renal function. It remains unclear whether this supranormal function of the obstructed kidney reflects a true increase or merely a measurement error [12,13]. The relatively high incidence (9% to 21%) of this paradoxical function is clinically important because management of hydronephrosis with supranormal function has not been clearly established to date. In our study, supranormal function (55% or greater) was present in 4 patients (13.8%). Ham et al. hypothesized that supranormal DRF may occur as a result of increased renal blood flow caused by altered renal hemodynamics [14]. Consequently, there are pressing clinical needs for a more reliable test to assess the appropriateness of surgical intervention in children with UPJO.
To our knowledge, the correlation between differential parenchymal areas on imaging studies and DRF reported on renal scans has not been reported previously. Feder et al. suggested that renal parenchymal areas measured by CT strongly correlate with the results of renal scans [15]. The overall averaged difference in calculating differential function by CT versus renal scan was only 4.73% [15]. According to these results, measurement of DRF in kidneys with a significant size difference could be riddled with pitfalls. We propose a new methodology: DRF on the renal scan is proportional to the renal parenchymal area on imaging studies, and DRF per unit area is more accurate. In addition, kidney dimensions can be easily measured on imaging studies, and the treating clinician can rapidly assess the degree and site of obstruction. Modified DRF was significantly different from conventional DRF. We reviewed 29 children with UPJO who underwent pyeloplasty, and as intraoperative findings provide the most reliable diagnostic reference, we suggest that there are no methodological problems in comparing the false-negative results of conventional DRF with those of modified DRF: the false-negative rates of conventional DRF and of modified DRF were 72.4% and 27.6%, respectively. Furthermore, 86% of children with progressive dilatation on the serial ultrasound demonstrated DRF of less than 40% on modified DRF in group II. These results indicate that modified DRF may be a significant predictor of surgical intervention. Modified DRF measurement according to cross-sectional area showed higher diagnostic accuracy, and it may be considered a valuable method for deciding on pyeloplasty in equivocal circumstances.
There is still much debate over how best to manage obstructions in neonates. Early in the debate, a number of authors advocated early intervention to preserve renal function.
There is a risk of deteriorating renal function in the future despite eventual spontaneous improvement or resolution of hydronephrosis. In addition, there is a possibility of refining our diagnostic armamentarium to detect renal decompensation at a reversible stage before the kidney becomes permanently damaged [16]. However, until now, evidence that surgery will improve renal function or at least prevent further renal damage has been lacking [17,18]. Increasingly, observation has been recommended for most infants, as many appear to do well without aggressive surgical intervention, and the current trend in the treatment of patients with unilateral UPJO is nonoperative care [17,18]. Koff and Campbell initially observed and subsequently performed surgery in patients with deterioration of renal function and DRF [19]. They reported a study in which 104 neonates with unilateral UPJO were managed conservatively and followed up for over 5 years [20]. Only 7% of children required pyeloplasty due to DRF deterioration [20]. However, relief of obstruction is more suitable in the following conditions: DRF of less than 40% or functional reduction at follow-up, recurrent urinary tract infection despite prophylactic antibiotic treatment, or a strong likelihood of recurrent urinary tract infection regardless of the DRF value. Surgery may help to prevent renal parenchymal infection and irreversible renal damage [21][22][23][24]. Moreover, the procedure should not be delayed when indicated, because the surgical risks of pyeloplasty in infants are not as high as those of ureteral re-implantation. Our indications for pyeloplasty were recurrent urinary tract infection or flank pain (11 children), DRF of less than 40% on the affected side (8 children), progressive dilatation of hydronephrosis (7 children), and a greater than 10% decrease in DRF on serial renal scans (3 children). This study was limited by the fact that it was performed retrospectively, and the data were analyzed in selected children who underwent pyeloplasty due to unilateral UPJO. As a consequence, we were not able to analyze false-positive results and specificity. Furthermore, measurements of the unit area were not made with a single imaging tool, and therefore measurement error was possible. Finally, our study had a small sample size of 29 children; therefore, additional confirmatory studies are required in the near future.
CONCLUSIONS
Currently, DRF is one of the most important parameters applied to determine the optimal time for surgical intervention for UPJO in children. However, the value of this investigation in children has been questioned because of its high false-positive and false-negative rates. We suggest a modified DRF measurement that takes into account cross-sectional areas. Our modified DRF measurement exhibited a lower false-negative rate and may become a valuable method for deciding on pyeloplasty in children with UPJO in equivocal circumstances.
Conflicts of Interest
The authors have nothing to disclose.
Wage Income Distribution in Mexico: A Nonparametric Approach
This paper offers an analysis of wage income inequality for Mexico and provides some insights about welfare improvements for several categories of workers. We analyze real wage distributions at different points of time, using mainly nonparametric techniques. Kernel densities and smoothing techniques are used to analyze changes in the distribution of wages and labor supply for the first quarters of 2010 and 2020. We also use stochastic dominance analysis to observe welfare improvements for each category of workers and the Wasserstein distance to confirm changes in wage inequality. Our main results show that overall wage income inequality decreased, though the change is small, and the categories that improved are those traditionally considered informal and low human capital workers, such as young people, workers with only elementary education, and manufacturing or agricultural workers. The welfare of these groups also improved during the same period, yet welfare gains are negative for highly educated and experienced workers with a high level of human capital, including unionized and government or health sector workers. Intra-group wage distribution became more unequal for these workers. The results contradict the technological-bias change found during the initial years of free trade and market reforms in the 1980s and 1990s.
INTRODUCTION
This work offers an alternative analysis of wage inequality, using nonparametric techniques, with some insights on possible welfare changes during the ten-year period from 2010 to 2020. We compared changes in the distribution of real wages from the beginning of 2010 with 2020 and observed how real wages have changed over time in some economic sectors. We used stochastic dominance analysis to observe how real wages changed during both the end-of-year and ten-year periods, in order to detect possible welfare gains for certain categories of workers. The objective was to compare different groups of workers that may be affected by both trade liberalization and institutional changes (e.g. the end-of-year aguinaldo bonus, minimum wage increases, etc.), and then compare the distributions of log wages. A literature review on wage inequality in Mexico reveals general agreement that over the last three decades, wage inequality first increased and later decreased. Coincidentally, the period began with structural changes due to the implementation of major free-trade reforms. One accepted explanation for the initial increase in wage inequality is the technological-bias change that increased the demand for skilled workers at the expense of low-paid and low-skilled workers. Another important factor is the persistent loss in real value of wages due to post-1980s institutional arrangements. For example, a worker earning a minimum wage now can only obtain 40% of what (s)he could 30 years ago. Castro Lugo and Huesca Reynoso [12] offer a review of, and possible reasons behind, the rise in wage inequality from the 1980s to the mid 1990s. The same authors [12] mentioned three possible explanations for the increasing wage inequality during this period: (1) demand-side sources, (2) supply-side sources, and (3) institutions. The first implies a possible technological-bias change: a separate equilibrium for skilled and unskilled workers, with higher wages for the skilled and lower wages for the unskilled. The second has to do with changes in the demographics of the labor market, such as greater participation of young and female workers. Finally, there are institutional factors such as labor union bargaining power, the minimum wage structure, and public transfers, among others.
Wage inequality in Mexico can partially be explained by technological-bias change. Mexico began free-trade reforms in the mid 80s, first becoming a member of the General Agreement on Tariffs and Trade (GATT) in 1986 and culminating with the signing of the North American Free Trade Agreement (NAFTA) in 1994. A wave of privatizations was followed by an increase in foreign direct investment and new technology brought into production. This may explain the increase in income inequality during the 1980s and 1990s, as shown by Castro Lugo and Huesca Reynoso [12]. Using firm-level data from the industrial census, Hanson and Harrison [18] concluded that free trade policies affected firms hiring mainly low-skill workers. Similar conclusions can be found in Esquivel and Rodríguez-López [15], who found that recent wage inequality can be explained by the wage lag between skilled and unskilled workers caused by rapid technological changes and trade liberalization. Similarly, Airola and Juhn [3] explain this phenomenon on the side of the increasing demand for skilled labor. Acemoglu [1] provides a relevant theoretical work that explains the reasons behind the increasing wage inequality caused by technological-bias change. He builds a separate equilibrium model for skilled and unskilled workers produced by skill-biased technical change. His findings are that skilled workers will have their wages increased, while those of the unskilled will decrease, and overall unemployment will increase. Such skill-biased technical change can be explained by higher returns to education, specialization, and competition, although we may expect the skill premium to decrease over time and the wage spread to stop growing for those workers in the long run. On the side of institutional variables, Fairris [16] and Cortez [13] analyze wage inequality induced by changes in union bargaining power. The first study analyzes data from the Mexican National Household Income-Expenditure Survey (ENIGH, Spanish initials) to capture the effect of unions on wage spread. Fairris [16] concludes that unions have the effect of decreasing wage dispersion. Cortez [13] also uses ENIGH data from different years to observe the returns on both education and unionization. He concludes that changes in labor market institutions are responsible for higher wage inequality, increasing the return on unionization and minimum wages. Bell [6] found that minimum wages are not binding for most manufacturing workers due to their low level and lack of compliance in many cases. Fairris et al. [17] present evidence that changes in real and minimum wages are important for changes in overall wage inequality. Maloney and Méndez [21] and Bosch and Manacorda [7] focus on analyzing distribution shape and the effect of minimum wages on real wage determination. The former work compares densities by groups of formal and informal workers and uses kernel density estimation for some Latin American countries. They then use linear regression analysis to estimate the effect of minimum wages on the real hourly salary. The latter includes an analysis of workers earning minimum wages, using spikes. They use longitudinal microdata from the Mexican National Urban Employment Survey (ENEU, Spanish initials), which only represents urban workers.
The main objective of this study is to confirm or reject the previous trend of increasing income inequality in groups affected by technological-bias change and to debate the possible effects of institutions such as unionization, transfers, and minimum wages. We compare changes in wage inequality by worker category so as to observe welfare changes in the last decade and try to find evidence of technological-bias change in those worker categories supposedly more affected by it. We also want to observe changes in wage income for workers with different amounts of human capital (e.g. formal education) that are also affected by transfers and globalization policies. For example, Campos-Vázquez [9] found that the lower wage inequality in recent years is due to labor market effects, where the return to higher education is decreasing. Campos-Vázquez et al. [11] and Campos-Vázquez et al. [8] also support the idea that market forces are behind this lower wage inequality and that other institutional factors may not be so relevant.
The first part of the article is an introduction and brief discussion on the sources of wage inequality that may be affecting the labor market in mexico. The second part explains the data and the main techniques used to estimate wage inequality and welfare change. The third part contains the main results and economic analysis, and we end with a short conclusion and final comments.
DATA AND METHODOLOGY
The Mexican National Occupation and Employment Survey (ENOE, Spanish initials) is an improved labor survey that began collecting longitudinal data in 2005. The survey is quarterly, and respondents stay in the sample for five continuous quarters, with a quarterly attrition loss of about 1/5. This survey is representative of the whole Mexican population and contains detailed information on job conditions, including wages, salaries, and other labor income, as well as hours of work and individual and household characteristics. We were able to construct a corrected sample of 92,000 salaried workers, and we use monthly labor income, which includes wages, salaries, and fringe benefits from employment, from the last quarters of 2009 and 2019 and the first quarters of 2010 and 2020. We converted to real wages using the price index estimated by the Bank of Mexico, with 2018 as the base year. We used some relevant individual characteristics and labor market variables for all wage earners. Neither business and self-employment income nor income from capital is included in the sample. Before proceeding to our analysis, we decided to use a traditional parametric approach on wages due to the missing data in the wage variable. In order to obtain a corrected sample and to overcome the problem of selection in this type of data, a two-step estimation was carried out. First, we estimated the probability of labor force participation using a Tobit regression and then performed a Heckman correction to obtain estimates for the wage regression. The Tobit regression on labor participation included total family income, number of children, education level, and experience for each individual, as well as other explanatory variables. The Heckman regression was performed on a traditional wage equation, which includes education, experience, and other labor market characteristics. After estimation, imputation was performed to produce a new and corrected sample of wages.
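As an illustration of the two-step correction, the sketch below implements a textbook Heckman procedure in Python with a probit participation equation (the study itself reports a Tobit first stage estimated in a statistical package). All column names are hypothetical placeholders, not actual ENOE variable names.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def heckman_two_step(df, wage_col, outcome_x, selection_x):
    """Two-step selection correction for a log-wage equation."""
    work = df[wage_col].notna().astype(int)      # 1 if a wage is observed

    # Step 1: participation equation estimated on the full sample.
    Z = sm.add_constant(df[selection_x])
    probit = sm.Probit(work, Z).fit(disp=0)
    index = np.asarray(probit.fittedvalues)      # linear index Z'gamma
    mills = norm.pdf(index) / norm.cdf(index)    # inverse Mills ratio

    # Step 2: wage equation on workers only, augmented with the Mills ratio.
    sel = work.values == 1
    X = sm.add_constant(df.loc[sel, outcome_x])
    X["mills"] = mills[sel]
    ols = sm.OLS(np.log(df.loc[sel, wage_col]), X).fit()
    return probit, ols  # ols predictions can then impute the missing wages
```

Predicted wages from the second-stage regression can be used to impute the missing observations, which is the role the imputation step plays in the text above.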
KERNEL DENSITY ESTIMATION
Kernel density estimation is a nonparametric technique that estimates the real distribution of a data set. The meaning of real is in the context of a model-free distribution, as opposed to the parametric family of distributions. The idea is to find a distribution that follows the observed data rather than assuming a specific parametric model that may fit the data properly. Using kernel densities allows us to observe some interesting behavior in the sample, such as clusters or groups around a mode. Assumptions on the data are minimal and less rigid than with parametric methods. A density estimation problem is about reconstructing a probability density function p(x) from a given set of data points X_1, X_2, ..., X_n. Instead of assuming a model from any traditional parametric family of density functions, we want to find a smooth function that fits the data better: the real distribution. With this in mind, the best approximation to the real distribution is

$$\hat{p}(x) = \frac{1}{nh} \sum_{i=1}^{n} K\!\left(\frac{X_i - x}{h}\right),$$

where $\hat{p}(x)$ is a better fit of the real distribution that depends on the smooth kernel function K. Here (X_i − x) is the distance of every point from a designated test point x, divided by a smoothing parameter h. The smoothing parameter is the key to the best fit of the distribution around the points (X_i − x), and it also interacts with the sample size. A simple way to set the bandwidth h for a Gaussian kernel density estimator is the rule commonly known as Silverman's rule of thumb:

$$h = 0.9 \min\!\left(\sigma, \frac{\mathrm{IQR}}{1.34}\right) n^{-1/5},$$

where IQR stands for the interquartile range and σ is the standard deviation of the chosen points. Using kernel density estimations, we are able to get a glimpse of the real data distribution, finding the modes, spread, and localization of the distributions that may have economic significance.
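The estimator and bandwidth rule above can be sketched in a few lines of Python; the simulated log-wage sample below is a hypothetical stand-in for the survey data, not the ENOE microdata.

```python
import numpy as np

def silverman_bandwidth(x):
    """Rule-of-thumb bandwidth h = 0.9 * min(sigma, IQR/1.34) * n^(-1/5)."""
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    return 0.9 * min(np.std(x, ddof=1), iqr / 1.34) * len(x) ** (-0.2)

def gaussian_kde(grid, data, h):
    """Average of Gaussian kernels centred at each observation."""
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
log_wages = rng.normal(loc=8.5, scale=0.6, size=5000)  # placeholder sample
grid = np.linspace(log_wages.min(), log_wages.max(), 200)
density = gaussian_kde(grid, log_wages, silverman_bandwidth(log_wages))
```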
GINI INDEX AND WASSERSTEIN DISTANCE
A traditional approach for measuring income distribution is the Gini index, defined as the area between the Lorenz curve and the equality diagonal line. A general formula can be constructed defining the Lorenz curve as y = L(x):

$$G = 1 - 2\int_{0}^{1} L(x)\, dx.$$

Although the Gini index is a very well known measure, it does not work well when comparing subgroups, as the Lorenz curves may cross. In order to complement the wage distribution analysis, we make use of the Wasserstein distance to find out how different two distributions are at two points in time. The Wasserstein distance compares two measures and is used to solve the transport problem. It is defined as the p-th root of the total cost of transporting a mass from one place to another, where the cost is defined as the Euclidean distance to move every element (point) of that mass. Let X and Y be two random variables with marginal distributions u and v, respectively, X ~ u and Y ~ v. We want to move every point x to each y using minimum effort (distance) until all the mass u is moved to the new v, assuming we are in a normed vector space χ where x, y ∈ χ. The Wasserstein distance of order p is defined as

$$W_p(u, v) = \left( \inf_{\delta \in \Delta(u,v)} \int \lVert x - y \rVert^p \, d\delta(x, y) \right)^{1/p},$$

where Δ(u, v) is the set of probability measures δ that intuitively constitute a transport plan. Each δ(x, y) informs us of the proportion of mass at point x that must be transported to point y in order to move the total mass u to the new mass v. In our context, we want to transport the real wage income distribution from one year to another and estimate the Wasserstein distance, which is the minimum (infimum) cost to move the whole distribution to another one. Using this measure, we validate the changes in the distribution already described by the Gini index.
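Both measures are straightforward to compute on sample data. The sketch below uses SciPy's order-1 Wasserstein distance and a discrete Lorenz-curve Gini; the simulated wage arrays are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def gini(w):
    """Gini index from the discrete Lorenz curve: G = 1 - 2*sum(L_k)/n + 1/n."""
    w = np.sort(np.asarray(w, dtype=float))
    n = len(w)
    lorenz = np.cumsum(w) / w.sum()   # L_k, cumulative income shares
    return 1.0 - 2.0 * lorenz.sum() / n + 1.0 / n

rng = np.random.default_rng(1)
wages_2010 = rng.lognormal(mean=8.4, sigma=0.70, size=10_000)  # placeholders
wages_2020 = rng.lognormal(mean=8.5, sigma=0.65, size=10_000)
print(gini(wages_2010), gini(wages_2020))            # inequality in each year
print(wasserstein_distance(np.log(wages_2010), np.log(wages_2020)))
```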
STOCHASTIC DOMINANCE
We use stochastic dominance to observe whether any income distribution is superior to another. We want to compare real wage distributions during a period with low inflation, where changes may be difficult to observe. Using stochastic dominance analysis, we may be able to observe whether the most recent real wage distribution dominates the older one, in order to validate possible welfare gains. Stochastic dominance can be explained using a random variable X_1, which dominates another X_2 if its cumulative distribution function F_1(X) lies below the other, F_2(X). Strictly speaking, F_1(X) ≤ F_2(X) for any outcome X on the support [a, b]. If we use the definition of an increasing utility function U(X), the expected utility may be defined as

$$E[U(X)] = \int_{a}^{b} U(X)\, f(X)\, dX,$$

where F(X) and f(X) are the cumulative distribution function and density function, respectively. We can compare two expected utilities given two different income distributions X_1 and X_2 in the form

$$U_1(X) - U_2(X) = \int_{a}^{b} U'(X)\,\big(F_2(X) - F_1(X)\big)\, dX,$$

so if U_1(X) > U_2(X), then the term (F_2(X) > F_1(X)) on the right will be positive for any point X. This is the definition of first-degree stochastic dominance we intend to apply in our comparative analysis. For a better understanding of the direction and magnitude of this dominance, we constructed a piece-wise function of the form

$$SDI = \frac{1}{m}\sum_{j=1}^{m} s_j, \qquad s_j = \begin{cases} +1 & \text{if } F_2(X_j) > F_1(X_j) \\ 0 & \text{if } F_2(X_j) = F_1(X_j) \\ -1 & \text{if } F_2(X_j) < F_1(X_j) \end{cases}$$

over a grid of m evaluation points X_j. This index ranges −1 < SDI < 1 and counts how many more positive values there are than negative ones. A positive sign means that U_1(X) > U_2(X), and a negative sign shows the opposite. The closer the index is to the absolute value |1|, the stronger the stochastic dominance between the two distributions. A value close to zero means that there is no way to know whether one distribution dominates the other.
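A minimal empirical version of this index compares the two sample CDFs on a common grid. The code below is our sketch of that computation, and the simulated samples are hypothetical.

```python
import numpy as np

def ecdf(sample, grid):
    """Empirical CDF of `sample` evaluated at each grid point."""
    return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

def sdi(x1, x2, n_grid=500):
    """First-degree stochastic dominance index; positive => x1 dominates x2."""
    grid = np.linspace(min(x1.min(), x2.min()), max(x1.max(), x2.max()), n_grid)
    diff = ecdf(x2, grid) - ecdf(x1, grid)   # F2 - F1 at each grid point
    return np.sign(diff).mean()              # in (-1, 1); near 0 inconclusive

rng = np.random.default_rng(2)
high = rng.lognormal(8.6, 0.7, 5000)         # placeholder wage samples
low = rng.lognormal(8.4, 0.7, 5000)
print(sdi(high, low))                        # close to +1: `high` dominates
```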
LOWESS SMOOTHING
We also observe changes in labor supply using per-hour wages and compare the supply curves over time. Using this information, we estimated a pseudo-labor supply using nonparametric techniques. We used locally weighted scatter plot smoothing (lowess) to estimate and approach an empirical labor supply curve. Lowess smoothing uses traditional linear and nonlinear regression on localized data samples. These localized subsets of data are constructed using the nearest neighbor algorithm, and a weight function is used to give more weight to the closest points, usually a tri-cubic weight function of the form w(d) = (1 − |d|³)³, where d is the Euclidean distance. Linear and nonlinear regressions are used on these localized samples to find a fit that is smoothed across the entire data set. The advantage of this method is that it does not demand strict underlying conditions and allows the data to speak for themselves, but it requires a data set that is large enough to be effective. In our analysis, lowess smoothing is implemented by plotting the working hours supplied against the log of individual real wages. Smoothing is performed by averaging the nearest observations in the distribution and then performing regression analysis on the reduced subsamples. The result is a pseudo-labor supply curve, which is defined by the data (as shown in the appendix). For example, Figure 9 in the appendix shows the pseudo-labor supply for manufacturing workers in 2020 (blue line) plotted along with the 2010 supply curve (red line). For both years, the supply was elastic and then became inelastic at high wages, even bending backwards for very high wages. This is a common result in economics, predicted by theory. We also confirm that the lowess curve for manufacturing workers in the year 2010 dominates that of 2020. The interpretation is that an improvement in working conditions shows up as the lowess curve for 2010 dominating that of 2020, which implies that fewer hours of work are needed to get the same real wage. Stochastic dominance can then be applied to the lowess supply curves to observe possible improvements in wage distribution.
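The pseudo-labor supply curve can be reproduced with the lowess smoother in statsmodels, which uses the tri-cubic weighting described above. The hours-wage data below are simulated placeholders rather than ENOE observations.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(3)
log_wage = rng.normal(3.0, 0.8, 4000)                  # log real hourly wage
# Placeholder supply: flat, then mildly downward-bending at high wages.
hours = 45 - 4 * np.maximum(log_wage - 3.5, 0) + rng.normal(0, 5, 4000)

# `frac` sets the share of the data in each local neighbourhood.
curve = lowess(endog=hours, exog=log_wage, frac=0.3, return_sorted=True)
wage_grid, smoothed_hours = curve[:, 0], curve[:, 1]   # pseudo supply curve
```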
ANALYSIS
Kernel density estimations were performed for some worker categories in order to observe the spread and shape of log wages. We are interested in those groups of workers that might be more affected by free-trade reforms and those prone to changes in institutional conditions. One example may be workers in the manufacturing sector, who may be more affected by the inflow of foreign direct investment and changes in labor conditions from international trade. On the other hand, unionized workers are more affected by changes in public policy and legal reforms. Furthermore, labor market composition has also changed substantially in the last 30 years. The inclusion of younger and female workers with higher formal education may also have an impact on wage dispersion. We compare the kernel density estimations over time for several categories of workers according to their labor market and individual characteristics.
The kernel distributions were constructed using information on the logarithm of the monthly real wage income reported by each worker in the first quarters of 2010 and 2020. The red line shows the kernel estimation for 2010 and the blue line that for 2020. Three dotted lines show minimum wages in 2010, and the two separate dotted lines to the right show the 2020 minimum wages. The minimum wage lines for 2020 (the general minimum wage and, on the far right, the border zone minimum wage) are closer to the mean and median wage and binding (the minimum wage falls on a mode) for all groups, as the most recent increases are relatively large (4% in 2010 compared with 15% in 2020). Figures 1 to 8 in the appendix show the kernel densities for different categories of workers. In each graph we include kernel estimations for two different points in time (2010 and 2020) and vertical dashed lines to show the real minimum wage in those years. We observe that the unionized distribution is positively skewed while it is negative for non-unionized workers. Furthermore, there are fewer modes for unionized than for non-unionized workers, meaning that there are more clusters or subgroups among workers who do not belong to a union. We also observe that unionized workers are further from the minimum wage lines and the left part of the kernel has no modes, which means that the minimum wage is not binding for these kinds of workers.
In terms of stochastic dominance, Table 1 shows that in the short term (final quarters of 2009 and 2019), there is a welfare gain for unionized and non-unionized workers, but in the long term, the wage distribution of the first quarter of 2010 dominates that of the fourth quarter of 2019, which means that there is no long-term welfare gain. The minimum wage is binding for some non-unionized workers, as the vertical lines cut the kernel distributions at a mode. In terms of income distribution, inequality is larger for the non-unionized category, but intra-group inequality also increased over the ten-year period for unionized workers (see Table 2). Per-hour wages increased for non-unionized workers, while unionized workers saw their hourly wage decrease over the ten-year period, though unionized workers enjoy fairly higher wages (see Table 3). One possible reason is the reduction in wages and fringe benefits for unionized public workers, which has been a policy under the current federal administration, though a more detailed analysis is needed to support this hypothesis.
Young workers (29 years old and younger) and experienced workers (30 years old and older) also have multi-mode distributions, and the 2020 minimum wage seems to be binding for some subgroups. Young and non-unionized workers have seen their real mean wage increase, while experienced and unionized workers have seen their mean wages decrease. Also, young workers have a real welfare gain, as their 2019Q4 distribution dominates that of 2010Q1, but for older workers there is no clear gain at all. In terms of income inequality, both the young and old categories saw their intra-group inequality decrease by a little. Young workers had their hourly wages increase (less labor supply per wage unit) over the ten-year period, while older workers saw the opposite trend. This result contradicts the technological-bias change hypothesis. Perhaps institutional change is the source of these distribution changes (e.g. recent federal government-sponsored programs for unemployed young people).
We also observe that workers with elementary education have a negatively skewed distribution with many modes in the left part, while those with tertiary education have a positively skewed distribution in the year 2020 with many clusters (modes) in the right part of their distribution. The new minimum salary seems to be binding for workers with elementary education, but not for workers with higher education. Looking at the stochastic dominance index, workers with tertiary education have a larger real wage than those with elementary education, but their long-term gain seems to be negative, while those with just elementary education made real improvements in welfare in the last decade. Intra-group income inequality has decreased for less educated workers, while it increased for highly educated workers. In terms of labor supply, Table 3 shows that younger workers with only elementary education provide less work for the same wage, while the opposite is true for highly educated people. These findings support the idea of lower returns to higher education found by Campos-Vázquez [9]. Observing kernel estimations by economic sector, the distributions for agriculture and manufacturing workers are located to the left of those of workers in government and health services in 2010 (lower mean wages). But in the year 2020, all four distributions are closer to each other. Through a careful examination of the stochastic dominance index in Table 1, we observe that from the first quarter of 2010 to the last quarter of 2019, both agriculture and manufacturing made important welfare improvements (2019-Q4 dominated the wage income distribution of 2010-Q1). The opposite results were found for those in the government sector and health services, who experienced a welfare loss in terms of wage income, closing their wage gap with agriculture and manufacturing workers. Intra-group wage inequality has increased for health and government workers and decreased for workers in agriculture and manufacturing. In terms of labor supply, the hourly wage decreased for health and government workers (the same wage for more work) over the ten-year period, while workers in manufacturing and agriculture experienced the opposite effect (the same wage for less work), as shown in Table 3. Table 1 reveals a positive value for the stochastic dominance index, which shows that the latter quarter dominates the previous one. A positive stochastic dominance index close to one in the middle column shows that the kernel estimation of the last quarter of 2019 dominated the distribution of the first quarter of 2010. This long-term improvement in welfare was only possible for workers with supposedly low productivity: those in agriculture and manufacturing, and mainly young workers and those with little formal education. The Gini index and Wasserstein distance in Table 2 show that overall income inequality decreased from 2010 to 2020. But the groups that contributed to this decrease are those traditionally associated with low productivity, such as the young and those with only elementary education, as well as workers in the agriculture and manufacturing sectors. Workers in sectors that require higher specialization, such as the health and government sectors, unionized workers, and those with tertiary education, have seen their wage distribution become more unequal.
Overall wage income per hour of work barely increased from 2010 to 2020, though the groups that improved their position (fewer hours of work for the same wage) are workers in agriculture and manufacturing, non-unionized workers, young workers, and those with only elementary education. Unionized workers, workers in the health and government sectors, and workers with tertiary education saw the same toil for less wage income in this ten-year period.
Stochastic dominance analysis on the lowess supply curve shows a negative index for workers whose 2020 labor curve dominated their 2010 curve, which implies that they are supplying more labor for the same real wage. Workers traditionally associated with low productivity are supplying less labor for the same real wage, such as those in the manufacturing and agricultural sectors, as well as those with only elementary education, young and non-unionized workers.
CONCLUSION AND FINAL COMMENTS
The objective of the present analysis is to open the debate on the possible sources of wage inequality in Mexico in recent years. We opted for nonparametric techniques to analyze short- and long-term changes in real wages for several categories of workers and also to observe important trends. One of our major research results shows that workers in groups with traditionally high levels of human capital are not experiencing improvements in their welfare in the long term, and their intra-group wage inequality is increasing. The stochastic dominance analysis also shows that short-term improvements are becoming difficult to attain. These workers are receiving much lower wages for the same hour of work. On the other hand, workers considered to have low human capital, such as young workers with only elementary education, as well as those in agriculture and manufacturing, are improving in intra-group income inequality as well as welfare over the ten-year period of analysis. Using stochastic dominance, we analyzed possible short-term changes in welfare during the end-of-year changes (bonuses and minimum wage increases) in 2009 and 2019, as well as over the ten-year gap from the first quarter of 2010 to the fourth quarter of 2019, using real wage income. We observed that workers in traditionally low-specialization sectors, such as young workers, workers in the agricultural and manufacturing sectors, and those with only elementary education, are not getting short-term welfare improvements due to changes in their real wages at the end of the year. The end-of-year changes might be due to yearly bonuses (aguinaldo) and institutional changes such as the minimum wage. However, these categories are improving their welfare over the ten-year period from 2010Q1 to 2019Q4.
Workers traditionally associated with low specialization and low human capital improved in their labor supply, receiving relatively higher wages for the same labor, while the opposite was true for highly specialized workers and those with high human capital. Non-unionized workers, agricultural workers, and workers in manufacturing, as well as those with only elementary education, increased their product per hour worked. The stochastic dominance and Wasserstein distances of the lowess labor supply show possible improvements in productivity for these categories of low specialization and low human capital.
The above trends are difficult to explain using the framework of technological-bias change and a separating equilibrium for low-skilled and high-skilled workers, as observed in the 1980s and 1990s. As explained by Castro Lugo and Huesca Reynoso [12], technical-bias change was a possible reason for the increasing wage inequality during that period. But the current trend seems to be reversed, as many workers with high productivity and higher education have experienced increased intra-group inequality and long-term welfare loss.
Serine Protease PRSS23 Is Upregulated by Estrogen Receptor α and Associated with Proliferation of Breast Cancer Cells
Serine protease PRSS23 is a newly discovered protein that has been associated with tumor progression in various types of cancers. Interestingly, PRSS23 is coexpressed with estrogen receptor α (ERα), which is a prominent biomarker and therapeutic target for human breast cancer. Estrogen signaling through ERα is also known to affect cell proliferation, apoptosis, and survival, which promotes tumorigenesis by regulating the production of numerous downstream effector proteins. In the present study, we aimed to clarify the correlation between and functional implication of ERα and PRSS23 in breast cancer. Analysis of published breast cancer microarray datasets revealed that the gene expression correlation between ERα and PRSS23 is highly significant among all ERα-associated proteases in breast cancer. We then assessed PRSS23 expression in 56 primary breast cancer biopsies and 8 cancer cell lines. The results further confirmed the coexpression of PRSS23 and ERα and provided clinicopathological significance. In vitro assays in MCF-7 breast cancer cells demonstrated that PRSS23 expression is induced by 17β-estradiol-activated ERα through an interaction with an upstream promoter region of PRSS23 gene. In addition, PRSS23 knockdown may suppress estrogen-driven cell proliferation of MCF-7 cells. Our findings imply that PRSS23 might be a critical component of estrogen-mediated cell proliferation of ERα-positive breast cancer cells. In conclusion, the present study highlights the potential for PRSS23 to be a novel therapeutic target in breast cancer research.
Introduction
Bioinformatics approaches have shown that the serine protease 23 gene (PRSS23) is highly conserved in vertebrates and is predicted to encode a novel protease on chromosome 11q14.1 in humans [1,2,3]. Previous expression-profiling studies have suggested that enhanced PRSS23 expression is observed in various types of cancers, including breast [4,5,6], prostate [7], papillary thyroid [8], and pancreatic cancers [9], and PRSS23 expression has been linked with tumor progression in humans [1]. In addition, studies in MCF-7/BUS cells revealed that the mRNA level of PRSS23 may be stimulated by estrogen and reduced by tamoxifen treatment [5,10].
Estrogens, which are well conserved in vertebrates, represent a group of sex steroid hormones that includes estradiol, estrone, and estriol [11]. Although estrogen is the predominant sex hormone in females, its levels are relatively low in males. Along with its role in reproduction, estrogen also affects many cellular functions during development and in adulthood. Ample evidence has shown that estrogen and anti-estrogen agents, such as tamoxifen and fulvestrant, can specifically bind to the ligand-binding domain of estrogen receptor α (ERα) to modulate differential expression of downstream transcriptional targets of ERα in breast cancer cells. These findings suggest that ERα could be a vital prognostic biomarker in breast cancer [12,13,14,15,16,17].
Collective evidence suggests that estrogen signaling regulates a variety of biological processes [18]. For instance, estrogen signaling plays a pivotal role in the growth and development of mammary glands, which is consistent with its role in normal sexual and reproductive functions. Indeed, canonical estrogen signaling affects the expression of specific downstream effector genes that enhance cell survival via anti-apoptotic pathways. In addition, estrogen signaling increases the proliferation of breast cancer cells by upregulating the expression of cell cycle enhancers (e.g., cyclin D1) and transcription factors (e.g., c-myc and E2F) in breast cancer [19,20]. Although the importance of novel ERα-related proteases to breast cancer progression is unclear, we hypothesized that estrogen could also enhance breast cancer cell progression through intracellular proteases.
In the present study, we investigated the gene expression of ERα-related proteases in breast cancers. Our results indicate that there was a high level of PRSS23 expression in ERα-positive breast cancer cells. In addition, in vitro assays revealed that PRSS23 expression was upregulated at the transcriptional level by ERα and was associated with breast cancer cell proliferation. Thus, PRSS23 might be a novel target for adjuvant therapy for breast cancer progression.
We also compared the expression intensities of PRSS23, CTSC, CTSF, and MMP-24 in 52 ERα-positive breast cancer specimens within the van't Veer dataset. The average expression levels (log10 intensity) of PRSS23, CTSF, CTSC, and MMP-24 were 0.779, 0.075, −1.101, and −1.434, respectively (Fig. 1B). In addition to being significantly coregulated with ESR1 expression, PRSS23 showed a greater mRNA expression level in breast cancer specimens than other well-known cancer-related proteases. Because the expression of PRSS23 in breast cancer has not been clearly characterized, we targeted PRSS23 for further analysis in the present study.
High PRSS23 expression was observed in ERα-positive breast cancer cells from breast cancer patients
To enable the detection of the PRSS23 protein, we raised an antibody against PRSS23 by injecting recombinant GST-PRSS23 protein into a rabbit. After standard purification (the detailed procedure is described in Materials and Methods S1), we validated the efficacy and specificity of this custom anti-PRSS23 antibody by immunoblot of protein from MCF-7 cells with or without ectopic PRSS23 overexpression. Both endogenous and overexpressed PRSS23 could be detected as a double-band pattern around 47 kDa (Fig. S1), which is close to PRSS23's hypothetical molecular weight (43 kDa).
We used the custom anti-PRSS23 antibody to perform immunohistochemical assays on cancer specimens from 56 primary breast tumors collected in Taiwan. Interestingly, PRSS23 expression was detected in the nuclei of malignant breast tumor tissues. To validate the relationship between PRSS23 and ERα expression, we selected 6 representative sets of tumor samples from breast cancer patients that were either ERα-positive (Fig. 2A, B, C) or ERα-negative (Fig. 2D, E, F). Upon close examination, PRSS23 expression was found to be much higher in the nucleoplasm of ERα-positive breast cancer specimens (Fig. 2G, H, I) than in the nucleoplasm of ERα-negative breast cancer specimens (Fig. 2J, K, L).
For systematic comparison, the staining intensity of anti-PRSS23 in the 56 Taiwanese breast cancer samples was classified as strong (Fig. S2A), moderate (Fig. S2B), or weak (Fig. S2C). This was performed by comparing the staining intensity in the cancer specimens to the intensity in normal cells in the vicinity of tumor tissues. Specifically, we characterized PRSS23 staining by comparing PRSS23 expression intensities in the nucleoplasm of cancer cells to the expression intensities in normal stromal cells and endothelial cells using the Allred immunohistochemistry scoring system [22]. Based on the assigned total Allred scores, we grouped the 56 breast cancer specimens into two cohorts: high PRSS23 expression (total Allred score >3) and low PRSS23 expression (total Allred score 0-3) (Table 1). Strikingly, we found that nearly 75% of the ERα-positive breast cancer samples from Taiwanese patients belonged to the group with high PRSS23 expression (Allred score >3). Conversely, over 80% of the ERα-negative breast cancer samples belonged to the low PRSS23 expression group (Allred score ≤3). Statistical analyses also indicated that increased PRSS23 expression was significantly correlated with the ERα status of the cells (n = 56, p = 0.005).
Taken together, the results derived from the clinicopathological and immunohistochemical analyses imply that PRSS23 expression is closely related to ERα expression (Table 1). Interestingly, we did not find any statistically significant association between PRSS23 expression and tumor invasion (p = 0.56) or between PRSS23 expression and HER-2 overexpression (p = 0.79), which suggests that HER-2 amplification may not affect PRSS23 expression.
PRSS23 is highly expressed in ERα-positive breast cancer cell lines

Based on immunoblotting with anti-PRSS23, expression of endogenous PRSS23 was examined in all cell lines described above, with endogenous GAPDH staining serving as the loading control. The results showed that PRSS23 protein expression was detected in ERα-positive MCF-7 cells, BT-474 cells, and T-47D cells (Fig. 3C). Quantification using densitometry analysis revealed the expression level of PRSS23 to be 1 in MCF-7 cells, 0.18 in BT-474 cells, and 0.11 in T-47D cells (expression was normalized to GAPDH expression in the respective cell line). These results indicated that the expression level of PRSS23 was higher in cell lines with ERα expression than in those without ERα expression. These data from the cell line survey also implied that ERα might upregulate expression of PRSS23, in agreement with the microarray and immunohistochemical studies.
E2 upregulates PRSS23 expression in ERα-positive MCF-7 breast cancer cells
After learning that PRSS23 expression was correlated with ERα in breast cancers, we investigated the dynamics of PRSS23 expression induced by estrogen stimulation. We treated MCF-7 cells with E2 and tamoxifen (Tam) to test whether PRSS23 expression could be enhanced by activated ERα. We found that PRSS23 mRNA expression increased significantly in MCF-7 cells at 6, 12, and 24 h after E2 treatment (Fig. 4A). After 24 h of treatment with 1 nM E2, PRSS23 mRNA expression was about 10-fold greater than in the vehicle control (0.1% DMSO and 25 ppm ethanol). By comparison, PRSS23 mRNA expression was significantly reduced by 5 mM Tam treatment to a level similar to the vehicle controls. In addition, Tam alone did not upregulate PRSS23 mRNA levels in MCF-7 cells compared with the vehicle control.
To confirm whether estrogen is indeed unable to upregulate PRSS23 expression in ERα-negative cancer cells, we treated MDA-MB-231 (ERα-negative) cells with 1 nM E2 and measured the mRNA levels of PRSS23 and pS2, with the latter serving as a positive control for estrogen responsiveness [23]. At 0, 6, 12 and 24 h after treatment, no significant change in the gene expression level of PRSS23 was observed in MDA-MB-231 cells treated with 1 nM E2 compared with the vehicle-treated control (Fig. 4B, upper panel), in contrast to pS2 (Fig. 4B, lower panel). Although the PRSS23 gene expression level in E2-treated cells was 3-fold higher than that of the vehicle-treated control at 12 h, we hypothesize that PRSS23 expression might be regulated by an alternative signaling pathway in ERα-negative MDA-MB-231 cells. Taken together, these data suggest that PRSS23 expression is primarily regulated by estrogen signaling in ERα-positive breast cancer cells.
Overexpression of ERα enhances PRSS23 expression in MCF-7 cells
Based on the results described above, we hypothesized that the ERα protein level is relevant to the expression of PRSS23. Previous studies have shown that ERα upregulates gene expression of pS2 and CTSD by recruiting estrogen, and that E2-bound ERα is prone to immediate ubiquitin-dependent degradation by the 20S proteasomes after stabilizing transcription initiation [24,25,26,27]. To assay whether a similar ERα stability issue could affect PRSS23 mRNA expression, we used MG-132 to perturb intracellular proteasome activity in MCF-7 cells. When proteasome activity was not disrupted by MG-132, the ERα level appeared to be reduced in E2-treated MCF-7 cells due to ubiquitin-dependent degradation (Fig. 5A). Treatment with the proteasome inhibitor MG-132, however, blocked the E2-induced decrease in ERα protein levels. Furthermore, Tam could not induce ERα degradation in MCF-7 cells, which was consistent with findings from a previous study [24]. Our results (Fig. 5A) indicated that cotreatment with MG-132 and E2 for 12 h significantly increased the PRSS23 protein level in MCF-7 cells (nearly 1.5-fold) compared with E2 treatment alone. Moreover, we also found that the PRSS23 protein level significantly decreased to approximately 0.5- to 0.6-fold in Tam-treated MCF-7 cells, whether or not MG-132 was present in the medium. We also found that cotreatment with MG-132 and E2 for 12 h could increase the PRSS23 mRNA level (3-fold; Fig. 5B, upper panel) and the pS2 level (1.3-fold; Fig. 5B, lower panel) in MCF-7 cells compared with treatment with E2 alone. Although MG-132 enhanced the PRSS23 mRNA level by 2.5-fold, cotreatment with MG-132 and Tam reduced PRSS23 mRNA to a level similar to untreated MCF-7 cells. These results suggest that the stability of E2-activated ERα upregulates PRSS23 mRNA expression, whereas Tam-inactivated ERα does not stimulate PRSS23 expression.
To clarify whether accumulation of ERα contributes exclusively to the upregulation of PRSS23 expression, we ectopically expressed ERα in MCF-7 cells. Fig. 5C shows that the PRSS23 protein level was increased ~1.5-fold in MCF-7 cells when ectopic ERα was overexpressed. As expected, this enhancement was not observed in the vector-only controls. Thus, these data suggest that the activity and stability of ERα are important for the regulation of PRSS23 expression in MCF-7 cells.
E2 activates ERα to upregulate PRSS23 expression through an upstream promoter region
Previous studies have suggested that ERα enhances downstream gene expression through both genomic and non-genomic pathways [19,28]. In addition, Moggs et al. postulated that a consensus estrogen responsive element is located in the upstream promoter region −2840 to −2828 bp from the translational start site of the PRSS23 gene [23]. To identify the critical estrogen response region in the promoter region upstream of PRSS23, we used the genomic sequence from the NCBI Entrez Gene Database to design a set of PCR primers, which were used to subclone various promoter regions along with the upstream regulatory region. Fig. 6A shows the luciferase reporter constructs that we generated, which contained various regions across the PRSS23 promoter, including −2914 to 97 bp, −2029 to 97 bp, −1261 to 97 bp, and −391 to 97 bp. We transfected MCF-7 cells with individual reporter constructs containing these variable promoter sequences to screen for the most critical estrogen responsive region. Interestingly, the normalized luciferase activities of the −2914 to 97 bp, −2029 to 97 bp, and −1261 to 97 bp constructs increased by 35%, 40%, and 20%, respectively, in E2-treated MCF-7 cells compared with vehicle-treated cells (p < 0.01, Fig. 6A). By comparison, the normalized luciferase activity of the construct containing the −342 to 97 bp promoter region did not show significant enhancement in E2-treated cells. Interestingly, the difference in the luciferase activities between the −2914 to 97 bp and −2029 to 97 bp constructs was not significant in the presence of E2 (p > 0.05); however, the luciferase activity of the −1261 to 97 bp construct was 11% lower than the activity of the −2914 to 97 bp construct (p < 0.05). A more profound difference was observed between the −1261 to 97 bp construct and the −2029 to 97 bp construct (p < 0.05), in which the activity of the −2029 to 97 bp construct was increased by 15% compared to that of the −1261 to 97 bp construct in the presence of E2. Taken together, these results suggest that ERα upregulates PRSS23 promoter activity through different elements in the region within −2029 to −342 bp instead of through the hypothetical ERE (−2840 to −2828 bp).
Based on the findings with the promoter region constructs, we used ChIP assays to examine whether ERα directly binds to the promoter region upstream of the PRSS23 gene. The pS2 gene served as a positive control. Fig. 6B shows that binding of ERα to the upstream promoter region was enhanced in 10 nM E2-stimulated MCF-7 cells after 60 min of treatment. Compared with vehicle-treated controls, the interaction of ERα with the upstream promoter region of the pS2 gene was 3-fold stronger, and that with the PRSS23 gene after 60 min of treatment was 1.5-fold stronger, which indicates that ERα upregulates PRSS23 expression through direct interaction with its upstream promoter region.
PRSS23 expression is associated with estrogen-induced proliferation in MCF-7 cells
Our earlier immunohistochemical data revealed that PRSS23 was located in the cell nucleus of breast cancer cells. Thus, we used an RNAi knockdown approach to examine whether PRSS23 affects breast cancer cell proliferation. The efficacy of RNAi-mediated PRSS23 knockdown was initially determined by immunoblot analysis (Fig. 7A). We found that PRSS23 protein levels could be reduced by ~77% in cells treated with RNAi directed against PRSS23 compared with cells treated with the non-silencing control (NSC). After confirming the PRSS23 knockdown, we used the PRSS23 knockdown MCF-7 cells in colony formation assays. The cells were cultured in 0.4% soft agar with 10% fetal bovine serum (FBS) without hormone deprivation for 6 days (Fig. 7B, upper panel), and the size of each tumor particle was evaluated by its diameter. When sufficient E2 was present, the average diameter of tumors formed by PRSS23 knockdown cells was 30% less than the average diameter of tumors formed by NSC cells (p < 0.01; Fig. 7B, bar graph).
We also performed flow cytometry analysis to map the DNA distribution profile of MCF-7 cells for cell cycle analysis. We initially examined NSC control cells after 24 h of stimulation with 20% FBS, either in the absence or presence of E2. Compared with the ethanol vehicle-control cells, treatment with 1 nM E2 decreased the cell count in the G0/G1 phase from 35.91% to 32.20%, which represents a 10% reduction (Fig. 7C). In addition, the S and G2/M phases showed a 16.5% (15.83% → 18.45%) and a 9.7% (26.77% → 29.41%) increase, respectively, in the E2-treated cells compared with the control cells.
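For clarity, the percentage changes quoted above are relative changes of the phase fractions. A minimal sketch of this arithmetic, using the fractions reported in Fig. 7C, is shown below.

```python
# Relative (percentage) change of cell cycle phase fractions between the
# vehicle control and E2-treated MCF-7 cells, using the fractions cited above.
def relative_change(control_pct, treated_pct):
    return 100.0 * (treated_pct - control_pct) / control_pct

print(f"G0/G1: {relative_change(35.91, 32.20):+.1f}%")  # about a 10% reduction
print(f"S    : {relative_change(15.83, 18.45):+.1f}%")  # about a 16.5% increase
print(f"G2/M : {relative_change(26.77, 29.41):+.1f}%")  # close to the reported 9.7% increase
```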
Discussion
The present study investigated which proteases were associated with ERα in breast cancer. Bioinformatic analyses of breast cancer microarray datasets published by van't Veer et al. [21] revealed that PRSS23 is one of the most highly expressed proteases linked to ERα expression. Histopathological assays and surveys of cancer cell lines further confirmed that PRSS23 expression was significantly increased in ERα-positive breast cancers and that PRSS23 expression was upregulated by ERα-mediated transcriptional regulation. We also investigated the functional role of PRSS23 and found that PRSS23 may regulate DNA replication during cancer cell proliferation, which highlights PRSS23's potential as a novel target for breast cancer therapy.
Proteases are known to play diverse roles in physiology and pathology. Thus, it would not be surprising if some proteases participated in estrogen-dependent breast tumor cell growth, differentiation, and progression. For instance, cathepsin D (CTSD), an estrogen-inducible lysosomal protease identified in breast cancer, is considered to be a critical factor in mediating apoptosis of cancer cells, neurodegeneration, and developmental regression. Accumulating studies have provided evidence that the protein level of CTSD is an independent biomarker for better prognostic outcome in various cancers [28,29,30,31,32,33]. In addition, the results reported in the present study suggest that PRSS23 expression is upregulated by estrogen-activated ERα in MCF-7 cells. Therefore, it is plausible to hypothesize that the protein level of PRSS23 might also serve as an independent prognostic factor for breast cancer. Due to limited case numbers, we were not able to resolve underlying differences in PRSS23 and ERα expression across the various subtypes that could help to stratify breast cancers with distinct prognostic outcomes; however, we were able to validate the association between ERα status and high PRSS23 expression with statistical confidence. Thus, when a sufficient number of breast cancer cases are available, further investigation should be undertaken to explore the importance of PRSS23 in breast cancer patients with different ERα status and adjuvant chemotherapy.
Estrogen can stimulate the transactivity of ERα to upregulate downstream gene expression either through direct binding to the ERE in target genes or through coregulation with other transcription factors [34,35]. Thus, it is interesting to determine which route is involved in the regulation of PRSS23 expression. Our results from luciferase reporter assays indicate that E2 stimulates PRSS23 expression in MCF-7 cells through the upstream promoter region −2029 to −342 bp. In addition, the ChIP assays showed that E2 upregulates PRSS23 promoter activity by activating ERα. Interestingly, previous studies have revealed that the DNA binding domain of ERα is dispensable for ERα-mediated upregulation of PRSS23 gene expression in MCF-7 cells in the presence of E2 [4]. According to our findings in the promoter activity assay and ChIP assay, the promoter activity of the PRSS23 gene induced by E2 treatment is statistically significant (p < 0.05) but not as striking as that of canonical estrogen-induced genes, such as pS2 and CTSD. Our results therefore imply that PRSS23 expression is upregulated by ERα not only through the genomic pathway but also through other, non-genomic pathways, which shall be investigated in future studies. At the least, these results suggest that, in the genomic pathway, ERα may upregulate PRSS23 expression by interacting with other transcription factors at −2029 to −342 bp in the promoter region instead of through the hypothetical ERE [23].
The anti-PRSS23 staining pattern in the immunohistochemical studies of the patient specimens revealed that PRSS23 is found in the cell nuclei of breast cancer cells and in normal stromal and endothelial cells of peripheral tissues. The nuclear localization of PRSS23 has been confirmed by subcellular fractionation studies (unpublished data). Interestingly, another group used yeast two-hybrid screening to show that PRSS23 might interact with NCAPD3 (non-SMC condensin II complex subunit D3), which has been shown to play a significant role in mediating chromosome condensation, segregation, and DNA repair from S phase to prophase of the cell cycle [36,37,38]. Based on these findings, we hypothesized that PRSS23 might be involved in estrogen-driven mechanisms that mediate chromosome replication in ERα-positive breast cancer cells. Although further investigation is needed to resolve the detailed molecular mechanisms and interactions involved, we propose that PRSS23 participates in the regulation of breast cancer proliferation.
In conclusion, the present study demonstrated the close relationship between PRSS23 and estrogen/ERα signaling in breast cancer, which might serve as the basis for developing PRSS23 into a novel prognostic or therapeutic target for breast cancer.
Ethics statement
All human specimens were encoded to protect patient confidentiality and processed under protocols approved by the Institutional Review Board of Human Subjects Research Ethics Committee of Mackay Memorial Hospital, Taipei City, Taiwan and local law regulation. Breast cancer tissues along with their [39].
For transfections, plasmids were delivered with jetPRIME transfection reagent (PolyPlus, Yvelines, France) according to the manufacturer's instructions. The RNAi knockdown system was adopted from the pGIPZ vector-based lentivirus system (Open Biosystems, Huntsville, AL, USA), and the PRSS23 RNAi sequence is 5′-ACCCAGATTTGCTATTGGATTA-3′. The transfection and transduction procedures followed the manufacturer's instructions.
In estrogen treatment experiments, cultured cells were incubated in phenol-red-free RPMI1640 medium (Cassion Laboratories) with 10% dextran-coated charcoal-stripped fetal bovine serum (CDS-FBS), which was prepared with dextran-coated activated charcoal (Sigma-Aldrich) according to the manufacturer's instructions. 17β-estradiol (E2) and tamoxifen (Tam) were purchased from Sigma-Aldrich Corporation.
RNA isolation, cDNA synthesis and gene expression quantitation
Total RNA was isolated using TRIzol reagent (Invitrogen) according to the manufacturer's instructions. cDNA was synthesized using a SuperScript III reverse transcriptase kit (Invitrogen) following the manufacturer's instructions. Quantitative real-time polymerase chain reaction (qRT-PCR) was carried out with SYBR green PCR master mix (Applied Biosystems, Carlsbad, CA, USA) using an ABI Prism 7500 sequence detector (Applied Biosystems) following the manufacturer's instructions. RPLP0 served as the control for normalization [40]. The sequences of the primer pairs are shown in Table S2.
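The text specifies only that qRT-PCR results were normalized to RPLP0. The sketch below assumes the common 2^(−ΔΔCt) relative quantification and uses hypothetical Ct values purely for illustration; it is not the exact calculation performed in the study.

```python
# Illustrative relative-expression calculation normalized to RPLP0, assuming
# the common 2^-ddCt approach; all Ct values below are hypothetical.
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Return the fold change of a target gene versus a control condition."""
    d_ct_treated = ct_target - ct_ref            # normalize treated sample to RPLP0
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # normalize control sample to RPLP0
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values: PRSS23 after E2 treatment vs. vehicle control.
fold = relative_expression(ct_target=22.1, ct_ref=18.0,
                           ct_target_ctrl=25.4, ct_ref_ctrl=18.1)
print(f"PRSS23 fold change vs. vehicle: {fold:.1f}")
```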
Cloning and site-directed mutagenesis
The open-reading frame of ESR1 (Addgene plasmid 11351 [41]) was subcloned into the pIRES-ZsGreen vector (Clontech, Mountain View, CA, USA). The open-reading frame of PRSS23 was amplified by high-fidelity PCR (primers are listed in Table S1) and cloned into the pIRES-ZsGreen1 vector (Clontech).
DNA fragments of the promoter region containing the distal part of exon 1 (−2914 to 97 bp and −391 to 97 bp) were separately amplified by high-fidelity PCR of EcoRV-digested genomic DNA from human placenta tissue (primers are listed in Table S3). DNA sequence analyses verified that the sequences were identical to those published in the Entrez Genome Database, NCBI. The DNA sequences containing the PRSS23 promoter were ligated into the pGL3-basic vector (Promega, Madison, WI, USA). There are two unique type-II restriction enzyme cutting sites in the DNA fragment of the promoter: NdeI and PstI. The plasmid pGL3-basic-PRSS23 promoter (−2914 to 97 bp) was separately digested with NheI and NdeI, and with NheI and PstI (New England BioLabs, Ipswich, MA, USA), to generate the other two constructs of the PRSS23 promoter (i.e., −2029 to 97 bp and −1261 to 97 bp, respectively).
Promoter luciferase reporter assay
For the luciferase reporter assay, 5×10⁴ cells were cotransfected with the pCMV-Luc vector (Clontech) and pGL3-basic PRSS23 promoter constructs in 24-well plates. After overnight incubation, cells were subcultured in 96-well plates (~1×10⁴ per well) and treated with E2 for 16 hours. Luciferase activity was evaluated using the Dual-Luciferase Reporter Assay kit (Promega) and the VICTOR 3 multilabel plate reader (PerkinElmer, Waltham, MA, USA).
Membrane immunoblot
Immunoblotting has been described in previous studies [44]. The primary antibodies used in the present study were anti-ERα (clone: F-10), anti-GAPDH (Santa Cruz Biotechnology) and the anti-human PRSS23 antibody. The intensities of protein bands in photographs were evaluated with ImageJ software.
Immunohistochemistry
The histological subtype of each tumor was determined after surgery. The malignancy of infiltrating carcinomas was determined according to the Scarff-Bloom-Richardson classification [45]. The staining procedures followed Li et al. [46], and images were captured with a TE-2000-E microscope equipped with a Nikon D50 digital camera (Nikon, Tokyo, Japan). The intensity of PRSS23 expression in sections was scored following the guidelines of the Allred scoring system [22]. Total Allred scores of samples were analyzed with Fisher's exact test to assess differences between the pathological parameters. Classification of HER2 amplification in breast cancer was performed according to Ellis et al. (2005) [47].
Soft-agar colony formation assay
We performed soft agar colony formation assays using low melting temperature agarose (Sigma-Aldrich), as previously described [48]. Images were captured randomly with a TE-2000 inverted microscope equipped with a Nikon D50 digital camera (Nikon). Tumor size was measured as the diameter. The mean tumor sizes of the different experiments were normalized to that of the control group.
Flow cytometry
The examined cells were harvested with 0.05% trypsin-EDTA solution (Invitrogen). After being washed three times with ice-cold 1X PBS, the cells were fixed with ice-cold 75% ethanol at 4 °C for 1 h. The cells were stained in a 1X PBS solution containing 6.7 mM propidium iodide and 0.1 mg/ml RNase A (Invitrogen) at 37 °C for 30 min, and then analyzed on a FACSCalibur (BD, Bedford, MA, USA).
Statistics and data analysis
Microarray data of breast cancer patients were managed in MySQL, and clustering and organization of gene expression were processed with the Cluster software from the Eisen lab [49]. The self-organizing map was produced with TreeView software. Descriptive statistics of the experimental data were analyzed with Student's t test, the Mann-Whitney U test, and Fisher's exact test in the R statistical program.
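As an illustration of the Fisher's exact test mentioned above (e.g., relating the high/low PRSS23 Allred-score cohorts to ERα status), a minimal sketch in Python is given below; the contingency counts are hypothetical placeholders, not the actual numbers behind the reported p = 0.005.

```python
# Minimal sketch of a Fisher's exact test on a 2x2 contingency table relating
# PRSS23 expression (high vs. low Allred score) to ERa status.
from scipy.stats import fisher_exact

# Rows: ERa-positive, ERa-negative; columns: high PRSS23, low PRSS23 (hypothetical counts).
contingency = [[24, 8],
               [5, 19]]

odds_ratio, p_value = fisher_exact(contingency, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```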
Eutrophication, Research and Management History of the Shallow Ypacaraí Lake (Paraguay)
Ypacaraí Lake is the most renowned lake in landlocked Paraguay and a major source of drinking and irrigation water for neighbouring towns. Beyond its socioeconomic and cultural significance, it has great ecological importance, supporting a rich biodiversity. Rapid growth of human presence and activities within its basin has led to its environmental degradation, a heartfelt matter of high political concern that compels intervention. Here, by reconstructing the history of scientific and management-oriented research on this system, we provide a comprehensive assessment of current knowledge and practice to which we contribute our recent, novel findings. An upward trend in total phosphorus concentration confirms ongoing eutrophication of an already eutrophic system, evidenced by consistently high values of trophic state indices. Downward trends in water transparency and chlorophyll-a concentration support the hypothesis that primary production in this lake is fundamentally light limited. Statistical and other analyses suggest high sensitivity of the system to hydraulic, hydro-morphological and hydro-meteorological alterations arising, respectively, from engineering interventions, land use and climate change. By discussing knowledge gaps, opportunities for research and challenges for management and restoration, we argue that this case is of high scientific value and that its study can advance theoretical understanding of shallow subtropical lakes.
(c) principal component analyses and simple linear regressions; and (d) time series reconstruction of trophic state indices and nutrient limitation assessment.
These results allowed us to identify and assess the main factors affecting primary production in this lake, as well as both specific and general knowledge gaps that relate to its complex hydromorphological and hydro-ecological conditions, which we then discuss with reference to shallow lakes theory and the state of the art of subtropical limnology. We argue that these gaps, together with the many challenges for the lake's effective management and restoration and the currently favourable political context, configure an ideal scenario for scientific research that is worth exploring.
Location and Climate
Ypacaraí Lake is a major water body of the Salado River Basin. It is situated between latitudes 25.25 and 25.37° S and between longitudes 57.28 and 57.38° W in Eastern Paraguay, one of the two main natural regions of the country, separated from each other by the southward-flowing Paraguay River. The lake is located some 30 km to the east of the historical centre of Asunción, the capital of the country, which is also its largest and most populous city. Its metropolitan area, Greater Asunción, reaches into the basin of the lake (Figure 1). The entire basin lies within a region whose climate can be characterised as humid subtropical [38,39]. The monthly mean air temperature (Figure 2a) ranges from 17 to 27 °C, the warmer period being December-February and the colder period being June-August [40]. The monthly mean precipitation ranges from 54.0 mm in July to 174.2 mm in November (Figure 2b), with a mean annual precipitation of about 1300 mm [40].
The region is periodically subjected to the effects of El Niño-Southern Oscillation (ENSO) [41]. During the warm ENSO phase (El Niño), precipitation, in general, increases, and extreme rainfall events become more frequent [42,43], resulting in major floods of the country's fluvial systems [44]. During the cold ENSO phase (La Niña), opposite patterns are verified, with significant droughts affecting south-eastern Paraguay [45].
Hourly wind data for the town of San Bernardino, on the east shore of the lake, are available starting from 2014, recorded by a weather station located at the San Bernardino Nautical Club (CNSB). A recent study of these data determined that, consistent with regional patterns arising from synoptic-scale atmospheric circulation, prevailing winds in the area blow from the northeast ( Figure S1 in the Supplementary Materials) with a mean speed of 2.66 m·s −1 and a maximum recorded speed of 22.8 m·s −1 . Occasional winds blowing from the south are associated with cold fronts [46].
Morphology and Bathymetry
Due to important fluctuations of its water level, the gentle slopes of its floodplain and its potentially highly dynamic lakebed, the morphological characteristics of Ypacaraí Lake can vary greatly over time. In Table 1, we refer these characteristics to the mean lake level calculated for 2016-2017 [47,48] and the latest available bathymetric map (Figure 3), elaborated in 2017 after a survey conducted in 2014 [49]. Maximum length and width were calculated as in [50].

Table 1. Morphological characteristics of Ypacaraí Lake under mean lake level conditions (estimated for 2016-2017).

As can be observed in Figure 3, in the central area of the lake the bottom is relatively flat, with a slightly deeper part near its central-eastern shore. No reefs separate this central pelagic zone from the shallower northern, north-western, southern and south-western areas.
As can be observed in Figure 3, in the central area of the lake the bottom is relatively flat, with a slightly deeper part near its central-eastern shore. No reefs separate this central pelagic zone from the shallower northern, north-western, southern and south-western areas. The region is periodically subjected to the effects of El Niño-Southern Oscillation (ENSO) [41]. During the warm ENSO phase (El Niño), precipitation, in general, increases, and extreme rainfall events become more frequent [42,43], resulting in major floods of the country's fluvial systems [44]. During the cold ENSO phase (La Niña), opposite patterns are verified, with significant droughts affecting south-eastern Paraguay [45].
Hourly wind data for the town of San Bernardino, on the east shore of the lake, are available starting from 2014, recorded by a weather station located at the San Bernardino Nautical Club (CNSB). A recent study of these data determined that, consistent with regional patterns arising from synoptic-scale atmospheric circulation, prevailing winds in the area blow from the northeast ( Figure S1 in the Supplementary Materials) with a mean speed of 2.66 m·s −1 and a maximum recorded speed of 22.8 m·s −1 . Occasional winds blowing from the south are associated with cold fronts [46].
Morphology and Bathymetry
Due to important fluctuations of its water level, the gentle slopes of its floodplain and its potentially highly dynamic lakebed, the morphological characteristics of Ypacaraí Lake can vary Sustainability 2018, 10, 2426 6 of 31 greatly over time. In Table 1, we refer these characteristics to the mean lake level calculated for 2016-2017 [47,48] and the latest available bathymetric map (Figure 3), elaborated in 2017 after a survey conducted in 2014 [49]. Maximum length and width were calculated as in [50]. Table 1. Morphological characteristics of Ypacaraí Lake under mean lake level conditions (estimated for 2016-2017).
Characteristic Value Units
Gauging station 1
Geological and Hydrological Setting
Ypacaraí Lake is located in a graben of the same name, between two well-defined blocks that form the edges of the central segment of the Asunción Rift [51]. It has a SE-NW strike, oriented along the Pirayú-Salado fluvial valley, flanked in the east by the Altos Hill Range (Figure 4a), a formation of Ordovician origin ( Figure S2a in the Supplementary Materials) and in the west by lesser, sparsely distributed hills. The lake lies on a bed of Quaternary sediments deposited on top of Silurian sandstone in the SW and Precambrian granite in the NE ( Figure S2a) [52]. The topography of the basin determines five major sub-basins ( Figure 4a) whose surface areas we report in Table 2.
The Pirayú and Yukyry streams are the main inflows of Ypacaraí Lake. The Pirayú Stream discharges into the lake from its southernmost end, whereas the Yukyry Stream does so through a series of channels distributed along its northern shore, the most important of which we illustrate in Figure 4. Both streams are hydraulically connected to wetlands, especially and notably in the areas of their mouths (Figure 4b). Of less individual importance, in terms of discharge, are several smaller streams that flow into the lake from its east and west shores (Figure 4a).
The Salado River, a tributary of the Paraguay River, is the lake's only natural outflow. Its hydraulic conditions, which determine water level and lake retention time (mean value estimated to be of approximately 180 days [53,54]), are strongly influenced by the presence of both rooted and free-floating aquatic vegetation and have been substantially modified in recent years by major engineering interventions [55]. Vast marsh areas are also associated with this river (Figure 4b) and are hydraulically connected to those of the Yukyry Stream, adding to the complexity of the lake's hydrological system [53].
Water Quality
Except for vegetated transition areas, where clear water conditions can be observed (Figure 5a), the water of Ypacaraí Lake is visibly characterised by a brownish colour explained by high concentrations of coloured dissolved organic matter (CDOM) and an elevated turbidity related to wind-induced resuspension of sediments (mainly silt and clay) and the presence of suspended colloidal solids (Figure 5b) [56,57]. Despite the lake's polymictic regime, these light-attenuating conditions have been shown to result in considerable diurnal thermal stratification (Figure 6) [58,59].

In Paraguay, the lake is frequently referred to as the 'blue' Ypacaraí Lake, a colour it can indeed present at certain times, when reflecting the open sky (Figure 5c). However, increasing nutrient concentrations over the last decades have recently resulted in intense cyanobacterial blooms (e.g., February 2013) during which the lake can turn deep green (Figure 5d) and oxygen depletion leading to fish kills can occur [60,61].

In the past, phytoplankton blooms were hypothesised to be limited by phosphorus due to the presence of trivalent iron (Fe3+), detected in the water column as early as 1937, in colloidal forms (reported in [56]), as well as in the interstitial water of the sediments [57]. Microbiological parameters (e.g., faecal coliforms) are reportedly within acceptable limits [55], as are heavy metals, which have not been detected to date in high concentrations in the water [55,60,61] or in the sediments [52,55,60]. Ecotoxicological effects of the lake's water have, however, been reported [62] and may very well be related to the presence of pesticides, detected in 2001 in both the lake and its tributary streams [63]. Further information on water quality can be found in references [55,60,61].
Biodiversity
Ypacaraí Lake supports a rich diversity of fish that includes at least 75 species, with predominance of Siluriformes (catfish, 32 species) and Characiformes (characins, 24 species), followed by Perciformes (perch-like fish, 9 species) and Gymnotiformes (South American knifefish, 5 species), which occupy several trophic levels [64]. Although not dominant, the presence of the invasive Oreochromis niloticus (Nile tilapia) is to be noted [64]. A detailed list of fish species of the lake can be found in Table S2 in the Supplementary Materials.
Phytoplankton is less diverse (see Table S3a-e in the Supplementary Materials). Diatoms (notably, Aulacoseira granulata) generally dominate in winter [65]. Also, in colder months, blooms of chlorophytes (green algae), cryptophytes and euglenids have been documented, the latter two notably dominating in autumn. In June 2017, a bloom of the dinoflagellate Ceratium furcoides, which is invasive in South America, was also reported for the first time [66].
Cyanobacteria include at least 29 species belonging to 15 genera (Table S3d), all of which are potential toxin producers [65]. Notable examples are species of the genera Microcystis (e.g., Microcystis aeruginosa, Microcystis wesenbergii and Microcystis protocystis) and Anabaena (e.g., Anabaena spiroides and Anabaena circinalis), and the invasive Cylindrospermopsis raciborskii. The latter, first detected in the country in 2005, in the Paraná and Paraguay rivers (in low concentrations of less than 2000 cells·mL −1 ), began blooming in the lake, during warmer months, starting from October 2012 [67]. During these blooms, the species represents between 76.1% and 98.8% of phytoplankton samples. A maximum concentration of 833,948 cells·mL −1 was recorded in March 2014 near the Salado River [60]. Cylindrospermopsis raciborskii blooms have since been observed to be consistently followed by blooms of Microcystis aeruginosa, the cyanobacterium that previously dominated the lake during summer [65].
Benthic macroinvertebrates have reportedly disappeared from the sediments of the lake following these recent dramatic cyanobacterial blooms. First, chironomids, which in 2012 accounted for 40% to 80% of benthos samples, were replaced by Hirudinea (leeches) and Hydracarina (water mites) species, which eventually disappeared as well. Since February 2015, benthic macroinvertebrates have no longer been found in the sediments of Ypacaraí Lake [61,67].
Meiofauna reported in the water bodies of the basin comprise several species of annelids (oligochaetes and leeches), arthropods (crustaceans, insects and mites) and molluscs [70]. Among the latter, it is important to mention the invasive species Limnoperna fortunei (golden mussel), native to China, that has invaded Paraguayan waters, of which Ypacaraí Lake is no exception, starting, at least, from the late 1990s [71].
As for vegetation, different soil types and their distribution at the basin scale ( Figure S2b in the Supplementary Materials) configure a patchwork of three main formation types: forests, savannas and marshes. Their historical evolution has been well described in works dating back to the early 1980s [72][73][74][75]. Despite all three types of formations being in receding and/or deteriorating states, their study has been notably discontinued in the last decades [76].
Aquatic vegetation includes perennials, such as the free-floating Eichornia crassipes and Pistia stratiotes, floating-leaved Eichornia azurea and Victoria cruziana and emergent Thalia geniculata, Typha domingensis, Cyperus giganteus, Schoenoplectus californicus, Ludwigia peploides and Ludwigia bonariensis. Some of these species are recognised to be invasive outside of their native ranges. In the less deteriorated areas, submerged macrophytes, such as the freely suspended Utricularia foliosa and the rooted Cabomba caroliniana can still be found [76].
As a final note on the ecological value of Ypacaraí Lake, in particular, and the Salado River Basin in general, we can mention a wide variety of birds, with at least 83 species being present, some of which are migratory [70,77]. Other landmark animal species are the capybara (Hydrochoerus hydrochaeris), the largest living rodent in the world, and the broad-snouted caiman or yacaré (Caiman latirostris) [70].
Socioeconomic Context and Anthropic Impacts
The ongoing expansion of the Metropolitan Area of Asunción (Figure 7) implies an exponential increase in the number of people living in the basin, which grew from 207,000 inhabitants in 1988 to 1,470,000 inhabitants in 2012 [55,78]. It is also important to consider the floating population of tourist destinations, such as the town of San Bernardino (see Figure S2c), a town of about 10,000 permanent residents on the east shore of the lake, whose actual population can grow up to 100,000 inhabitants in summer months [55], numbers that are only expected to increase in the future. During this high season, recreational activities, such as bathing, swimming and angling, as well as water sports (e.g., jet skiing) and motorboat sightseeing excursions are likely having an underestimated impact [79].

A range of factors contribute to most domestic and industrial wastewaters being either disposed of through septic tanks or discharged without any treatment into the many streams of the basin. The first results in the pollution of the subjacent aquifers, especially Aquifer Patiño (Figure S3 in the Supplementary Materials), whereas the second results in the pollution of surface waters, including Ypacaraí Lake. These factors include: (a) increasing population densities resulting from progressive unplanned urbanisation, especially of the Yukyry sub-basin; (b) a high administrative fragmentation (21 municipalities in the basin, as shown in Figure S2c); (c) the institutional complexity that characterises the country (many institutions with overlapping responsibilities) [55]; (d) insufficient infrastructure to provide adequate sanitation (less than 5% of wastewaters are treated [55]); and (e) the lack of enforcement of environmental regulations.
Untreated wastewaters of domestic and industrial origin have been recently estimated to account for about 57% of nutrients that would presently be entering surface waters in the basin [55]. Mostly concentrated in the Pirayú sub-basin ( Figure S2d), livestock farming is estimated to account for 33% of nitrogen and 37% of phosphorus loads reaching the streams. Accounting for the remaining part would be the nutrient compounds present in fertilisers used in crop fields, also concentrated in the Pirayú sub-basin, a notable exception being strawberry cultivation in Areguá ( Figure S2c), and diffuse sources, reportedly reaching surface waters in progressively higher proportions following the heavy deforestation that occurred in the basin between 1975 and 1985 [54].
In total, nutrient loads effectively reaching the lake after natural depuration of its inflows have been estimated to be 475 tons per year of nitrogen, and 45 tons per year of phosphorus, with untreated domestic wastewaters respectively accounting for 40% and 24% of these loads. Livestock farming would be accounting for 28% of the P-load and 56% of the N-load [81].
In this respect, it is important to mention the depurative role of wetland areas, which is also briefly described in Communication S2 (Supplementary Materials). Reportedly, 26% of these areas were lost between 2005 and 2015 [55] as a consequence of urban expansion and, more recently, the construction, north of the lake, of a highway connecting the city of Luque to the town of San Bernardino [55]. As part of this project, four new concrete bridges were built over the main channel and three anabranches of the Salado River, a major engineering intervention whose impacts on the hydraulic regime of the system and, consequently, on the lake's retention time, are yet to be assessed.
Previous Studies and Projects, and Datasets
Multiple documents resulting from previous studies and projects, including research articles and technical reports, most of which are only available in Spanish, have been collected and reviewed to reconstruct the history of scientific and management-oriented research on this lake. A list is provided, for further reference, in Table S5 (Supplementary Materials).
This review led to the identification of several datasets from both concluded and ongoing measurement campaigns. The more complete and robust of these datasets, in terms of number of variables that were measured and number of measurements available for each variable, were then selected for the statistical and other analyses reported herein (Table 3).
In all cases, samples/measurements were collected/taken near the water surface. The sampling/measurement frequencies of the Japan International Cooperation Agency (JICA), Multidisciplinary Centre for Technological Investigations (CEMIT) of the National University of Asunción (UNA) I and CEMIT-UNA II datasets were variable and are illustrated in Table S6a-c (Supplementary Materials). Lake stations corresponding to these datasets are illustrated in Figure S4 (Supplementary Materials). Limnological variables are measured by Itaipú sensors (CIH-Itaipú dataset) at a 20-min temporal resolution at several points of interest of the Salado River Basin. Data from three of these stations, the ones located inside the lake ( Figure S4), were used in this study. The National Navigation and Ports Administration of Paraguay (ANNP) records the lake level (CNSB gauging station) on a daily basis, and the DMH-DINAC measures meteorological variables (AISP weather station) every 10 min.
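Because these records are stored at different temporal resolutions (20-min limnological sensor data, daily lake levels and 10-min meteorological data), some aggregation is needed before they can be combined. A minimal sketch of such an aggregation with pandas, using synthetic series, is shown below; it is an illustration, not the exact processing applied in this study.

```python
# Sketch of aggregating high-resolution sensor records to daily means so they
# can be aligned with the daily lake level record; the series are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
idx_20min = pd.date_range("2016-09-01", "2016-09-30", freq="20min")
tw = pd.Series(24 + 2 * rng.standard_normal(idx_20min.size), index=idx_20min, name="Tw")

idx_daily = pd.date_range("2016-09-01", "2016-09-30", freq="D")
h = pd.Series(1.8 + 0.05 * rng.standard_normal(idx_daily.size), index=idx_daily, name="h")

daily = pd.concat([tw.resample("D").mean(), h], axis=1)  # align on daily time steps
print(daily.head())
```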
Time Series Assembly and Trend Analyses
As a first approximation to assess major limnological changes that might have occurred between the late 1980s (1988)(1989) and the last few years (2012-2017), arithmetic means and standard deviations were calculated for each period/dataset for eight selected limnological variables: water temperature (Tw), lake level (h), Secchi depth (SD), and the concentrations of suspended solids (SS), total phosphorus (TP), total nitrogen (TN), dissolved oxygen (DO) and chlorophyll-a (Chl-a). When data from multiple lake sampling/monitoring stations were available for a given time, these were first spatially averaged to account for overall lake conditions. For all eight variables, time series were assembled from multiple datasets, provided they were directly comparable (same or analogous/equivalent variables and measurement instruments/methods). Nonparametric Mann-Kendall (MK) tests [82] at 95% confidence levels were then performed to assess whether these variables presented significant upward or downward monotonic trends. For this, the Theil-Sen estimator (Sen's slope, S) [83] was used to robustly fit lines to the data, and the Kendall rank correlation coefficient (Kendall's τ) [84] was calculated to determine whether these trends were positive or negative.
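A minimal sketch of such a trend analysis in Python is given below; Kendall's τ computed against time is equivalent to the Mann-Kendall test for a monotonic trend, and the series used here is synthetic rather than the actual lake data.

```python
# Sketch of the trend analysis: Kendall's tau of a variable against time
# (equivalent to the Mann-Kendall trend test) together with the Theil-Sen slope.
import numpy as np
from scipy.stats import kendalltau, theilslopes

rng = np.random.default_rng(0)
t = np.arange(60)                              # e.g., consecutive observation times
tp = 40 + 0.3 * t + rng.normal(0, 5, t.size)   # synthetic total phosphorus series

tau, p_value = kendalltau(t, tp)
slope, intercept, lo, hi = theilslopes(tp, t)

trend = "significant" if p_value < 0.05 else "not significant"
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.3f}, {trend} at the 95% confidence level)")
print(f"Sen's slope = {slope:.2f} per time step (95% CI: {lo:.2f} to {hi:.2f})")
```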
Variables in DS1 included: Tw, h, SD, SS, TP, TN, NO3−, Chl-a, DO, pH, electrical conductivity (EC) and daily cumulative rainfall (CumRain), which was calculated from high-resolution (10 min) precipitation data. Variables in DS2 were the same, with the addition of water turbidity (Turb), which, in the case of this dataset, was used instead of SS in this and the following analyses.
Principal Component Analyses and Simple Linear Regressions
Principal component analyses (PCAs) were performed on the same two composite datasets, DS1 and DS2, to assess which limnological variables were most influential in determining the overall conditions of the lake at any given time (i.e., those most strongly correlated with the two main principal components, explaining together most of the variance in each dataset). Points were classified as wet or dry depending on whether a cumulative rainfall of more than 30 mm was recorded or not within 24 h prior to the sampling/measurement time.
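A minimal sketch of such a PCA on standardized variables, with random placeholder data standing in for the DS1/DS2 composite datasets, could look as follows (variable names follow the abbreviations used in the text).

```python
# Minimal sketch of a PCA on standardized limnological variables.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

variables = ["Tw", "h", "SD", "SS", "TP", "TN", "Chl-a", "DO", "pH", "EC"]
rng = np.random.default_rng(1)
X = rng.normal(size=(80, len(variables)))      # placeholder observations

X_std = StandardScaler().fit_transform(X)      # variables have different units, so standardize first
pca = PCA(n_components=2).fit(X_std)

print("explained variance ratio:", pca.explained_variance_ratio_)
for name, pc1, pc2 in zip(variables, pca.components_[0], pca.components_[1]):
    print(f"{name:>5}: loading on PC1 = {pc1:+.2f}, PC2 = {pc2:+.2f}")
```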
Additionally, to provide information on the relative importance of evaporation in the water balance of the lake, two-week evaporation volumes (Ev), in mm, were estimated for a thirteen-month period (1 September 2016 to 30 September 2017) using a simplified Dalton-type approach [85,86]. Simple linear regressions were then performed between electrical conductivity (EC) and lake level (h), and between EC and Ev.
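For the simple linear regressions (e.g., EC against lake level), a minimal sketch with synthetic paired values is shown below; the slope sign and magnitudes are placeholders, not results of the study.

```python
# Sketch of a simple linear regression between electrical conductivity and lake level.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(2)
h = rng.uniform(1.0, 2.5, 30)                       # lake level (m), hypothetical
ec = 220 - 40 * h + rng.normal(0, 8, h.size)        # conductivity (uS/cm), hypothetical

result = linregress(h, ec)
print(f"EC = {result.slope:.1f} * h + {result.intercept:.1f}")
print(f"r = {result.rvalue:.2f}, p = {result.pvalue:.3g}")
```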
Time Series Reconstruction of Trophic State Indices and Nutrient Limitation Assessment
Time series of trophic state indices (TSI) were reconstructed for the 1988-2017 period, starting from SD, Chl-a, TP and TN data (JICA, CEMIT-UNA I and CEMIT-UNA II datasets). For the first three indices, modified versions (MTSI) of the original formulations by Carlson [87] have been proposed for tropical and subtropical lakes [88][89][90][91], motivated by the typical differences these systems present with respect to temperate ones, such as generally higher turbidity (and consequently, generally lower Secchi depths) throughout the entire year [92].
Finally, to assess the degree of nitrogen over phosphorus limitation over the last thirty years, we reconstructed the time series of (a) the difference between the corresponding trophic state indices, TSI (TN)-TSI (TP); and (b) the TN to TP ratio. As done for the other time series, MK tests at 95% confidence level were performed on these two.
In the case of difference (a), consistently negative values of TSI (TN) − TSI (TP) indicate nitrogen limitation, while consistently positive values indicate phosphorus limitation. Values oscillating around zero indicate co-limitation by both nutrients. Note that, for methodological consistency, for this analysis we recalculated the trophic state index for total phosphorus as in reference [94]. In the case of ratio (b), we compared TN:TP values to the mass-based Redfield ratio (7.2:1, as in reference [95]), corresponding to the classical atomic Redfield ratio of 16:1 [96], above which limitation is given by phosphorus and below which limitation is given by nitrogen.
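A sketch of this assessment is given below; the Carlson-type index formulas used here are the classical ones, assumed purely for illustration (with TP in µg/L and TN in mg/L), whereas the study applies the modified (sub)tropical formulations and the TP index of reference [94].

```python
# Sketch of the nutrient-limitation assessment via TSI(TN) - TSI(TP) and the
# mass-based TN:TP ratio; index coefficients and concentrations are assumptions.
import numpy as np

def tsi_tp(tp_ugL):
    return 14.42 * np.log(tp_ugL) + 4.15      # classical Carlson-type TP index (assumed)

def tsi_tn(tn_mgL):
    return 54.45 + 14.43 * np.log(tn_mgL)     # classical TN index (assumed)

tp = 120.0    # total phosphorus, ug/L (hypothetical)
tn = 1.1      # total nitrogen, mg/L (hypothetical)

diff = tsi_tn(tn) - tsi_tp(tp)
ratio = (tn * 1000) / tp                      # mass-based TN:TP ratio

limitation_diff = "N-limited" if diff < 0 else "P-limited"
limitation_ratio = "N-limited" if ratio < 7.2 else "P-limited"
print(f"TSI(TN) - TSI(TP) = {diff:+.1f} -> {limitation_diff}")
print(f"TN:TP (mass) = {ratio:.1f} -> {limitation_ratio}")
```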
Scientific and Management-Oriented Research History
The history of scientific and management-oriented research on this lake has been summarised and is presented in Appendix A. A timeline of major studies and projects, reflecting this history, is also included for illustrative purposes (Figure 8).
Figure 8. Timeline of studies and projects reflecting the history of scientific and management-oriented research on Ypacaraí Lake.

Time Series Assembly and Trend Analyses

Time series of eight selected limnological variables were assembled with data from multiple datasets covering two selected periods: 1988-1989 (JICA and ANNP) and 2012-2017 (CEMIT-UNA I, CEMIT-UNA II, ANNP and CIH-Itaipú) (Figure 9a-h). Calculated arithmetic means and standard deviations of all eight variables for the two selected periods, as well as trend statistics (Kendall's τ and Sen's slope, S) that were found to be significant (at a 95% confidence level), are shown in Table 4.
Time Series Assembly and Trend Analyses
Time series of eight selected limnological variables were assembled with data from multiple datasets covering two selected periods: 1988-1989 (JICA and ANNP) and 2012-2017 (CEMIT-UNA I, CEMIT-UNA II, ANNP and CIH-Itaipú) (Figure 9a-h). Calculated arithmetic means and standard deviations of all eight variables for the two selected periods as well as trend statistics (Kendall's τ and Sen's slope, S) that were found to be significant (at a 95% confidence level) are shown in Table 4.
Time Series Assembly and Trend Analyses
Time series of eight selected limnological variables were assembled with data from multiple datasets covering two selected periods: 1988-1989 (JICA and ANNP) and 2012-2017 (CEMIT-UNA I, CEMIT-UNA II, ANNP and CIH-Itaipú) (Figure 9a-h). Calculated arithmetic means and standard deviations of all eight variables for the two selected periods as well as trend statistics (Kendall's τ and Sen's slope, S) that were found to be significant (at a 95% confidence level) are shown in Table 4. Significant downward trends in lake level, Secchi depth, dissolved oxygen and chlorophyll-a were found as well as a significant upward trend in total phosphorus, all of which were also evidenced by noticeable changes in the arithmetic means calculated for each period. Although no significant trend was identified for the concentration of suspended solids, its arithmetic mean almost doubled and its variability greatly increased between 1988-1989 and 2014-2017.
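For readers wishing to reproduce such trend statistics, a minimal sketch of a Mann-Kendall-type test (via Kendall's τ against time) paired with the Sen slope estimator is given below; the exact implementation and any tie or autocorrelation corrections used for Table 4 are not specified in the text, and the input series here is hypothetical.

```python
import numpy as np
from scipy.stats import kendalltau

def mann_kendall_sen(t, x, alpha=0.05):
    """Kendall's tau against time plus Sen's slope, the median of all
    pairwise slopes (x_j - x_i)/(t_j - t_i)."""
    tau, p = kendalltau(t, x)
    slopes = [(x[j] - x[i]) / (t[j] - t[i])
              for i in range(len(x)) for j in range(i + 1, len(x))
              if t[j] != t[i]]
    return tau, p, np.median(slopes), p < alpha

# Hypothetical annual means of a declining variable (e.g. Secchi depth, m)
years = np.arange(1988, 2018)
sd = 0.5 - 0.01 * (years - 1988) + np.random.default_rng(0).normal(0, 0.03, 30)
tau, p, sen, significant = mann_kendall_sen(years, sd)
print(f"tau = {tau:.3f}, p = {p:.4f}, Sen slope = {sen:.4f} m/yr, significant: {significant}")
```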
Pairwise Pearson Correlation Analyses

Pairwise Pearson correlation coefficients between the twelve limnological variables of interest were calculated for two different periods, 1988-1989 (Table 5) and 2014-2017 (Table 6). For the first period, a significant positive correlation (95% confidence level) was found between water temperature and electrical conductivity, as well as between chlorophyll-a and pH. Very significant and strong negative correlations (99% confidence level) were found between (a) lake level and electrical conductivity; (b) Secchi depth and suspended solids; (c) Secchi depth and total phosphorus; and (d) total phosphorus and dissolved oxygen.
For the second period, weak but significant positive correlations were found between (a) turbidity and nitrates; (b) total nitrogen and chlorophyll-a; (c) nitrates and daily cumulative rainfall; and (d) electrical conductivity and daily cumulative rainfall. Weak correlations were also found between Secchi depth and pH (positive, very significant), and between Secchi depth and turbidity (negative, significant). As for the first period, a very significant strong negative correlation was found between lake level and electrical conductivity.
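A minimal sketch of how such a pairwise correlation table can be produced, with 95% and 99% significance flags analogous to those of Tables 5 and 6, is shown below; the data are hypothetical.

```python
import numpy as np
from scipy.stats import pearsonr

def correlation_table(data: dict, alpha95=0.05, alpha99=0.01):
    """Pairwise Pearson r with two-sided p-values; flags 95%/99% significance."""
    names = list(data)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r, p = pearsonr(data[a], data[b])
            flag = "**" if p < alpha99 else ("*" if p < alpha95 else "")
            print(f"{a:>4s} vs {b:<4s}: r = {r:+.2f} (p = {p:.3f}) {flag}")

# Hypothetical monthly values for three of the twelve variables
rng = np.random.default_rng(1)
h = rng.uniform(0.5, 1.5, 24)                   # lake level, m
ec = 250.0 - 100.0 * h + rng.normal(0, 8, 24)   # EC anti-correlated with level
tw = rng.uniform(15, 30, 24)                    # water temperature, degC
correlation_table({"h": h, "EC": ec, "Tw": tw})
```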
Principal Component Analyses and Simple Linear Regressions
The results of the PCAs performed on the same composite datasets, DS1 (1988-1989) and DS2 (2014)(2015)(2016)(2017), are presented in the biplots of Figure 10. For the first period (1988)(1989), the first two principal components, PC1 and PC2, explained together 55.6% of the total variance in the composite dataset. PC1 separated points presenting high water transparency, represented by SD, as well as high DO and pH, from those presenting low values of these variables. It also separated points with low versus high SS, Tw, TP and TN values. In turn, PC2 separated points with low h and high EC from those presenting the opposite conditions. For the second period (2014-2017), the first two principal components, PC1 and PC2, explained together 47.6% of the total variance in the composite dataset. As for the first period, PC1 separated points presenting high SD, DO and pH values from those presenting low ones or, respectively and equivalently, points with low versus high turbidity and Tw, as SD decreases with increasing turbidity and oxygen solubility decreases with increasing water temperature. Again, PC2 separated points with low h and high EC from those presenting the opposite conditions.
These results led to further investigation in the form of two simple linear regressions: (a) between lake level (h; ANNP and CIH-Itaipú datasets) and the concentration of conductive ions, indicated by EC (CEMIT-UNA II dataset); and (b) between two-week cumulative evaporation estimates (Ev) and corresponding two-week means of EC data, which we present in Figure S5 (Supplementary Materials). The analysis revealed that a linear fit may be used to relate EC to h (r² = 0.67) or Ev (r² = 0.72), with root mean squared errors (RMSE) of 18.1 µS·cm−1 (11.7% of the mean value) and 20.8 µS·cm−1 (11.6% of the mean value), respectively.
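A minimal sketch of the PCA workflow behind such biplots is given below, assuming standardised variables and the scikit-learn library; the variable set and data are placeholders, not the actual DS1/DS2 contents.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical composite dataset: rows = sampling dates, columns = variables
rng = np.random.default_rng(2)
X = rng.normal(size=(40, 8))
variables = ["SD", "DO", "pH", "SS", "Tw", "TP", "TN", "EC"]

# Standardise first: the variables have very different units and ranges
Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=2).fit(Z)

print("variance explained by PC1+PC2:",
      f"{100 * pca.explained_variance_ratio_.sum():.1f}%")
for k, pc in enumerate(pca.components_, start=1):
    loadings = ", ".join(f"{v}:{w:+.2f}" for v, w in zip(variables, pc))
    print(f"PC{k} loadings: {loadings}")

scores = pca.transform(Z)  # coordinates of each sampling date in the biplot
```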
Time Series Reconstruction of Trophic State Indices and Nutrient Limitation Assessment

Reconstructed time series of modified trophic state indices (MTSI) based on SD, Chl-a and TP (Figure 11a-c), and of the trophic state index (TSI) based on TN (Figure 11d), characterise Ypacaraí Lake as eutrophic for the 1988-1989 period. For the 2014-2017 period, values of MTSI (SD) and MTSI (TP) fell within the hypereutrophic range for extended periods, as opposed to MTSI (Chl-a), whose values in this period fell within the mesotrophic range, and TSI (TN), whose values in this period remained in the eutrophic range, except for exceptional, brief periods of apparent mesotrophic or even oligotrophic conditions (in April 2014 and April 2015). A significant downward trend (τ = −0.414; S = −291; 95% confidence level) was found in the difference between TSI (TN) and TSI (TP), which presented negative values throughout both considered periods (1988-1989 and 2012-2017) (Figure 12a). This is indicative of increasingly limiting nitrogen concentrations in the lake, also evidenced by the significant downward trend (τ = −0.394; S = −277; 95% confidence level) in the TN to TP ratio. With few exceptions, starting from 2014, this ratio fell below the mass-based Redfield ratio, N:P = 7.2:1, corresponding to the classical atomic Redfield ratio of N:P = 16:1 (Figure 12b).
Factors Affecting Primary Production
As can be expected from the basin-scale changes in land use that have occurred over the last decades (increasing runoff from urban expansion, greater erosion from deforestation, and loss of erosion-controlling and pollution-mitigating wetland areas), the mean concentrations of total suspended solids have almost doubled in recent years (2014-2017) with respect to the late 1980s (from 23.2 to 40.2 mg·L−1). This is also reflected by a negative trend in water transparency, evidenced by a significantly lower mean Secchi depth of 0.222 m in 2014-2017 as opposed to 0.513 m in 1988-1989. Primary production in Ypacaraí Lake, hypothesised in the past to be limited by phosphorus due to the presence of P-binding Fe3+, both in the water column and in the interstitial water in the sediments, would thus be presently light limited. This could explain the lower mean chlorophyll-a concentration of recent years (8.79 µg·L−1 in 2014-2017, as opposed to 54.3 µg·L−1 in 1988-1989), despite the fact that the phosphorus concentration in the lake appears to have remained at high values, within a range that is characteristic of eutrophic and hypereutrophic lakes, at least since 1988. This is also evident from the reconstructed time series of MTSI (TP) (Figure 11c).

According to these results, should light penetration increase enough to allow for significant phytoplankton growth, the limitation would then be given by nitrogen, whose mean concentration has decreased over time from 1910 µg·L−1 in 1988-1989 to 1340 µg·L−1 in 2014-2017, rather than by phosphorus, whose mean concentration has more than doubled in this period, from 124 to 256 µg·L−1.
This could potentially constitute an unnatural, secondary nitrogen limitation arising from the phosphorus enrichment of the lake, an effect that has already been reported for other shallow subtropical lakes (e.g., Lake Okeechobee [94]). Even though, in this case, we were not able to demonstrate, by means of the difference TSI (TN)-TSI (TP), that Ypacaraí Lake is not naturally nitrogen limited, a trend towards increasing nitrogen limitation was identified (Figure 12a). Moreover, values of the TN to TP ratio have presented a significant negative trend over the last 30 years and, in 2012-2017, consistently fell below the mass-based Redfield ratio (Figure 12b), supporting the hypothesis of a rather recent transition from phosphorus to nitrogen limitation.
These conditions might explain the shift of the system towards Cylindrospermopsis raciborskii dominance (at the beginning of the warmer period) observed in the lake starting from 2012. In fact, these cyanobacteria can bloom not only in low-light conditions, but are also able to fix atmospheric nitrogen, a trait that allows them to grow in nitrogen-limited waters [97], as opposed to the non-diazotrophic Microcystis aeruginosa, which was markedly more dominant in previous years and is now managing to bloom only after Cylindrospermopsis raciborskii populations collapse [65]. Although this hypothesis might seem to be contradicted by the fact that nitrogen fixation is energy intensive [98] while light energy in this lake is quite limited due to elevated turbidity, it is supported by the fact that N-fixing heterocysts were indeed found in C. raciborskii specimens sampled between 2012 and 2014 [65]. The reason why C. raciborskii prevails over other diazotrophic cyanobacteria that are also present in the lake, such as Anabaena sp., remains to us, however, an open question.
Complex Hydro-Morphological and Hydro-Ecological Conditions
PCAs performed on the two selected datasets, DS1 (1988-1989) and DS2 (2014-2017), pointed out the high degree of complexity in the functioning of Ypacaraí Lake. When taken together, the first two principal components, PC1 and PC2, accounted for 55.6% and 47.6% of the total variances of DS1 and DS2, respectively. This strongly suggests it is not possible to significantly reduce the number of variables that one needs to consider if one wants to fully capture the dynamic behaviour of this lake (i.e., without losing significant information).
On one hand, PC1 seems to have distinguished among data with high values of SD, pH and DO, and low values of water turbidity (in this case largely attributable to SS), Tw (which nevertheless remained above 15 °C) and nutrient concentrations (which nevertheless remained consistently high, in the eutrophic and hypereutrophic ranges), and vice versa. A significant increase in photosynthetic activity during periods with high water transparency might explain the positive correlation between PC1 and SD, pH and DO, as photosynthesis consumes free carbon dioxide (CO2), elevating the pH by pushing the chemical equilibrium towards lower concentrations of hydrogen ions (H+), and produces molecular oxygen (O2), elevating DO. Once again, this supports the hypothesis that pelagic primary production in this lake is currently limited by light rather than by nutrients. Should this be the case, however, Chl-a concentrations would expectedly be highly correlated with PC1 too, which does not appear to be the case. This might be explained by the relatively lower water temperatures observed during periods of high transparency in these datasets, but longer time series of these variables are required to confirm this, as no periods of high transparency during spring and summer months were available at the time of this study.
On the other hand, PC2 seems to have distinguished between high values of electrical conductivity (EC) and low lake levels, the concentration of EC-contributing ions being higher during dry periods when evaporation becomes a significant component of the water balance. In this sense, a good positive linear correlation (r = 0.85; r² = 0.72) was found between EC and estimated evaporation volumes, a result that suggests it should be possible to accurately estimate one as a function of the other and vice versa. This result could turn out to be relevant, as it has been proposed that, in shallow lakes, EC might be used as an indicator of pollution [99], given its positive correlation with the concentrations of dissolved ions, such as nitrates, ammonium and ortho-phosphates, all of which are, in general, more difficult to measure.
It is notable that, with respect to lake level, the contribution of cumulative rain to PC2 seems to have shifted from 1988-1989 to 2014-2017, indicating that the water level in this lake is not only determined by the amount of precipitation in the basin. This has already been observed in previous studies, in which the flow carrying capacity of the Salado River, the outflow of the lake, was found to be fundamental for the regulation of the lake's water level. To fully assess this effect, however, hydraulic modelling studies are required, for which a bathymetric survey of the Salado River, presently not available, is warranted. These studies would also allow for a better understanding of how this river interacts with the Paraguay River, especially during major floods, and how recent interventions, such as the construction of the Luque-San Bernardino highway, which not only altered the morphology of the Salado River, but also reduced the extension of wetland areas north of the lake, have affected the overall functioning of the system.
Challenges for Restoration
It is unclear whether Ypacaraí Lake ever presented clear water conditions, so attempting to restore them might not be ecologically sound. Supposing the affirmative case, of the many strategies we could think of to push the lake back to the clear water state, food web manipulation would probably have, in the long term, very limited success, as this lake supports a very rich fish diversity that would likely quickly adapt.
Reducing the current phosphorus loading (estimated at 45 tons per year [81]) is clearly advisable. The current P-loading per unit area can be roughly estimated to be 0.75 g·m−2·year−1. This is well above critical P-loading estimates for turbidification found, through modelling studies, for many other shallow lakes [5]. It is also particularly close to the highest estimate of 0.78 g·m−2·year−1 found for the also shallow and subtropical Lake Taihu [100]. Specific threshold estimates for Ypacaraí Lake are presently not available and would require similar modelling studies to be conducted. In any case, should it be that primary production in this lake was indeed once phosphorus limited, management efforts aiming to limit further phosphorus enrichment would be necessary to eventually restore natural phosphorus limitation (or nitrogen-phosphorus co-limitation).
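As a back-of-envelope check, the stated areal loading follows from the 45 t·yr−1 estimate if one assumes a lake surface area of roughly 60 km² (the figure implied by the two numbers):

```latex
\frac{45\ \mathrm{t\,yr^{-1}}}{60\ \mathrm{km^2}}
  = \frac{4.5\times 10^{7}\ \mathrm{g\,yr^{-1}}}{6.0\times 10^{7}\ \mathrm{m^2}}
  \approx 0.75\ \mathrm{g\,m^{-2}\,yr^{-1}}
```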
Studies have shown, however, that even though reducing nutrient loading can lead to improvement of the ecological status of shallow subtropical lakes, the response may not be immediate due to internal loads being relatively more important in these systems than in their temperate counterparts [24]. This needs to be considered when estimating the expected recovery times of Ypacaraí Lake, as little is known about its internal P-load or the biochemical processes taking place in the sediments, which ultimately determine remineralisation rates.
Challenges for Management
Four main specific challenges for the management of Ypacaraí Lake can be identified: (a) the high administrative and political fragmentation of the basin; (b) the institutional setting, characterised by many actors whose responsibilities in relation to the management and monitoring of the lake and its system are, in many cases, either overlapping or not clearly assigned [55]; (c) the relevant legal framework at the national level, which is not always coherent, resulting in intrinsically difficult application [55], a challenge that is complemented by insufficient practical enforcement and control, which is typical of but not restricted to emerging economies; and (d) the segregation and lack of systematisation of data and information, and the need for regular pre-processing and quality control activities, without which, part of the collected data ends up being of limited use.
In relation to the latter, major improvements should be mentioned. In particular, the Itaipú Binational Entity has established a network of monitoring stations that continuously generate data made available to the public, in real time, through an online platform administered by the CIH; the importance of this network has been clearly highlighted by the use of its data in the present study. The knowledge base hereby developed also represents an important achievement for the future, sustainability-oriented management of Ypacaraí Lake, as it unifies data, outputs and conclusions from a variety of previous studies and integrates them with novel information. In this respect, it is important to say that, although a unified knowledge base does not necessarily readily translate into action [101], the socioeconomic and cultural importance of this lake in Paraguay sets an ideal political scenario for future interventions and further research.
Opportunities for Research
In addition to all previously mentioned research gaps that are specific to this shallow, subtropical lake, two key aspects present opportunities to push the state of the art of shallow lakes theory and subtropical limnology forward, both of which we illustrate, in red, in Figure 13: (a) its complex hydro-morphological and hydro-ecological dynamics in the context of a rapidly urbanising, densely-populated, humid subtropical region of a developing country that is subjected to increasingly frequent extreme rainfall events and that cyclically suffers the effects of ENSO; and (b) the reported disappearance, following the intense cyanobacterial blooms of 2012-2013, of tube-dwelling invertebrate communities [61,67].

With regard to the first aspect, we can say that, as reported for seven shallow, subtropical lakes in Florida [103], hydro-morphological factors play a fundamental role in determining ecological processes in Ypacaraí Lake. For instance, its increasingly higher water turbidity that would currently be limiting its primary productivity more than nutrients is largely related to increasing concentrations of suspended solids. Higher water levels during floods can therefore improve water transparency via the dilution of suspended solids in a larger volume of water, having the seemingly contradictory effect of favouring macrophytes over phytoplankton, the former being traditionally considered to be negatively affected by water depth. During droughts, lower water levels with highly concentrated suspended solids may actually worsen conditions for macrophytes. These effects, which have also been observed in shallow, subtropical Lake Apopka [104], are, however, not straightforward, as they are very much affected by the hydraulic response of the system (i.e., lake retention time) to hydro-meteorological factors, such as the intensity, duration and frequency of flood-producing extreme rainfall events, and the characteristics of the basin that determine its response in terms of erosion and run-off, which ultimately increase sediment input. As previously explained, the hydraulic regime, meteorological patterns and sediment dynamics are all changing in Ypacaraí Lake because of major engineering interventions, land use and climate change.
In relation to the second aspect, we can say there is now ample evidence that benthic tube-dwelling invertebrates, such as chironomids, can significantly alter multiple important ecosystem functions and thus play a central role in shallow lakes [102]. They compete with pelagic filter feeders for particulate organic matter, which can exert a high grazing pressure on phytoplankton, microorganisms, and perhaps small zooplankton, thus strengthening benthic-pelagic coupling. Furthermore, intermittent pumping by these invertebrates oxygenates the sediments, influencing microbe-mediated biogeochemical functions. Recent modelling studies provide strong evidence that they can even improve the resilience of a shallow lake that is in the clear water stable state, modifying the threshold for the shift between this and the turbid water stable state, their absence thus pushing the system further towards the latter [102].
The disappearance of these organisms from the sediments of Ypacaraí Lake, the cause of which remains unclear, might be explained by (a) oxygen shortages that, even though many invertebrates, including chironomids, are physiologically and behaviourally adapted to cope with, can lead to their elimination; or (b) cyanotoxins, toxic metabolites that cyanobacteria secrete and can cause acute and chronic effects, biochemical alterations and changes in the life cycle of aquatic invertebrates [105]. The first mechanism, on one hand, reportedly occurred in Kleiner Gollinsee, a shallow temperate lake in Germany that experienced a severe brownification-anoxia feedback event. This facilitated a persistent state of anoxia that occasionally extended to the water surface and resulted in the near-complete loss of macroinvertebrates [106]. The second mechanism, on the other hand, was observed in Lake Syczyńskie, a shallow temperate lake in Poland, where chironomid numbers in the sediments dwindled following high microcystin concentrations during hypertrophic periods [107].
Conclusions
Ypacaraí Lake has great ecological value, its system being characterised by a diverse matrix of aquatic habitats that includes numerous streams and vast marsh areas besides the lake itself which sustains a rich biodiversity, particularly of fish. It also constitutes a major source of drinking and irrigation water for neighbouring towns and cities, sustaining local economies largely based on agriculture, livestock farming, fishing, tourism and recreation. Its proximity to Asunción, the capital of Paraguay, and its cultural significance, make its environmental degradation a sensitive issue in the country that is regularly echoed by national media. This further heightens the awareness of the population about its ecological status, which quickly translates into political interest, especially during electoral periods, and continuously motivates scientific research aiming to shed light on its complex functioning in the hope of designing more effective management strategies. Over the years, many studies and projects have been conducted, both nationally and internationally, in collaboration with countries such as Japan, the United States, Italy and The Netherlands. In this article, we summarised the history of these initiatives and reviewed their many outputs.
Unfortunately, despite these efforts, eutrophication of the lake is ongoing, evidenced by an upward trend in total phosphorus that has resulted, in recent years, in intense cyanobacterial blooms of toxin-producing species, such as Cylindrospermopsis raciborskii and Microcystis aeruginosa. A downward trend in chlorophyll-a has, however, been observed, likely due to a significant downward trend in water transparency that is explained by increasing concentrations of suspended solids. This can be attributed to urbanisation, deforestation and the loss of wetland areas, which have altered the hydrological and sedimentological responses of the system to increasingly frequent extreme rainfall events and to ENSO-related periodic floods and droughts.
Phytoplankton, which was hypothesised in the past to be limited by phosphorus, would thus be presently limited by an unfavourable underwater light climate, and, moreover, it is now likely that, whenever nutrient limitation does occur, it is given by nitrogen rather than by phosphorus, a conclusion supported by significant trends towards increasing nitrogen limitation. A significant downward trend in lake level confirms the impact of recent, major hydrological and hydraulic alterations in the basin, a factor whose relevance is not minor, as PCAs results show the lake's overall conditions are highly correlated with water depth.
Finally, there are many unresolved questions regarding the very special hydro-morphological and hydro-ecological conditions of Ypacaraí Lake. Some of them are lake-specific, including the lake's critical nutrient thresholds, the role of its internal phosphorus load, the factors controlling nutrient release from the sediments, the complex interactions between the streams and associated wetland areas, and between the wetlands and the lake itself, to cite a few. Other aspects, however, might be of a more general interest, such as the recent disappearance, following intense cyanobacterial blooms, of the lake's benthic chironomids, key components of a shallow lake system that have recently been shown to improve the resilience of a lake presenting clear water conditions. To us, Ypacaraí Lake is thus no longer just an important element of Paraguayan culture, but also a scientific treasure in a context that is ideal for future research and that we believe can help push the state of the art of shallow lakes theory and subtropical lake science forward.

Appendix A. Scientific and Management-Oriented Research History

first signs of eutrophication of the lake. In the same period (1978), a report on its water quality was requested by SENASA Director Rogelio Aguadé from the Hydroconsult S.R.L. task force (C. López, F. J. Shade, J. F. Facetti-Masulli and R. López), which was followed by further reports in 1984, 1995 and 2005 (reported in [108]).
In 1988, Barbara Ritterbusch published a limnological study conducted at the Institute of Basic Sciences (ICB), now Faculty of Exact and Natural Sciences (FACEN) of the National University of Asunción (UNA) [70] by a team led by ICB Director Narciso González Romero, who had previously published some studies on the lake in 1967 [68] and 1986 [69]. For this study, several physical, chemical and biological parameters were monitored during the whole year of observation (1984), the analyses of which highlighted some crucial aspects of the lake's functioning. Specifically, the lake was found to be characterised by continuous external input, sedimentation and export of organic and inorganic matter, and poor primary and secondary production due to high water turbidity. Additionally, wastewater treatment, erosion control and effective regulation were identified as necessary for the improvement of water quality.
In response, the Paraguayan Government requested cooperation from the Government of Japan, which, through the Japan International Cooperation Agency (JICA), generated large amounts of data during a mission carried out between 1988 and 1989 [109]. An in-depth study on environmental aspects of Ypacaraí Lake and the Salado River Basin was produced, already mentioning the hydrological and hydraulic complexity of the system, given by the interaction between the Yukyry Stream, the Salado River, associated wetlands and the lake. The first bathymetric map of the lake was also drawn from sonar data, and the mean level of the lake, +1.20 m with respect to the CNSB reference zero, was calculated from the observation records and used for calculating the lake's surface area and stored water volume.
The sediments were characterised for the first time in 1991 [52], finding low concentrations of toxic metals, the main metallic component being iron in its trivalent form (Fe 3+ ). No divalent iron (Fe 2+ ) was detected at the time. Primary productivity was hence hypothesised to be controlled by the phosphorus-binding properties of Fe 3+ .
Another important project financed by the United States Trade and Development Agency (USTDA) was conducted in 1995 by the consultancy firm Dames & Moore, Inc. In the final report [110], it was underlined that the very comprehensive study carried out by JICA (1988)(1989) had to "be modified to incorporate population increases and a deeper assessment of the alternatives pertaining to the collection, effluents treatment and protection of natural resources of the basins". The main conclusions and recommendations highlighted the need for both structural and non-structural measures, identifying some options for pollution control and lake management.
In 2000, the Paraguayan NGO Fundapueblos requested support from a survey team of the University of Padua (Italy), led by Giuseppe Bendoricchio, who recommended a series of actions to be taken for the proper management of Ypacaraí Lake and its system. Specifically, measures for the control of the lake level and the reduction of the amount of sediment transported towards the lake were described in the final report of the project [54].
In 2007-2008, a follow-up mission was carried out by the University of Padua (Italy) together with the Wise Use Agency of the Dutch Government, who came forward with proposals for a definitive solution of the lake's eutrophication problem [111]. In 2008, a pilot project on the use of bio-catalysts was also funded by The Netherlands, coordinated by the University of Wageningen, with the participation of the Ministry of Public Works and Communications of Paraguay (MOPC), the Secretariat of the Environment of Paraguay (SEAM) and the Multidisciplinary Centre for Technological Investigations of the National University of Asunción (CEMIT-UNA). As a result, the Paraguayan Government received a plan for the bio-remediation and management of Ypacaraí Lake.
Starting from one of the recommendations left by JICA, a second detailed bathymetric map was drawn by Jean Michel Sekatcheff in 2007. The on-site field work lasted for 35 consecutive days and was carried out using a single manual probe, the results of which were simultaneously compared with sonar records. Since constant sediment resuspension and deposition dynamically change the lake bed, continuously altering its bathymetry, the lake bottom was taken to be the surface withstanding a force of 400 N. The maximum depth recorded through this method was 4.4 m.
After the drastic cyanobacterial bloom of 2012, two main task force groups were created: one coordinated by the MSPBS together with SEAM, and an environmental commission directed by the President of the Supreme Court of Justice of Paraguay. This process resulted, in 2014, in an agreement with the Itaipú Binational Entity to establish a network of monitoring stations, to create and maintain a limnological database, currently hosted by the International Hydroinformatics Centre (CIH), and to finance the periodical sampling campaigns of CEMIT-UNA, which had already started in 2012 with the scope of assessing the ecological conditions of the lake and the influence of environmental factors on cyanobacterial blooms. Other initiatives were also set in motion within this context, among which are a pilot project on water depuration by artificial floating macrophyte islands and the introduction of farmed fish of several native species into the lake.
In June-July 2014, the Reservoir Division of the Itaipú Binational Entity conducted a two-month bathymetric survey using an Acoustic Doppler Current Profiler (ADCP) SonTek RiverSurveyor M9. Data were post-processed in 2016, detecting a maximum depth of 3.7 m, after which a third bathymetric map of the lake was drawn. In 2017, these bathymetric data were reprocessed to correct for water level variations that occurred during the period of the survey, detected by Andrea Salvadore and Luigi Hinegk. A fourth bathymetric map, which we present in this article (Figure 3), was thus produced.
Between 2015 and 2017, the Italian Consortium Beta Studio S.R.L.-Thetis S.p.A. conducted a project on the recovery of the lake for MOPC and the Inter-American Development Bank (IADB), funded by the European Union through the Spanish Agency for International Development Cooperation (AECID) [55,81]. The general aim of the project was to design a plan for the improvement of sanitary conditions in the basin. Support to this project was provided by INYMA Consult S.R.L., managed by Juan Escribá and Carmen Escribá, who contributed with on-site water quality measurements. Observations upstream and downstream of the main water courses (Pirayú, Yukyry and Salado) were also carried out from February to June 2016, aiming to better understand wetland-mediated water discharges into the lake and to assess the depuration efficiency of these areas (Communication S2 in the Supplementary Materials) [58].
Starting from 2017, the Department of Civil, Environmental and Mechanical Engineering of the University of Trento (UniTN, Italy), as part of Gregorio López Moreira's doctoral research project, established a collaboration with the "Nuestra Señora de la Asunción" Catholic University of Paraguay to conduct joint research on Ypacaraí Lake, with local supervision, by Roger Monte Domecq S., of UniTN Master Students in Environmental Engineering Andrea Salvadore [59] and Luigi Hinegk [53]. Later, UniTN Master Student in Environmental Engineering Ruben Sadei also initiated research on the resuspension of sediments and the release of nutrients from the lakebed into the water column, in collaboration with the Faculty of Engineering of the National University of Asunción (FIUNA), under the local supervision of Juan Francisco Facetti. These projects have resulted in a new interdisciplinary task force coordinated by Gregorio López Moreira, the first product of which is this article.
Computerized Maintenance Management System for Thermal Power Plant, Hisar
This paper discusses the development and implementation of a Computerized Maintenance Management System (CMMS) for a thermal power plant. CMMS is a computer application, coded in Java, that can be used for quick and efficient planning of various maintenance jobs in any industry. The developed computer system is adaptable, inexpensive, time-saving and very easy to operate. The motive of this framework is to reduce the cumbersome manual collection of data and the inefficiency of data retrieval. The software tool comprises various modules such as Equipment Details, Resources, Work Order, Utilities and Safety Plans. The application developed for the Thermal Power Plant, Hisar, aims to provide effective maintenance planning and control, proper scheduling and improvement in workforce management.
Introduction
In recent years, Haryana has exhibited higher growth as a result of state government policies that promote the industrial sector in the state. Hence, the demand for electricity is increasing in the Hisar region, which increases the load on the regional power plant, namely the Rajiv Gandhi Thermal Power Plant (RGTPP). This power plant is one of the coal-based power plants of Haryana Power Generation Corporation Limited (HPGCL). The work for a 1200 MW coal-fired power plant was awarded during 2007. This is the first project in the Northern Region to be awarded Mega Project status, with the attached benefits under the Mega Project policy of the Government of India. Maintenance personnel of this plant deal with a vast variety of technical and financial data when planning and organizing jobs for effective, systematic plant maintenance.
Thermal power plants are complex engineering systems which work continuously under uneven load conditions without their performance being affected. They fulfil the demand of the public and various industries irrespective of fluctuating demand and changing environmental conditions. To meet these challenging demands continuously, effective maintenance management of the concerned plant is a must. Maintenance management is all about maintaining the resources of the plant so that production proceeds effectively and no efforts are wasted on inefficiency. Its prime objectives are cost control, scheduling of work, enhancing the safety of manpower, and compliance. Maintenance is one of the most important factors for achieving higher plant productivity and good returns on investment, and it should be supported by tools such as a Computerized Maintenance Management System (CMMS). In fact, a CMMS in particular would facilitate the computer-aided planning and control of all maintenance activities. This plant follows preventive and breakdown maintenance, which come into action whenever equipment or machines stop working, a standby arrangement becomes functional or the plant works at reduced capacity. Alongside, condition-based maintenance (CBM) is also applied for critical equipment such as boilers.
CMMS software maintains a computer database of information about the maintenance activities of an organization for better planning and control. The concerned plant requires an Enterprise Resource Planning (ERP) system, and the developed CMMS may work as an internal feature of the ERP. It is very helpful in achieving the goal of managing maintenance-related data, which is the prime motive of this research. It also focuses on reducing downtime, overall annual maintenance cost, overtime labour and unnecessary maintenance. With the aim of using this CMMS application in the RGTPP, Hisar plant, different modules and features were developed. Beni (2014) discussed the benefits of implementing CMMS in the Iranian gas industry and used a genetic algorithm to optimize various parameters of maintenance jobs. The developed process and functional models are useful in analyzing other plants with small modifications, which can save time.
Wienker et al. (2015) highlighted six key reasons for the low success rate in implementing CMMS, such as implementation failure, lack of planning of maintenance activities, inadequate IT infrastructure, unclear deadlines and inadequate resources, and outlined some essential elements that ensure success when implementing a CMMS and utilizing its benefits in planning maintenance activities.
Lopes (2016) presented a case study of the requirements specification of a CMMS for a manufacturing company. The system adopted by the maintenance department of the concerned company offers various improvement opportunities, such as analysis of failures to reduce their occurrence, access to information in real time, performance assessment, and movement tracking of spare parts to ensure their availability when required.
Verma et al. (2016) summarized the research conducted in the field of CMMS since 1996. Sixteen research papers were reviewed and briefly explained. These papers cover methods and processes of implementing various CMMS systems in process as well as manufacturing industries, and conclude that, with the implementation of CMMS, the efficiency of maintenance activities improves at reduced cost, and the maintenance activities of manufacturing units are performed in a more synchronized and automated manner.
Yadav et al. (2017) gave a general framework of a CMMS for the National Thermal Power Plant, Badarpur, and designed a system that can record data such as downtimes, details of tools and tackles, and equipment information, which was not recorded previously. The system offers various advantages and gives maintenance personnel the capability to ensure proper recording of data and information. Felipe et al. (2018) proposed to establish a computerized maintenance management system for railway transit. By identifying the causes of the problems, a system was proposed that can effectively reduce service interruptions and improve the service performance of railway transit by increasing the availability of resources through better planning of maintenance activities.
CMMS for RGTPP, Hisar
After analyzing the different CMMS software available in the market and the various modules that can be added, the maintenance techniques of the concerned plant were studied, and the modules were finalized after discussing them with the plant maintenance team. The CMMS software was developed in NetBeans, an open-source integrated development environment (IDE) written in Java and primarily intended for development with Java, although it also supports other programming languages such as PHP, C++ and HTML5. MySQL handles the database components, while PHP or Python serve as the dynamic scripting languages. NetBeans runs along with WampServer to produce the desired output of the CMMS application. WampServer provides a server platform to connect various computer systems inside or outside the plant; it is often used for web development and internal testing, but may also be used to serve live websites. The data are maintained and recorded in WampServer and can be exported as a SQL file, providing easy access to all plant updates and serving record-keeping purposes. Proper trial and testing were done on the working of the software, initial bugs were removed, and it was then handed over to the plant maintenance team for review and implementation according to their requirements.
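As an illustration of the NetBeans/Java plus MySQL-on-WampServer stack described above, a minimal sketch of a database access class is given below; the database name, table, columns and credentials are hypothetical placeholders, not the actual RGTPP schema, and the MySQL Connector/J driver is assumed to be on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

/** Minimal sketch of how a NetBeans/Java CMMS module might talk to the
 *  MySQL database served by WampServer. Schema details are hypothetical. */
public class CmmsDb {
    // WampServer's default local MySQL endpoint; credentials are placeholders.
    private static final String URL = "jdbc:mysql://localhost:3306/cmms";

    public static Connection connect() throws SQLException {
        return DriverManager.getConnection(URL, "root", "");
    }

    /** Example query against a hypothetical 'equipment' table. */
    public static void listEquipment() throws SQLException {
        String sql = "SELECT id, name, supplier, warranty_until FROM equipment";
        try (Connection con = connect();
             PreparedStatement ps = con.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.printf("%d  %s  %s  %s%n",
                        rs.getInt("id"), rs.getString("name"),
                        rs.getString("supplier"), rs.getString("warranty_until"));
            }
        }
    }

    public static void main(String[] args) throws SQLException {
        listEquipment();
    }
}
```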
Proposed CMMS Modules
Following detailed plant research and identification of the requirements of this application, two basic pages (Login Page and Home Page) along with fifteen corresponding modules were developed, as described below; an illustrative code sketch of a representative module follows the list.
Login Page. This entry page secures the important data by providing access only to authorized personnel by means of a user ID and password.
Home Page. The Home Page displays the name and some information about the organization, and links to all the other modules for navigation within the application.
Equipment Details. This module stores information on all the equipment handled by the Maintenance Department, such as supplier details, warranty details and manufacturer, as shown in figure 1.
Work Request. This module stores information related to the work orders generated, along with their current status. It often requests immediate action to prevent sudden breakdowns.
Quick Reporting. This module reports on labour, materials, failure codes, completion dates and downtime occurrences, highlighting the reports that need to be looked into quickly, as shown in figure 4.
Spare Part Details. This module stores information related to the spare parts used in maintenance activities, recording their name, ID, working condition, etc., as shown in figure 5.
Daily Status. This module contains the day-to-day information on working employees and the tasks assigned to them. It includes details about the allotment of equipment, the time duration for which it is allotted, the description of the equipment and the quantity of equipment available.

Safety Plans. Issues with particular safety equipment and fatal accidents can be recorded in this module. It keeps a check on the personal protective equipment (PPE) needed in particular departments.
Tools. This module stores information regarding the tools required for the various maintenance activities in the plant, as shown in figure 3.
Condition Monitoring. This module stores the condition of equipment so that upcoming failures can be spotted. It gives details about the shift timing, the section of the task and the monitoring of the equipment, as shown in figure 6.
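To make the module descriptions concrete, a minimal sketch of how a Work Request record might be represented in the Java application is given below; the class, fields and status values are illustrative assumptions, not the implemented code.

```java
import java.time.LocalDate;

/** Illustrative sketch of a Work Request record as the module above might
 *  store it; field names and status values are hypothetical. */
public class WorkRequest {
    public enum Status { OPEN, IN_PROGRESS, COMPLETED }

    private final int id;
    private final String equipmentId;
    private final String description;
    private final LocalDate raisedOn;
    private Status status = Status.OPEN;

    public WorkRequest(int id, String equipmentId, String description) {
        this.id = id;
        this.equipmentId = equipmentId;
        this.description = description;
        this.raisedOn = LocalDate.now();
    }

    /** Advance the work order through its life cycle. */
    public void setStatus(Status status) { this.status = status; }

    @Override
    public String toString() {
        return String.format("WO#%d [%s] %s - %s (raised %s)",
                id, status, equipmentId, description, raisedOn);
    }

    public static void main(String[] args) {
        WorkRequest wo = new WorkRequest(1, "BOILER-02", "Feed pump vibration");
        wo.setStatus(Status.IN_PROGRESS);
        System.out.println(wo);
    }
}
```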
Cost Estimation for Implementation of "CMMS"
The cost of implementing 'RGTPP CMMS' at RGTPP, Hisar has been estimated considering the software development cost, the other software packages required to run the developed software, infrastructure requirements, etc. The cost details are shown in Table 1.
Conclusions and Future Scope
After considering the requirements of the concerned plant, a Computerized Maintenance Management System (CMMS) has been proposed. It provides a systematic way of collecting and preserving information that can easily be utilized by maintenance personnel for better planning, organizing, scheduling and controlling of maintenance activities. It also improves efficiency and builds a database for quick and easy retrieval of maintenance-related information. The following specific conclusions can be drawn from this study:
• Proper utilization of CMMS in the plant improves the reliability of equipment, which is essential for its optimization. CMMS improves quality and efficiency, with better decision-making outputs.
• CMMS provides effective maintenance management of process plant facilities throughout their working life, with improved plant efficiency.
• It leads to reductions in downtime, total annual maintenance cost and the number of machine failures, and provides a day-to-day maintenance schedule to maintenance personnel, helping predict the maintenance budget and shape maintenance policy.
The effectiveness of any plant mainly depends upon the availability and maintainability of its machines and equipment during operation. Hence, in the future, CMMS applications can also be beneficial in industries such as sugar, beverages, cement, chemicals, aerospace, defence, automotive, communications, consumer goods, health, manufacturing and fertilizers, wherever the management of a huge quantity of maintenance data and information is required. While implementing a CMMS in any plant, most of the focus should be on the successful installation of the hardware and software associated with the new system. In this way, the major benefits of the CMMS can be utilized and effective plant operation can be achieved.
Sepsis severity predicts outcome in community-acquired pneumococcal pneumonia
Easily performed prognostic rules are helpful for guiding the intensity of monitoring and treatment of patients. The aim of the present study was to compare the predictive value of the sepsis score and the Confusion, Respiratory rate (≥30 breaths·min−1), Blood pressure (systolic value <90 mmHg or diastolic value ≤60 mmHg) and age ≥65 yrs (CRB-65) score in 105 patients with community-acquired pneumococcal pneumonia. In addition, the influence of timing of the antimicrobial treatment on outcome was investigated. The sepsis and the CRB-65 scores were used to allocate patients to subgroups with low, intermediate and high risk. Comparable, highly predictive values for mortality were found for both scores (sepsis score versus CRB-65): 1) low-risk group, 0 versus 0%; 2) intermediate-risk group, 0 versus 8.6%; 3) high-risk group, 30.6 versus 40%, with an area under the curve of 0.867 versus 0.845. Patients with ambulatory antibiotic pre-treatment had less severe disease with a lower acute physiology score, lower white blood cell count and a faster decline of C-reactive protein levels. No pre-treated patient died. In summary, both scores performed equally well in predicting mortality. The prediction of survival in the intermediate-risk group might be more accurate with the sepsis score. Pre-hospital antibiotic treatment was associated with less severe disease.
index [6] or the recently developed Confusion, Urea (>7 mmol·L−1), Respiratory rate (≥30 breaths·min−1), Blood pressure (systolic value <90 mmHg or diastolic value ≤60 mmHg; CURB) and age ≥65 yrs (CURB-65), and Confusion, Respiratory rate (≥30 breaths·min−1), Blood pressure (systolic value <90 mmHg or diastolic value ≤60 mmHg) and age ≥65 yrs (CRB-65) scores [7], are applied in CAP. However, in a recent analysis of the Patient Outcomes Research Team study, >50% of hospitalised CAP patients developed severe sepsis during the course of the disease [8], indicating that systemic infection is frequent. Therefore, to date it is unclear whether a pneumonia severity score or a sepsis score, according to the definition of BONE et al. [9], focusing on the systemic signs and sequelae of infection, has the highest potential to predict outcome.
In addition to host factors, which are measured according to the aforementioned scores, treatmentrelated factors may influence the outcome. The time between admission and the first antibiotic dose and combination therapy in severe CAP were reported to be associated with a favourable outcome [10], but this finding has not been confirmed in other studies [11]. A possible explanation is that the major part of treatment delay may occur in the ambulatory setting, where timely diagnosis and treatment pose even greater problems. The impact of pre-hospital treatment on the outcome of pneumococcal disease in hospitalised patients has not yet been evaluated.
Therefore, the aim of the present study was to answer the following questions. 1) Is the sepsis score able to predict mortality of patients with pneumococcal pneumonia as accurately as the CRB-65 CAP score? 2) Does the timing of antibiotic treatment influence the outcome of this patient population?
Case definition
A case of community-acquired pneumococcal pneumonia was defined as a diagnosis of CAP in combination with the isolation of S. pneumoniae from blood, the cerebrospinal fluid, other sterile sites or respiratory secretions of high quality, i.e. ≥10⁴ colony-forming units·mL−1 of S. pneumoniae in bronchoalveolar lavage (BAL), and purulent sputum or tracheal secretions (only samples with >25 polymorphonuclear cells and <10 squamous cells per high-power field). In addition, cases with a positive urinary antigen test were included if the clinical diagnosis was CAP. The diagnosis of pneumonia was based on clinical symptoms (fever, respiratory symptoms, typical auscultatory findings), a new or progressive infiltrate on chest radiography and laboratory signs of infection.
Patients
From December 1998 until November 2004, 105 adult patients hospitalised with community-acquired pneumococcal pneumonia at the University Hospital Lübeck (Lübeck, Germany) and two community hospitals from the same region (Medical Clinic, Sana Hospital, Schleswig-Holstein and Medical Clinic, Asklepios Hospital, Bad Oldesloe, Germany) were investigated in a prospective manner. Patients with defined immunodeficiencies (haematological or solid neoplasia, glucocorticoid or cytotoxic therapy, HIV infection or immunoglobulin deficiency) were excluded from the study.
Data on the influence of genetic polymorphisms on the clinical course of the disease have been described previously [12,13]. The study was approved by the institutional ethics committee. Written informed consent was obtained from patients or their relatives.
Laboratory and clinical data
Demographic data, comorbidities, complications and previous antibiotic therapy were prospectively assessed. A total of 70 (66.7%) of the 105 pneumococcal isolates were available for serotyping. The clinical status, including the sepsis severity and the acute physiology score (APS), was documented on days 1, 2 and 7. Assessment of in-hospital mortality included early and late death, defined as death during the first week (days 1-7) and death during the second week or later (≥day 8), respectively.
Antibiotic susceptibility testing
The antibiotic resistance of S. pneumoniae strains was determined according to the standards and guidelines of the Clinical and Laboratory Standards Institute (CLSI) [14]. Briefly, direct colony suspensions, equivalent to a 0.5 McFarland standard, were inoculated on Mueller-Hinton agar with 5% sheep blood and incubated at 35 °C in a 5% CO2 atmosphere for 24 h. The panel of routinely tested antibiotics included penicillin G, clindamycin, erythromycin A, vancomycin, ceftriaxone and doxycycline. Resistance testing for fluoroquinolones was not routinely performed, since resistance rates of respiratory fluoroquinolones in the present authors' region are <1% [15,16]. S. pneumoniae ATCC 49619 was used as a control strain. Current CLSI interpretive criteria were used to define antimicrobial resistance.
Serotyping
Pneumococcal isolates were serotyped by Neufeld's Quellung reaction using type and factor sera provided by the Statens Serum Institut, Copenhagen, Denmark.
Sepsis score and CRB-65 score
The sepsis score (nonsepsis, sepsis, severe sepsis and septic shock) was assigned according to the definition provided by the American College of Chest Physicians/Society of Critical Care Medicine Consensus Conference 1992, adapted by BONE et al. [9]. In brief, sepsis was defined as two or more of the following criteria in combination with pneumococcal infection: 1) temperature >38°C or <36°C; 2) cardiac frequency >90 beats·min⁻¹; 3) respiratory frequency >20 breaths·min⁻¹ or carbon dioxide tension <32 mmHg; and 4) white blood cell (WBC) count >12,000 cells·mm⁻³ or <4,000 cells·mm⁻³ or >10% band forms. Severe sepsis was defined as sepsis associated with organ dysfunction together with perfusion abnormalities. One of the following criteria had to be met: 1) pH <7.3; 2) pneumonia-associated confusion; 3) acute renal failure; 4) disseminated intravascular coagulopathy; 5) systolic blood pressure <90 mmHg; and/or 6) an arterial oxygen tension/inspiratory oxygen fraction ratio <200. Septic shock was defined as sepsis associated with sepsis-induced hypotension despite adequate fluid resuscitation.
The CRB-65 score was calculated as described by LIM et al. [7], with one point for each of Confusion, Respiratory rate ≥30 breaths·min⁻¹, low systolic (<90 mmHg) or diastolic (≤60 mmHg) Blood pressure, and age ≥65 yrs.
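To make the two scoring rules concrete, a minimal Python sketch of how they could be encoded is given below; the function names, argument names and the low/intermediate/high grouping of CRB-65 totals are illustrative assumptions, and the sepsis classifier reflects only the abbreviated criteria summarised above, not the full consensus definitions.

def crb65_score(confusion, respiratory_rate, systolic_bp, diastolic_bp, age):
    # One point per criterion: Confusion, Respiratory rate >=30 breaths/min,
    # low Blood pressure (systolic <90 mmHg or diastolic <=60 mmHg), age >=65 yrs.
    score = int(bool(confusion))
    score += int(respiratory_rate >= 30)
    score += int(systolic_bp < 90 or diastolic_bp <= 60)
    score += int(age >= 65)
    return score  # commonly grouped as 0 = low, 1-2 = intermediate, 3-4 = high risk

def sepsis_class(sirs_criteria_met, organ_dysfunction, shock_despite_fluids):
    # sirs_criteria_met: how many of the four listed criteria (temperature,
    # cardiac frequency, respiratory frequency/PaCO2, WBC count) are fulfilled.
    if shock_despite_fluids:
        return "septic shock"
    if sirs_criteria_met >= 2 and organ_dysfunction:
        return "severe sepsis"
    if sirs_criteria_met >= 2:
        return "sepsis"
    return "nonsepsis"

# Example: a confused 70-yr-old with respiratory rate 32 and BP 85/50 mmHg scores 4 (high risk).
print(crb65_score(True, 32, 85, 50, 70))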
Influence of antimicrobial treatment on outcome
In order to assess the impact of treatment-related factors, the present authors studied the influence of pre-hospital antimicrobial treatment and in-hospital antimicrobial treatment on clinical course, parameters of inflammation and patient outcome.
Inappropriate treatment was defined as discordant treatment (isolation of pneumococci with resistance against the drug used) or treatment with drugs not recommended for the treatment of pneumococcal pneumonia in current guidelines (e.g. ciprofloxacin).
Statistical analysis
Patients were grouped into low-, intermediate- and high-risk classes according to the results of the sepsis score and the CRB-65 score [17]. The Cochran-Armitage test was used to test for trend in categorical variables. Fisher's exact test (two-tailed) was used for the association of discontinuous variables with mortality. Continuous variables were compared by the Mann-Whitney U-test (values are provided as mean±SEM). A p-value <0.05 was considered statistically significant.
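As an illustration only, the group comparisons named above could be reproduced with SciPy as sketched below; the counts and values are invented placeholders rather than the study data, and the Cochran-Armitage trend test is not shown because it has no direct SciPy equivalent.

from scipy import stats

# Fisher's exact test (two-tailed) on a 2x2 table: rows = risk factor present/absent,
# columns = died/survived (illustrative counts).
table = [[5, 31], [6, 63]]
odds_ratio, p_fisher = stats.fisher_exact(table, alternative="two-sided")

# Mann-Whitney U-test comparing a continuous variable (e.g. APS at admission)
# between non-survivors and survivors (illustrative values).
aps_non_survivors = [16, 14, 18, 12, 20]
aps_survivors = [9, 11, 8, 13, 10, 7, 12]
u_stat, p_mwu = stats.mannwhitneyu(aps_non_survivors, aps_survivors, alternative="two-sided")

print(p_fisher, p_mwu)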
Demographic data, risk factors and comorbidities
Demographic data, risk factors and comorbidities are presented in table 1. Most patients had at least one risk factor or comorbidity. The total in-hospital mortality was 10.5%.
Disease severity
Single variables (temperature, C-reactive proteins (CRP) levels, leukocytes, age, bacteraemia and comorbidities) were not associated with mortality (data not shown).
The predictive values of the sepsis and the CRB-65 scores were excellent (figs 1 and 2). The sepsis score at day 1 was significantly related to mortality (table 2). At admission, 36 (34.3%) patients were in the high-risk class (severe sepsis or septic shock) with a mortality of 30.5%, compared with 45 (42.9%) patients in the intermediate-risk class with a mortality of 0%, and 24 (22.9%) patients in the low-risk class with a mortality of 0% (p<0.0001).
The CRB-65 score was also predictive for mortality: 16 patients in the high-risk class had a mortality rate of 40.0%; 58 patients in the intermediate-risk class had a mortality rate of 8.6 %; and 32 patients in the low-risk class had a mortality rate of 0% (table 2).
Taking into consideration the different risk classes, there is a trend for a better prediction of survival in the intermediate-risk class as defined by the sepsis score; survival in this subgroup was 100% (95% confidence interval: 92.1-100.0) compared with 91.4% (81.0-97.1) with the CRB-65 score (p=nonsignificant).
Early versus late death
All patients were observed until discharge. Death occurred after a mean period of 12.5±13.7 (1-40) days. The length of hospital stay in survivors was 19.1±10.8 (5-53) days.
Early death during the first week was seen in six patients (2.7±1.9 (1-5) days) and was attributable to uncontrolled septic shock (n=2), acute respiratory failure (acute respiratory distress syndrome; n=2) and meningitis (n=2). Late death was seen in five patients (19.1±10.8 (9-40) days; fig. 3). Late death was observed after a transient recovery from sepsis in all patients and was attributable to secondary organ failure, including secondary bacterial pneumonia with respiratory failure (n=3), hypoxic cerebral failure after meningitis (n=1) and ischaemic cerebral insult (n=1).
At admission, late death patients were less severely ill than early death patients. Late death was associated with a lower APS at admission (11±2.45 versus 16.33±4.6 for late and early death, respectively; p=0.03).
In addition, the performance of the two scores in predicting early or late death was compared. Using the sepsis score, all late-death patients were classified as high-risk patients at admission. In contrast, three out of five late-death patients were initially grouped in the intermediate-risk class using the CRB-65 score (table 2, fig. 3).
In spite of the fact that 38.5% of these treatments were inappropriate (ciprofloxacin, n=4) or discordant (macrolide resistance, n=1), patients with pre-hospital antibiotic treatment had less severe disease (table 4), as evidenced by lower APS values at admission (p=0.02). In addition, lower WBC counts at admission (p=0.002) and a faster decline of CRP levels with lower values at day 7 were seen (p=0.03) in pre-treated patients. A smaller proportion presented in the high-risk group (CRB-65 and sepsis score at admission) and none of the patients died (0 versus 12.0%; p=nonsignificant).
In-hospital treatment
A delay of antibiotic therapy >8 h after hospital admission was associated with a trend for better survival. Combination therapy was used in 52.4% of all patients (mostly a β-lactam with a macrolide or fluoroquinolone). No association of the use of combination therapy with outcome was seen (tables 6 and 7).
DISCUSSION
The main finding of the present study was that the sepsis score at admission has a high predictive value for the outcome of community-acquired pneumococcal pneumonia. Using the presence of severe sepsis and/or septic shock (high-risk class) as a cut-off, 30.5% of these patients died compared with 0% of the patients in the intermediate-and low-risk categories. The CRB-65 score also showed an excellent overall performance but appeared less discriminative, with a survival rate of 91.4% in the intermediate-risk class compared with 100% when using the sepsis score (table 2).
Furthermore, the current data show that pre-hospital antimicrobial treatment is associated with a favourable clinical course in patients with pneumococcal pneumonia in spite of the fact that 38.5% of ambulatory treatment courses were inappropriate or discordant (table 4).
For CAP, including pneumococcal infection, severity scores, such as the PSI and the CRB-65 score, are successfully used. The CRB-65 score performed as well as the CURB and CURB-65 scores for predicting outcome [19]. An association between the CURB-65 score and mortality in patients with bacteraemic pneumococcal pneumonia was recently demonstrated [20]. Conditions such as pneumococcal infection carry a high risk of systemic dissemination and septic shock. Even in CAP due to different aetiologies, the frequency of severe sepsis may exceed 50% [8]. Septic shock is a known risk factor for mortality from pneumococcal infection [21]. Therefore, scoring the severity of sepsis may add prognostic information in these patients. To the present authors' knowledge, the current study is the first that demonstrates a high predictive value of the sepsis score in patients with pneumococcal pneumonia. The current data are in line with a study by EWIG et al. [17] who found a high predictive value of the sepsis score in hospitalised CAP patients (mortality 1% in the low- or intermediate-risk class). In addition, the authors observed an increased mortality rate of 8% in the intermediate-risk class by using the CURB score [17]. The predictive value of the CURB score has been evaluated in several studies. Recently, SPINDLER et al. [20] demonstrated an increasing mortality risk in patients with bacteraemic pneumococcal pneumonia according to the CURB-65 score. In that study [20], patients with intermediate risk had a high mortality rate of 15-20%. For clinical pathways, an intermediate-risk class with increased mortality may be useful for the decision of hospital admission, but is less useful for hospital management. The sepsis score, with its more discriminative prediction of mortality (low- and intermediate- versus high-risk class), may be helpful to decide which patients need more intensive monitoring in the hospital (e.g. intensive care unit). A possible disadvantage of the sepsis severity score compared with the CRB-65 lies in the need for some additional laboratory and clinical investigations. However, these data should be known by the clinician caring for hospitalised CAP patients (e.g. septic encephalopathy, septic shock, respiratory insufficiency, acute renal failure, disseminated intravascular coagulopathy, low blood pressure or acidosis).
It has been previously observed [4] that approximately 50% of the deaths in CAP patients occur during the first 7 days due to direct septic complications, and the remaining 50% of the deaths are seen later. This observation can be confirmed for pneumococcal disease: 55% of the patients died during the first week and 45% of the deaths occurred later, after transient recovery, due to secondary organ failure (fig. 3). Patients with early death had initially more severe disease with a higher APS. Interestingly, the weaker discriminative power of the CRB-65 score was more evident in patients with late death. The majority of these patients were initially grouped in the intermediate-risk class with the CRB-65 score, whereas the sepsis score correctly predicted the high risk in all late-death patients (table 2). None of the patients with intermediate- or low-risk class of the sepsis score deteriorated to severe sepsis (high-risk class) during hospitalisation, confirming the stability of this scoring system (fig. 3). The fact that simple sepsis, or systemic inflammatory response syndrome, has a low predictive potential for the development of more severe disease has been described previously and has served as an argument against the specificity of the sepsis score [8]. In the present authors' opinion, the associated high predictive value for survival in these risk groups makes the sepsis score a useful instrument for assessing the risk of patients with serious pulmonary infections.
Several risk factors for pneumococcal infection have been described. Although the present study was not designed to study the incidence of pneumococcal infection, in 90% of cases at least one risk factor or one comorbidity was found (table 1). The influence of comorbidities on outcome is under debate [4].
In the present analysis, single risk factors and comorbidities were not associated with sepsis severity or mortality, but all patients who died had at least one risk factor or comorbidity.
As expected, pneumococcal serotype analysis did not show any clear association with the outcome. In Germany, vaccination with the 23-valent polysaccharide vaccine is recommended for patients aged >60 yrs and for all patients with comorbidities [22]. A total of 94% of the recovered serotypes would have been covered by the vaccine. Thus, a considerable part of the invasive pneumococcal infections observed could have been avoided by the vaccination of risk groups.
In line with other German cohorts [18], a low incidence of pneumococcal resistance (macrolide resistance 9%, intermediate penicillin resistance 1%) was found. The role of bacterial resistance, especially in discordant treatment (e.g. receipt of an antimicrobial drug inactive against S. pneumoniae in vitro), is questionable [23]. In the present study, all patients with pneumococcal resistance received concordant in-hospital treatment (e.g. receipt of at least one antibiotic with in vitro activity against S. pneumoniae). A trend towards less severe disease was found in patients with isolation of drug-resistant pneumococci (table 3).
Hospitalisation despite prior ambulatory antimicrobial treatment was seen in 12.4% of the current cohort of patients. It was associated with antibiotic resistance in a minority of cases. Interestingly, the present authors found a less severe course of disease and no deaths in pre-treated patients, in spite of the fact that pneumococci were isolated in all cases at admission and 38.5% had been treated either with inappropriate drugs, e.g. ciprofloxacin, or with macrolides in case of resistance. Pre-treated patients had lower CRP and leukocyte values, together with a lower APS (table 4). In addition, fewer patients were in the high-risk group of the sepsis score and none of the pre-treated patients died. This suggests that pre-hospital antibiotic treatment, although suboptimal in many cases, had a beneficial effect on the course of the disease, possibly by modulating the inflammatory response. In line with the present data, RUIZ et al. [24] demonstrated a protective effect of prior ambulatory antimicrobial treatment in patients with severe CAP. Thus, rapid empiric treatment seems to be of importance for the course of CAP.
In contrast, the present authors were not able to confirm an influence of treatment delay in hospital, the use of combination therapy or inappropriate treatment on outcome (tables 5-7). Of note, these data are observational and are open to multiple biases. For instance, critically ill patients may receive immediate attention at the emergency room, leading to faster initiation of treatment and to the institution of combination therapy. This could lead to underestimation of the effect of treatment intensity and speed. Indeed, patients receiving early therapy and combination therapy seemed to be more severely ill at admission (tables 5 and 7). Conversely, a treatment delay of a few hours in hospital may be less important for the course than a delay in the pre-hospital phase, which may comprise days [11].
In conclusion, the sepsis severity assessment and pneumonia scoring with Confusion, Respiratory rate (≥30 breaths·min⁻¹), Blood pressure (systolic value <90 mmHg or diastolic value ≤60 mmHg) and age ≥65 yrs showed overall comparable performance in predicting mortality. There was a trend for a more accurate discrimination with sepsis assessment in patients with intermediate risk, which has to be confirmed in larger cohorts. In hospitalised patients with community-acquired pneumococcal pneumonia, both instruments may be complementary for evaluating disease severity. Regarding modifiable factors, pre-hospital antimicrobial treatment was associated with less severe disease. Controlled studies may be warranted to elucidate the role of earlier initiation of treatment in the pre-hospital setting.
An Endosomal NAADP-Sensitive Two-Pore Ca2+ Channel Regulates ER-Endosome Membrane Contact Sites to Control Growth Factor Signaling
Summary Membrane contact sites are regions of close apposition between organelles that facilitate information transfer. Here, we reveal an essential role for Ca2+ derived from the endo-lysosomal system in maintaining contact between endosomes and the endoplasmic reticulum (ER). Antagonizing action of the Ca2+-mobilizing messenger NAADP, inhibiting its target endo-lysosomal ion channel, TPC1, and buffering local Ca2+ fluxes all clustered and enlarged late endosomes/lysosomes. We show that TPC1 localizes to ER-endosome contact sites and is required for their formation. Reducing NAADP-dependent contacts delayed EGF receptor de-phosphorylation consistent with close apposition of endocytosed receptors with the ER-localized phosphatase PTP1B. In accord, downstream MAP kinase activation and mobilization of ER Ca2+ stores by EGF were exaggerated upon NAADP blockade. Membrane contact sites between endosomes and the ER thus emerge as Ca2+-dependent hubs for signaling.
In Brief
Endosomes form junctions with the ER, but how this contact is regulated remains unclear. Kilpatrick et al. find that Ca 2+ release by an endosomal ion channel facilitates inter-organellar coupling to temper signals mediated by an internalized growth factor receptor. Endosome-ER contact sites thus emerge as Ca 2+ -dependent signaling hubs.
INTRODUCTION
How organelles communicate is a fundamental question that arises given the compartmentalized nature of eukaryotic cell function. Although vesicular traffic is an established means of information transfer, it is becoming clear that traffic also proceeds by non-vesicular means. In particular, membrane contact sites have emerged as potential platforms for both Ca 2+ signaling and lipid transfer (Helle et al., 2013; Phillips and Voeltz, 2016; Levine and Patel, 2016; Eden, 2016). Membrane contact sites are regions of close apposition between membranes that are stabilized by tethering complexes. The endoplasmic reticulum (ER) forms multiple classes of contacts with both the plasma membrane and organelles such as endosomes, lysosomes, and mitochondria. Endosome-ER contacts have been implicated in endosome positioning (Rocha et al., 2009; Raiborg et al., 2015a), dephosphorylation of internalized receptors and components of the endosomal sorting complex required for transport (ESCRT) machinery (Eden et al., 2010; Stuible et al., 2010), endosome fission (Rowland et al., 2014), actin nucleation and retromer-dependent budding (Dong et al., 2016), and cholesterol transport. We have identified multiple populations of contact sites that form between the ER and different endocytic organelles, which include those dependent on VAPs (Dong et al., 2016). Notably, contact sites between the ER and EGF receptor-containing endosomes require annexin-A1 and its Ca 2+ -dependent binding partner S100A11, raising the possibility that Ca 2+ fluxes may regulate contact.
Here, we reveal an essential requirement for NAADP and TPC1 in regulating membrane contact site formation between endosomes and the ER to control growth factor signaling.
NAADP and TPC1 Maintain Late Endosome and Lysosome Morphology
We examined the effect of inhibiting NAADP action on late endosome and lysosome morphology in primary human fibroblasts using four approaches.
First, we tested NAADP antagonists. Figures 1A and 1B show the effect of an overnight treatment with Ned-19 (Naylor et al., 2009) on late endosome and lysosome morphology as assessed by immuno-fluorescence staining and confocal microscopy of the late endosome and lysosome marker LAMP1. Labeled structures were clustered in the perinuclear region and often appeared enlarged ( Figure 1B; changes in staining intensity quantified in Figure 1H). Similar results were obtained with the recently described Ned-19 analog, Ned-K (Hockey et al., 2015;Davidson et al., 2015) ( Figure 1C) and upon shorter (2-hr) treatment with the antagonists (Figures S1A-S1C and S1I). Analysis of multiple individual labeled structures revealed an increase in the mean area (Table S1). LAMP1 protein levels were similar upon Ned-19 treatment ( Figure S1J). We further examined the ultrastructure of the endo-lysosomal system by electron microscopy (EM). Consistent with results using light microscopy, late endosomes and electron-dense lysosomes were often clustered and more vacuolar in Ned-19-treated cells compared with controls ( Figure 1D; quantified in Figure 1I). Immuno-EM confirmed that LAMP1 localizes to late endosome and lysosome clusters in Ned-19-treated cells ( Figure S1K).
Second, we pharmacologically targeted the TPC pore. Recent studies have shown that TPCs are inhibited by the plant alkaloid tetrandrine (Sakurai et al., 2015). Like NAADP antagonists, tetrandrine clustered late endosome and lysosomes and induced particularly pronounced vesicle enlargement ( Figures 1E, 1J, S1D, and S1I; Table S1). TPCs have also emerged as targets for L-type Ca 2+ channel blockers (Rahman et al., 2014). Therefore, we examined the effect of isradipine and nifedipine (both dihydropyridines), diltiazem (a benzothiazapine), and verapamil (a phenylalkylamine). As shown in Figures S1E-S1I and Table S1, all three structurally distinct classes of inhibitors induced aggregation/vesicle enlargement following an acute treatment.
Third, we buffered Ca 2+ levels. Treatment of cells with the cell-permeable form of the Ca 2+ chelator, 1,2-bis(o-aminophenoxy)ethane-N,N,N′,N′-tetraacetic acid (BAPTA), also altered the appearance of late endosomes and lysosomes (Figure 1F). In contrast, treatment with the slower Ca 2+ chelator EGTA did not (Figure 1G). This differential sensitivity to chelators (Stern, 1992), summarized in Figure 1J, suggests that morphology of the endo-lysosomal system is regulated by local Ca 2+ fluxes.
Fourth, to directly probe the role of TPCs in endo-lysosomal morphology, we examined the effect of TPC knockdown. Treatment of fibroblasts with small interfering RNAs (siRNAs) targeting two independent sequences in both TPC1 or TPC2 reduced transcript levels >50% ( Figure S2A). Knockdown of TPC1 protein was confirmed by western blotting (Figures S2B and S2C). As shown in Figures 2A-2C, confocal microscopy of TPC1-silenced cells revealed marked changes in LAMP1 staining (quantified in Figure 2F and Table S1), similar to chemical blockade of NAADP signaling ( Figure 1). TPC2 silencing, however, had little effect ( Figures 2D and 2E). In accord, late endosome and lysosomes appeared more clustered and less distinct upon TPC1, but not TPC2, silencing at the ultrastructural level ( Figures 2G-2K, quantified in Figure 2L). To assess specificity of our molecular manipulations, we performed rescue experiments with a siRNA-resistant TPC1 construct. Expression of this construct in TPC1-depleted cells partially reversed clustering of late endosome and lysosomes as assessed by both LAMP1 immunocytochemistry ( Figures 3A and 3B) and correlative light and electron microscopy (CLEM) ( Figures 3C and 3D), thereby attesting to specificity. Late endosome and lysosomal morphology was also unchanged by Ned-19 in cells where both TPC1 and TPC2 had been silenced ( Figures S2D-S2F), further attesting to specificity. Late endosome and lysosomal form is thus specifically determined by TPCs in an isoform-selective manner.
Taken together, we identify an unexpected role for NAADP, a target channel, and associated local Ca 2+ fluxes in maintaining late endosome and lysosomal morphology.
TPC1 Localizes to Endosome-ER Contact Sites and Regulates Their Formation
The distribution of TPC1 within the endocytic pathway is unclear and likely more diffuse than that of TPC2, which is expressed predominantly on lysosomes. Immuno-EM revealed localization of GFP-tagged or untagged TPC1 to the limiting membrane of multi-vesicular endosomes, rather than electron-dense lysosomes, with some additional ER localization that might be related to ectopic expression. Parallel studies with TPC2, however, were ambiguous because expression of TPC2 resulted in a proliferation of endocytic organelles with disorganized membranous content, often aggregated in clusters (Figure 4A). TPC2-mediated disruption of late endosome-lysosomal morphology is consistent with our previous analysis (Lin-Moshier et al., 2014). Intriguingly, TPC1 was often found at membrane contact sites between endosomes and the ER. Quantitative analysis of TPC1-positive endosomes showed that there was a 5-fold increase in the number of gold particles per unit endosomal membrane in contact with the ER compared to regions not associated with the ER (Figure 4B). TPC1 thus emerges as a component of ER-endosome contacts.
The presence of TPC1 at ER-endosome contact sites raised the possibility that local Ca 2+ signals deriving from the endosome may regulate contact with the ER. We therefore examined the effect of NAADP blockade on ER-endosome contact site formation (Figure 4C). As shown in Figure 4D, Ned-19 reduced the percentage of endosomes with an ER contact. Similar results were obtained in HeLa cells, where Ned-19 reduced ER-endosome contact site formation in a concentration-dependent manner (Figure S3D). The effects of Ned-19 were recapitulated by knocking down TPC1, whereas depletion of TPC2 had little effect on ER-endosome contacts in fibroblasts (Figure 4D). Chemical and molecular inhibition of NAADP signaling on contact site formation thus mirrors the effect on gross late endosome and lysosomal morphology (Figures 1 and 2). The ER forms contacts with lysosomes (Kilpatrick et al., 2013) that are biochemically distinct from those with endosomes. To assess specificity of our manipulations, we examined the effect of NAADP blockade on ER-lysosome contact site formation (Figure 4C). Ned-19 had little effect on the percentage of lysosomes with an ER contact in both fibroblasts (Figure 4E) and HeLa cells (Figure S3D). Silencing of TPC1 was also largely without effect, whereas silencing of TPC2 reduced ER-lysosome contact sites (Figure 4E). In summary, these data uncover a highly isoform- and compartment-specific role for NAADP signaling in the formation of membrane contacts between endosomes and the ER.
NAADP Regulates EGF Signaling
We have shown previously that ER-endosome contacts enable the interaction between endocytosed EGF receptor (EGFR) tyrosine kinase and the protein tyrosine phosphatase, PTP1B, on the ER (Eden et al., 2010). This contact allows receptor de-phosphorylation-a determinant of signaling by EGF. We reasoned that disrupting NAADP signaling would enhance EGF signaling due to compromised contact at the ER-endosome interface. To test this, we examined the effect of Ned-19 on the phosphorylation state of EGFR. Acute stimulation with EGF induced a transient rise in EGFR tyrosine phosphorylation that was significantly enhanced and prolonged by Ned-19 treatment ( Figure 5A, quantified in Figures 5B, S4A, and S4B). For these experiments, we used HeLa cells due to their high EGFR levels, but similar results were obtained in fibroblasts ( Figures S4B-S4D). To examine downstream functional consequences of disrupting contacts, we took two approaches.
First, we measured ERK activity because activated EGFRs are classically coupled to the MAP kinase pathway. Consistent with the prolonged EGFR activation observed in cells treated with Ned-19, tyrosine phosphorylation of ERK1/2 was significantly increased and extended upon NAADP inhibition ( Figures 5C and 5D).
Second, we measured cytosolic Ca 2+ levels. Activated EGFRs also couple to phospholipase C-gamma (PLCg), which generates inositol 1,4,5-trisphosphate, resulting in Ca 2+ release from ER Ca 2+ stores. EGF evoked readily measurable Ca 2+ signals that were exaggerated and more sustained in cells treated with Ned-19 for 2 hr or overnight ( Figures 5E-5H). Total levels of the EGFR were unchanged ( Figures S4E-S4G), indicating that enhanced signaling effects are not attributable to increased expression of EGFR. These data are again consistent with prolonged activation of EGFR due to perturbed contact at NAADP-sensitive endosome-ER contact sites. Taken together, these data reveal a role for NAADP in regulating both EGFR activity and downstream signaling by MAP kinase and phospholipase C. See also Figure S4.
DISCUSSION
Membrane contact sites between endosomes and the ER are gaining much attention as novel coordinators of cell function (Raiborg et al., 2015b). Contact with the ER is regulated by cholesterol (Rocha et al., 2009) and increases as endocytic vesicles mature (Friedman et al., 2013). Previous studies have shown marked upregulation of contacts in response to expression of STARD3/STARD3NL (Alpy et al., 2013), ORP1L (Rocha et al., 2009), and protrudin (Raiborg et al., 2015a), but whether these proteins are necessary for contact site formation is less clear. Here, we use EM, which, relative to light microscopy, is better suited for resolving inter-organelle junctions, to provide direct evidence that TPC1 is a contact site component (Figure 4). Importantly, inhibiting TPC1 activity using chemical and molecular means significantly decreased ER contact site formation with late endosomes (Figure 4). This was associated with disruptions in late endosome and lysosome morphology (Figures 1 and 2), although whether TPC1 and LAMP1 co-localize to junction-forming organelles remains to be established. Consistent with a role for ER-endosome contact sites in endosomal positioning (Rocha et al., 2009; Jongsma et al., 2016), we observed a more perinuclear population of LAMP1-positive late endosomes and lysosomes when contact was reduced by inhibition of NAADP or TPC1 (Figures 1 and 2). We thus provide key evidence implicating endogenous NAADP signaling in regulating ER-endosome contact and the subcellular distribution of endocytic organelles.
Localized Ca 2+ release from endocytic organelles is a driver of endosome-lysosome fusion (Pryor et al., 2000). In accord, TPCs associate with the fusion apparatus (Lin-Moshier et al., 2014;Grimm et al., 2014) and have been implicated in a number of classical organellar vesicular trafficking events . Our identification of ER-endosome contact sites dependent on annexin A1 and NAADP (this study) establishes a paradigm whereby localized Ca 2+ release from endocytic organelles might regulate non-vesicular traffic. Of relevance are recent studies showing that Ca 2+ regulates the formation of contacts between the ER and the plasma membrane (Giordano et al., 2013). Notably, the measured affinity for Ca 2+ is in the low micromolar range, suggesting that large global signals, such as those evoked during Ca 2+ influx, regulate these junctions (Idevall-Hagren et al., 2015). However, because of the restricted volume at contacts, it is possible that even modest, constitutive fluxes in unstimulated cells, as alluded to here, could achieve the necessary Ca 2+ concentrations to modulate Ca 2+ -dependent contacts. The effects of interfering with NAADP/TPC signaling on contact sites correlated well with effects on gross late endosome and lysosomal morphology. However, possible endosome-lysosome fusion defects might also contribute to morphological changes.
Activated EGF receptors undergo internalization onto endosomes where they continue to signal in their phosphorylated form until they are dephosphorylated by PTP1B on the ER. Disrupting ER-endosome contacts by inhibiting NAADP prolongs EGFR phosphorylation (Figure 5). These data support our findings that EGFR dephosphorylation occurs at endosome-ER contact sites populated by PTP1B and the Ca 2+ -binding protein annexin A1 (Eden et al., 2010). Importantly, we also report that disruption of NAADP-dependent contacts substantially enhances downstream signaling by EGF through both ERK and PLCg (Figure 5). Whether EGF receptors are coupled to NAADP production similar to vascular endothelial growth factor (VEGF) receptors (Favia et al., 2014) is not known at present. Rather, our data identify NAADP as a negative regulator of EGF action through local signaling at the endosome-ER interface (Figure S4H). Many external stimuli elevate NAADP levels (Galione, 2015), raising the possibility that second messenger and mitogenic signaling crosstalk may occur through modulation of contact site strength.
In summary, our findings provide new information on the molecular makeup, regulation, and function of ER-endosome contact sites.
Cell Treatments
Primary cultured human fibroblasts and HeLa cells were maintained in DMEM supplemented with 10% (v/v) fetal bovine serum, 100 units/mL penicillin, and 100 μg/mL streptomycin (all from Invitrogen) at 37°C in a humidified atmosphere with 5% CO2. Cells were passaged by scraping (fibroblasts) or with trypsin (HeLa cells) and plated onto coverslips (for immunocytochemistry, Ca 2+ imaging, and electron microscopy) or directly onto tissue culture plates/flasks (for western blotting) before experimentation.
For chemical treatments, drugs were dissolved in DMSO or H 2 O, diluted into culture medium, and then sterile filtered. The NAADP antagonists, trans-Ned-19 and Ned-K were synthesized as described by Naylor et al. (2009) and Davidson et al. (2015), respectively. Both were kind gifts from Raj Gossain, A. Ganesan, and Sean M. Davidson. BAPTA-AM was from Biovision. EGTA-AM was from AnaSpec. Tetrandrine was from Santa Cruz. Isradipine, nifedipine, verapamil, and diltiazem were from Sigma.
Stimulation with EGF (100 ng/mL; Sigma) was performed following serum starvation for 1 hr in serum-free DMEM.
Immunocytochemistry
LAMP1 labeling and confocal microscopy were performed as described by Hockey et al. (2015). Briefly, all images were acquired under identical confocal settings, and the mean fluorescence intensity per cell was quantified to allow comparison between controls and the various treatments. Automated size analysis of labeled structures was performed using the "Analyze Particles" function in Fiji from binary images created by local thresholding using the Bernsen algorithm (5-pixel radius with the default contrast threshold) and Watershed segmentation.
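For readers who prefer a scripted version of this kind of analysis, a rough Python sketch using SciPy/scikit-image is given below; it re-implements a simplified Bernsen-style local threshold rather than calling Fiji's plugin, and the window radius, contrast threshold and pixel size are placeholder values rather than those used in the study.

import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max
from skimage.measure import label, regionprops

def bernsen_binarize(img, radius=5, contrast_threshold=15):
    # Threshold each pixel at the midpoint of the local min/max; pixels in
    # low-contrast neighbourhoods are treated as background.
    size = 2 * radius + 1
    local_max = ndi.maximum_filter(img, size=size)
    local_min = ndi.minimum_filter(img, size=size)
    midpoint = (local_max.astype(float) + local_min) / 2.0
    binary = img > midpoint
    binary[(local_max - local_min) < contrast_threshold] = False
    return binary

def lamp1_particle_areas(img, pixel_area_um2=0.01):
    # img: 2D LAMP1 channel cropped to a single cell.
    binary = bernsen_binarize(img)
    distance = ndi.distance_transform_edt(binary)
    peaks = peak_local_max(distance, labels=label(binary), footprint=np.ones((3, 3)))
    markers = np.zeros(binary.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    segmented = watershed(-distance, markers, mask=binary)  # split touching vesicles
    return [r.area * pixel_area_um2 for r in regionprops(segmented)]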
Electron Microscopy
EM was performed essentially as described previously. Briefly, serum-starved cells were stimulated with EGF and BSA-gold. After fixation in paraformaldehyde (PFA)/glutaraldehyde, cells were post-fixed in osmium tetroxide/potassium ferricyanide and embedded. Clustering was quantified by calculating the area (using Fiji) of three or more late endosomes/lysosomes in close apposition relative to the cytoplasmic area (excluding the nucleus). For pre-embedding labeling, cells were fixed in PFA, permeabilized with digitonin, and incubated with primary and nanogold-secondary antibodies prior to fixation for EM. ER-endosome contact sites in random sections were defined as regions where apposing membranes were <30 nm apart, with no minimum length. For correlative light and electron microscopy (CLEM), cells were plated on gridded dishes, fixed in 4% PFA, and imaged by light microscopy (Nikon Ti-E), prior to preparation for conventional EM as above.
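The two EM read-outs described above reduce to simple ratios once organelle areas and membrane separations have been measured; a minimal sketch, with invented input values, is shown below.

def clustering_index(clustered_areas_um2, cytoplasm_area_um2):
    # Combined area of clustered late endosomes/lysosomes (three or more in
    # close apposition) as a percentage of the non-nuclear cytoplasmic area.
    return 100.0 * sum(clustered_areas_um2) / cytoplasm_area_um2

def percent_endosomes_with_er_contact(min_separations_nm, cutoff_nm=30.0):
    # An endosome counts as "in contact" if its closest approach to the ER
    # is less than 30 nm, with no minimum contact length.
    in_contact = sum(1 for d in min_separations_nm if d < cutoff_nm)
    return 100.0 * in_contact / len(min_separations_nm)

print(clustering_index([1.2, 0.8, 0.5], 150.0))
print(percent_endosomes_with_er_contact([12, 45, 28, 80, 15]))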
Ca 2+ Imaging
Cytosolic Ca 2+ concentration was measured essentially as described (Kilpatrick et al., 2016). In brief, cells were loaded with the fluorescent Ca 2+ indicator Fura-2 in HEPES-buffered saline. Dual-excitation time-lapse fluorescence imaging was performed using a CCD camera. Data are presented as fluorescence ratios upon excitation at 340 and 380 nm.
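A minimal sketch of the ratiometric read-out is given below, assuming paired 340 nm and 380 nm image stacks and user-drawn cell and background regions of interest; all names and structures are placeholders, not the analysis pipeline used in the study.

import numpy as np

def fura2_ratio_trace(f340_stack, f380_stack, cell_mask, bg_mask):
    # Background-corrected F340/F380 ratio per frame for one cell; the ratio
    # increases as cytosolic Ca2+ rises.
    ratios = []
    for f340, f380 in zip(f340_stack, f380_stack):
        f340_corr = f340[cell_mask].mean() - f340[bg_mask].mean()
        f380_corr = f380[cell_mask].mean() - f380[bg_mask].mean()
        ratios.append(f340_corr / f380_corr)
    return np.array(ratios)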
Comparison of residents' approaches to clinical decisions before and after the implementation of Evidence Based Medicine course.
INTRODUCTION
It has been found that the decision-making process in medicine is affected, to a large extent, by one's experience, individual mentality, previous models, and common habitual approaches, in addition to scientific principles. Evidence-based medicine is an approach attempting to reinforce scientific, systematic and critical thinking in physicians and provide the ground for optimal decision making. In this connection, the purpose of the present study was to find out to what extent education in evidence-based medicine affects clinical decision making.
METHODS
The present quasi-experimental study was carried out on 110 clinical residents, who started their education in September 2012; 62 residents ultimately filled out the questionnaires. The instrument used was a researcher-made questionnaire containing items on four decision-making approaches. The questionnaire was used both as a pre-test and a post-test to assess the residents' viewpoints on decision-making approaches. The validity of the questionnaire was determined using medical education and clinical professionals' viewpoints, and the reliability was calculated using Cronbach's alpha; it was found to be 0.93. The results were analyzed by paired t-test using SPSS, version 14.
RESULTS
The results demonstrated that evidence-based medicine workshop significantly affected the residents' decision-making approaches (p<0.001). The pre-test showed that principles-based, reference-based and routine model-based approaches were more preferred before the program (p<0.001). However, after the implementation of the program, the dominant approaches used by the residents in their decision making were evidence-based ones.
CONCLUSION
To develop the evidence-based approach, it is necessary for educational programs to continue steadily and goal-orientedly. In addition, the equipment infrastructure such as the Internet, access to data bases, scientific data, and clinical guides should develop more in the medical departments.
Introduction
Physicians have to make decisions in their practice ranging from diagnosis to analysis and treatment of the disease. In the process of clinical decision making, a body of variables such as signs and symptoms, medical knowledge, prior experience, the models doctors have acquired from their professors and even conjectures, emotions and impulses can affect a physician's decision (1). Parallel to the promotion and accreditation of clinical decisions, the concept of evidence-based medicine was a new and research-oriented approach proposed by Guyatt et al. at McMaster University in Canada. The new trend has been welcomed and developed increasingly in medical schools all over the world (2)(3)(4). This approach in medicine means the integration of the physician's clinical experiences with the best evidence and documentation, or the proper application of the best objective evidence for making accurate, fair and known treatment and diagnostic decisions about the patients (5,6). Evidence-based medicine tries to improve the quality of clinical decisions through developing and reinforcing the ability of raising questions, data search skills, critical evaluation, selection of the best evidence and documentation and the application of the results of critical analysis. Furthermore, it is an attempt to reduce the effects of errors arising from subjective judgment, out-of-date data or uncritical and linear inferences through objective clinical decisions derived from reliable and up-to-date scientific evidence (7,8).
With increasing volumes of medical data, students and scholars of this field more than ever need to retrieve the reliable and necessary data for diagnostic and treatment decisions from among large quantities of published articles. As a result, they should be able to criticize and analyze the reliability of the sources and the data. According to some evidence, most students search information from public Internet sites such as Google, and Wikipedia (9). However, they do not possess enough information about searching skills in scientific databases, advanced searching methods, collecting and refining the data and posing a variety of clinical questions. Therefore, in medical education, it is essential to develop the required skills of lifelong, self-directed learning, and the ability of posing questions in medical students (10).
Iranian medical universities have also adopted evidence-based medicine as a new approach in medical education. A variety of workshops have been held to develop the concept and philosophy of evidence-based medicine throughout the country. In the Iranian comprehensive health map, change in health education system and delivery of services by those who are knowledgeable, competent and accountable to the needs of the society is stressed (11).
Although different clinical groups including professors, clinical residents, medical students and other medical and paramedical staff play a part in development of evidence-based medicine, the role of clinical residents is of critical importance, because they not only deliver specialized clinical services, but also have a direct influence on the education and model adoption of junior residents and students. It has been revealed that medical students receive most of their information through interaction with clinical residents more than the time they spend with full-time faculty members (12,13).
The term "resident" or medical specialty student implies living in the hospital. Rider, et al. call medical residents "hospital instructors" (14). Therefore, clinical residents are valuable, potential sources of education who, because of their close contact with junior students, can teach the most necessary, practical, clinical, and educational points (15,16). Numerous studies, for example, have demonstrated that senior residents spend a lot of their time teaching junior residents and medical students (17)(18)(19)(20). Medical students have pointed out that over one-third of their learning in clinical settings has been made by residents (21) and have considered the residents' role, particularly during the first year on the clinical wards, as critical and determining (22)(23)(24).
According to Sánchez et al.'s findings at Mexico National University, senior residents maintained that they spend more than 32.5% of their time teaching medical and paramedical students and junior residents. Sánchez believes that universities of medical sciences should assess the residents' educational needs and arrange well to meet them (25). In addition, the experience of Kathmandu University indicated that life-long learning, active and self-directed learning and evidence-based medicine are essential to clinical residents (26).
Regarding the model role of clinical residents in development of evidence-based education in junior students, identifying their treatment methods and clinical decisions is very significant in both patients' health and medical education. Therefore, the present study aimed to investigate the clinical decision making by clinical residents of Shiraz University of Medical Sciences before and after implementing evidence-based medicine.
Methods
The present quasi-experimental study was carried out using a researcher-made questionnaire in 2013. The participants comprised all of the 110 clinical residents at Shiraz University of Medical Sciences, entering the university in October, 2012. They took part in a 30-hour educational program over five days in a row. The contents of the program covered the main topics of evidence-based medicine, according to the content suggested by the Ministry of Health and Treatment. The method of education was planned based on active learning and the five steps of evidence-based medicine.
In all steps, the main approach of education was planned and implemented based on active learning, raising advanced clinical questions, searching data sources, assessing the retrieved articles, selecting options, final decision making and ultimately assessing of performance. In order to find out the residents' viewpoints on decision making methods, we used a questionnaire containing two sections. To prepare the questionnaire, we first raised an open ended question: "How are the clinical decisions usually made on wards?" A specialized focal group including 10 faculty members with pediatrics, social medicine, cardiology, nephrology, neurology, surgery, medical education, gastroenterology, and gynecology specialties answered the question. Having summarized the responses, the researchers came up with 10 decision making approaches, forming the main items of the questionnaire. The items were categorized into four general topics including "scientific principles", "Evidence-Based Medicine approach", "Subjective Personal Experiences", and "Modeling".
The rate of using decision-making methods was calculated using Likert-scale scoring system, i.e. "rarely=1", "sometimes=2", and "usually=3" before and after the implementation of the program. The second section of the questionnaire included eight items as complementary data about prerequisites of evidence-based medicine. Five medical education professionals with clinical specialty helped to prepare the items.
One month after the program, the questionnaires were sent to the clinical wards. To observe the ethical considerations, the questionnaires were filled out and collected anonymously. Of the 110 questionnaires sent, 62 were completed fully and precisely and returned. For descriptive statistics, frequency and standard deviation were calculated; for inferential statistics, the paired-sample t-test and independent-sample t-test were used through SPSS, version 14 (SPSS Inc, Chicago, IL, USA).
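As a simple illustration of the pre/post comparison, the paired-sample t-test could be reproduced with SciPy as sketched below; the scores are invented examples on the 1-3 Likert scale, not the study data.

import numpy as np
from scipy import stats

# Each entry is one resident's mean score for the evidence-based approach
# before and after the workshop (illustrative values only).
pre_ebm = np.array([1.4, 1.8, 1.5, 1.7, 1.6, 1.5])
post_ebm = np.array([2.3, 2.6, 2.4, 2.7, 2.5, 2.4])

t_stat, p_value = stats.ttest_rel(pre_ebm, post_ebm)  # paired-sample t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")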
Results
Of the 62 participants of the study, 19 (31.7%) were male and 41 (68.3%) female. The age range of the participants was from 25 to 40 with a mean of 31.1±4.1. The results showed that the evidence-based medicine training program affected the residents' decision making significantly. Furthermore, 53% of the residents asserted, after the program, that they used the evidence-based medicine approach in teaching junior residents and medical students to a great extent. They also stated that they encouraged the students to make use of this approach. Table 1 displays the frequency of the participants' viewpoints on each category.
The mean scores of the residents' viewpoints on decision making approaches before and after the training program are shown in Figure 1.
The results of the paired t-test showed that residents' clinical decision making using the evidence-based approach improved significantly in comparison to that before the training program (p<0.001). Furthermore, decision-making approaches based on "personal judgment and experience" and "modeling" decreased in comparison to the use of these methods before the program (p<0.001). However, the use of logical and sensible methods such as consulting scientific references or pocket guidebooks did not change significantly (p=0.52) (Table 2).
Analyzing the prerequisite skills for the application of evidence-based medicine, we found that about 43% of the residents most often had enough skills to search for the required articles. 16.6 percent of the residents could most often, and 67.2 percent of them could sometimes, download their required articles. 77.4 percent of the participants most often had access to the Internet. Only 18 percent of the residents had the experience of publishing a research article. Also, only 22.6 percent of the residents stated that the computers in the ward were equipped with up-to-date CDs (as an up-to-date source of scientific articles), and only 37.2 percent of them said that there were clinical guidelines on their wards.
The residents were asked an open-ended question: "what sites and databases did you use to search articles?" before and after the training program. Most residents answered that they usually used general sites such as Google or Yahoo before the program but after the program they got familiar with specialized databases to some extent.
Discussion
Experts believe that evidence-based medicine is an approach which can reinforce systematic, critical and scientific thinking and provide the ground for optimal clinical decisions. Decision making, selection and application of documentary evidence are the main parts of evidence-based medicine because this can finally affect the patients' health (15,27,28). According to the findings of the present study, the two methods of "reliance on established scientific principles" (2.24) and "model making from the environment" (2.37) were the most preferred methods used by residents in their decision-making. Although the use of logical and reasonable methods such as "use of reference books" was relatively predictable because of the scientific and model-oriented nature of medicine, the findings demonstrated that the use of methods such as "subjective judgment based on personal experience" was as frequent as the use of "logical and reasonable" methods among the residents (2.16). This is both clinically and educationally interesting. The least used method before the training program was the method based on documentary and critical thinking (1.59). Sadeghi et al. in their study (2011) found that the most common sources for residents in their decision making were the "use of reference books" (59.6%) and "clinical experiences" (44%), and only 19 percent of the residents used new articles (29).
Comparing the use of decision making approaches after the program, we found that the training course significantly affected the residents' use of decision making approaches, or the training program at least directed them to more proper decision making approaches. Although there was no significant difference between the use of logical and sensible methods based on scientific approaches before and after the training program, the decision making approaches based on experience and personal judgment as well as getting model from the environment, which are not scientifically based and may merely arise from habituation and observations, were significantly used less often after the training. Furthermore, the use of evidence-based approach increased significantly after the training. The residents stated that they made use of this approach in teaching undergraduates.
A lot of studies have demonstrated the positive effects of evidence-based medicine on residents' attitudes, knowledge, and skills (30)(31)(32)(33)(34)(35)(36). Such an approach should not only be maintained but also reinforced in clinical settings. Clinical residents, who possess the highest and most important academic degrees, require research competence and attitude, while based on the findings of this study only 16 percent of the residents taking part in this study had some experience regarding the publication of articles.
Research activities are in close connection with evidence-based medicine as they follow the scientific approach, from asking questions, to looking for and studying scientific sources, to organized methodology, to presenting results, to critical thinking and finally to knowledge publication. As a result, it is necessary to include the required courses in the general medicine curriculum to reinforce the students' thinking (36). By and large, training on evidence-based medicine has improved the residents' attitude towards decision-making approaches and their application. This makes proper planning for this trend a necessity. The basis should be laid during the general medicine curriculum and the training should then be extended through the residency study period.
Conclusion
Overall, it seems that training in the EBM course has had a positive influence on the residents' approach to the use of scientific evidence, and it is recommended that EBM courses be continued for other residents. Also, for further development and reinforcement of the evidence-based approach in clinical decision making, the provision of infrastructure such as the Internet and accessible databases, scientific gatherings among residents such as journal clubs on evidence-based medicine, and the compilation of educational guidelines in clinical departments are necessary.
Why segment the maxilla between laterals and canines?
Introduction: Maxillary surgery on a bone segment enables movement in the sagittal and vertical planes. When performed on multiple segments, it further provides movement in the transverse plane. Typical sites for interdental osteotomies are between laterals and canines, premolars and canines, or between incisors. Additionally, osteotomies can be bilateral, unilateral or asymmetric. The ability to control intercanine width, buccolingual angulation of incisors, and correct Bolton discrepancy are some of the advantages of maxillary segmentation between laterals and canines. Objective: This article describes important features to be considered in making a clinical decision to segment the maxilla between laterals and canines when treating a dentoskeletal deformity. It further discusses the history of this surgical approach, the indications for its clinical use, the technique used to implement it, as well as its advantages, disadvantages, complications and stability. It is therefore hoped that this paper will contribute to disseminate information on this topic, which will inform the decision-making process of those professionals who wish to make use of this procedure in their clinical practice. Conclusions: Segmental maxillary osteotomy between laterals and canines is a versatile technique with several indications. Furthermore, it offers a host of advantages compared with single-piece osteotomy, or between canines and premolars.
INTRODUCTION
Cohn-Stock was the first to describe anterior segmental maxillary osteotomy in 1921. Since then, several changes have been made to this surgical approach and new osteotomy models have emerged. 1,2 Currently, maxillary surgery is a routine procedure for the correction of dentofacial deformities, and can be performed in one or multiple bone segments. 3,4 Venugoplan et al 5 found, after studying the number and types of procedures performed on patients hospitalized for orthognathic surgery in the United States, that maxillary segmentation is the most frequently performed procedure, involving 45.8% of the cases. 5 Maxillary surgery on a bone segment enables movement in the sagittal and vertical planes. When performed in multiple segments, it also comprises the transverse plane. It is therefore touted as a rather versatile technique. 6 Two or three bone segments can be used. Moreover, interdental osteotomy can be performed in the following sites: between laterals and canines, between premolars and canines (with or without premolar extractions), or between incisors. It can be bilateral, unilateral or asymmetric 7 (Fig 1).
Segmental maxillary surgery between canines and premolars, or between central incisors, is cited by many authors who report its advantages, disadvantages, complications and stability. 3,8,9,10 Very few articles in the literature address the technique of segmental maxillary osteotomy between laterals and canines. Reyneke 9 and Wolford et al 10 cited the technique and emphasized some of its advantages, such as: management of intercanine width, and of the curves of Spee and Wilson; control of incisor buccolingual angulation; less orthodontic mechanics in the postoperative phase; and greater overall ease. This surgical technique is indicated for the treatment of maxillary protrusion of which repositioning with orthodontic treatment alone is not feasible due to substantial tooth movement and potential damage to the periodontium. 3,8 It can correct multiplanar maxillary deformities within a single surgical stage, such as in the following conditions: transverse maxillary expansion concurrently with vertical and sagittal positioning of incisors; anterior open bite correction specifically indicated to speed up orthodontic treatment time; 3,11,12 and to correct tooth size discrepancy. 10 Although considerable advances in the stability and predictability of maxillary surgery have been made over the years, complications can still occur, such as bone necrosis, 4 oronasal and sinus fistula, tooth devitalization and periodontal defects. 1,8,11 Therefore, based on the very scarce scientific literature available for this technique and the clinical experience of the authors, this article aims to address key issues to be considered when making a clinical decision to segment the maxilla between laterals and canines in the treatment of dentoskeletal deformities. It further discusses the history of this surgical approach, indications for its clinical use and the recommended technique as well as its advantages, disadvantages, complications and stability. It is therefore hoped that this paper will contribute to disseminate information on this topic, which will inform the decision-making process of those professionals who wish to make use of this procedure in their clinical practice.
HISTORY
Although segmental maxillary osteotomy is currently employed in many treatment centers for dentofacial deformities, its development has been gradual and characterized by a long history of surgical techniques. Von Langenbeck described the use of horizontal osteotomies for the first time in 1859, and used this technique in 1861 to resect a patient's maxilla. 13 His pioneering efforts were followed by colleagues around the world, which has led to the emergence of various changes and new techniques.
In 1867, Cheever described the Le Fort I maxillary technique involving mandibular displacement to facilitate access to the nasopharyngeal region for the purpose of resecting a tumor. 13 In 1921, Cohn-Stock performed the first anterior segmental maxillary osteotomy to treat a skeletal maxillary protrusion. Despite improvements in occlusion, this procedure compromised facial esthetics owing to an excessive retraction of the anterior teeth. 14 This approach was the starting point for the development of new techniques. [13][14][15][16] By the 1980s, as a result of these developments, the increased flexibility of the different types of osteotomy, and advances in orthodontics and orthognathic surgery, these techniques had become standard procedures for the correction of dentofacial deformities in three dimensions. 17,18
INDICATIONS OF MAXILLARY SEGMENTATION BE-TWEEN LATERALS AND CANINES
Preoperative orthodontic goals play a major role in determining when premolar extraction will be required, when the curves of Spee, either marked or reverse, will be leveled orthodontically or surgically, when intra- and interarch orthodontic procedures will be required to obtain appropriate dental positions, and how the maxillomandibular transverse relationship will be addressed. 19 The indication to segment the maxilla between laterals and canines should be established during the phase of orthodontic and surgical planning (Table 1).
Poor transverse relationship of the maxilla and control of intercanine width
Maxillary expansion surgery by means of interdental osteotomies yields a good transverse relationship between a hypoplastic maxilla and the mandible. However, the segmental maxillary osteotomy technique applied between canines and premolars does not allow manipulation of the intercanine width, given that both canines are located in the same bone block (Fig 2A); correction of the maxillomandibular transverse discrepancy in the canine region would therefore not be feasible.
When the osteotomy is performed between laterals and canines, it becomes possible to manipulate the intercanine width (Fig 2B). This approach favors changes in the torques of the bone segments and in the curve of Wilson, and correction of the transverse maxillomandibular discrepancy in the regions of the molars, premolars and canines. When the osteotomy is performed between the central incisors, the intercanine width can be manipulated, but surgical torque control of the posterior segments would be harder to implement, since each segment would comprise incisors, canines, premolars and molars, and these teeth have different torques 9 (Figs 2C, 3, 4). (Table 1: Indications to segment the maxilla between laterals and canines.)
Correcting Bolton discrepancy
Size discrepancy in individual teeth or groups of teeth may be associated with the emergence of changes in occlusion. For maxillary teeth to occlude properly and harmoniously with their mandibular antagonists, there must be adequate proportionality between the different tooth sizes. 20 Pizzol et al 21 reported Bolton discrepancy in an average of 90% of patients with dentoskeletal deformities. When caused by excessive anteroinferior dental volume, this discrepancy can be corrected in several ways: selective interproximal dental stripping, changes in the buccolingual or mesiodistal angulation of the anterior teeth, mandibular incisor extraction, or creation of space in the upper jaw between laterals and canines. These spaces can be created through orthodontic mechanics, such as the use of springs and changes in the buccolingual angulation of the incisors, or by maxillary surgical segmentation between laterals and canines. 10 With segmental maxillary surgery, one can leave spaces between laterals and canines while maintaining ideal occlusion, and subsequently enhance the laterals with direct restorations or ceramic fragments (Figs 5, 6, 7). This will favor the predominance of the maxillary central incisors, the smile arc and smile esthetics. Locating the interdental osteotomy between canines and premolars, or between the central incisors, might correct the transverse maxillomandibular relationship, but not the Bolton discrepancy. 10

Figure 5 - Lateral view illustrative of a clinical case with Bolton discrepancy and excess lower dental volume: A) Preoperative clinical condition showing a Class II sagittal relationship and mesiodistal size deficiency of maxillary teeth (smaller laterals). B) Single-piece maxillary surgery: canine Class II sagittal relationship due to Bolton discrepancy. C) Maxillary surgery in three segments: canine Class I sagittal relationship with presence of diastema between laterals and canines to correct the Bolton discrepancy. D) Three-segment maxillary surgery at the esthetic stage, where the diastema has been closed through indirect restoration.
Controlling incisor buccolingual angulation
Buccal protrusion of incisors is more common in patients with maxillary hypoplasia, and can be corrected by means of the following: premolar extraction or selective stripping and retraction; distal movement of posterior teeth; surgically assisted rapid maxillary expansion or surgical expansion through segmental maxillary osteotomy in three segments.
Segmental surgery between laterals and canines provides the best control in uprighting the segments, as compared with segmentation between canines and premolars. Canines are transitional teeth between the anterior and posterior segments, and therefore have torques that differ from those of the incisors. One can thus, for example, upright the incisors without the canines being affected. If this were performed with the technique between canine and premolar, the canines would lose their ideal occlusion due to contact with the mesial surface of the premolars, or would be left in an infraocclusion position 10 (Fig 8).
An easier technique
This technique is more easily performed than osteotomy between canines and premolars, since the site is more anterior and the bone in this region is thinner. 9
ADVANTAGES OF SEGMENTAL MAXILLARY OSTE-OTOMY
When maxillary surgery is performed in multiple segments, it includes, in addition to the sagittal and vertical planes, the transverse plane as well. The tridimensional control afforded by these segments ensures better esthetic and functional results. 6 The advantages of this technique are described in Table 2.

A single surgical stage

Segmental maxillary surgery involving three segments enables correction of the vertical, sagittal and transverse planes in a single surgical stage, 6 whereas surgically assisted maxillary expansion corrects the transverse relationship only. In the latter approach, the diastema between the central incisors and the expansion screw will be present for six months prior to the installation of fixed orthodontic appliances, entailing a longer orthodontic treatment and a longer overall surgical time. 10

Correction of intra-arch asymmetry

The osteotomized segments can be manipulated independently, thereby allowing tridimensional corrections to be implemented. This is not possible in surgeries involving a single segment. Intra-arch asymmetries can be corrected by asymmetric manipulation of the segments, such as closing or creating spaces (Fig 9).
Controlling the Curve of Spee
An accentuated curve of Spee of the maxilla is more common in patients with a high occlusal plane and anterior open bite. This condition hinders the development of a good dental intercuspation. Careful evaluation of this curve is important given that if correction is performed with orthodontic mechanics alone, it may not be stable. 10 The anterior and posterior segments of the maxilla can be leveled individually with orthodontic mechanics by establishing different levels for anterior and posterior teeth. Leveling will then be performed during maxillary surgery in three segments, which will enable the correction of the accentuated curve of Spee and a better occlusion (Figs 10, 11, 12).
Controlling the curve of Wilson
If the occlusal surfaces of maxillary teeth are inclined labially, it may become difficult to achieve an appropriate occlusal relationship. In the presence of a transverse maxillary deficiency, an accentuated curve of Wilson and posterior crossbite, an orthodontic or orthopedic correction, or even an approach with surgically assisted maxillary expansion, will be inappropriate, since this curve will be rendered even more accentuated with these mechanisms. In these cases, surgical expansion by means of segmental maxillary osteotomy may be indicated to decrease the curve of Wilson and improve the final occlusion 10 (Fig 13).
DISADVANTAGES OF SEGMENTAL MAXILLARY OS-TEOTOMY BETWEEN LATERALS AND CANINES
Segmental maxillary surgery between laterals and canines has some disadvantages when compared with surgery between canines and premolars in the cases presented in Table 2.
Presence of two occlusion planes in the maxilla between canines and premolars
The first disadvantage is when two occlusion planes are already present in the maxilla, and their transition is between canines and premolars. Thus, one of the goals of preoperative orthodontic treatment would be leveling the maxilla in three segments: one anterior, from canine to canine, and two posterior, from premolars to second molars. The leveling of these curves would be carried out surgically 9 (Fig 14).
Anteroposterior skeletal excess of the maxilla
The second downside is when there is anteroposterior skeletal excess of the maxilla. One can plan bilateral premolar extractions by segmenting the maxilla in this region, and then move the canine-to-canine block posteriorly, thus achieving a better, more esthetic and functional outcome 9 (Fig 15).
SEGMENTAL MAXILLARY OSTEOTOMY SEQUENCE
A mucoperiosteal maxillary buccal incision is performed to expose the maxilla, above the attached gingiva and the tooth apices, extending from the mesial of the first molar on one side to the contralateral side. Mucoperiosteal detachment is performed, exposing the bone in the anterior maxillary region, with tunneling in the lateral region of the maxilla, thereby preventing laceration of the maxillary buccal pedicle and exposure of the buccal fat pad. Delicate detachment is necessary in the interdental region between the roots of the lateral incisor and the canine on each side, and of the nasal floor mucosa, the medial wall of the nasal cavity, and the nasal septum perichondrium.
A tool should be used to protect the nasal mucosa. Le Fort I osteotomy is carried out using a 701 fissure bur and reciprocating saw (Fig 16).
Interdental osteotomy of the maxillary cortex is performed with the aid of a 699 fissure bur (Fig 16B and Table 3) between the roots of lateral incisors and canines.
A spatula osteotome is used for the interdental osteotomies (with digital support on the palatal mucosa to detect the presence of the instrument and thus avoid damage to the soft tissue); septal and curved osteotomes are used, respectively, in the regions of the nasal septum and the pterygoid process of the maxilla (Fig 17).
Lowering of the maxilla is performed along with mobilization with a Rowe forceps, Seldin elevator, or Tessier lever.
If necessary, a septoplasty, turbinoplasty and suturing of the nasal mucosa can be performed at this time.
Palatal osteotomy is then performed using ultrasonic tips (Fig 16C) in the shape of an H (Table 3). The paramedian incision in the palatal mucosa can be performed between the raphe and the palatal artery, extending from the region of the first molar to the ipsilateral canine. It is important to position a #15 scalpel blade at a 45° angle; this ensures improved healing through broader connective tissue contact. This incision allows a transverse maxillary expansion greater than 10 mm while preventing a complication involving communication between the maxillary sinus and the oral cavity. Through this incision, the mucoperiosteal detachment of the palate is performed, leaving the mucosa of the alveolar process attached 22 (Fig 18).

At this time, the three segments are mobilized, the palatal guide is inserted and the intermaxillary splint is placed in the final occlusion (Fig 19).

Finally, internal rigid fixation is performed with miniplates and 2.0 mm system titanium screws (Fig 20). This fixation follows the vertical planning of the maxilla obtained during surgery through an external reference with a Kirschner wire. It is important, therefore, that the maxillary bone be free from bony interferences and remain passive in its final planned position.

After this fixation, autogenous bone grafts are used to improve skeletal stability, maintain the desired inclination of the maxillary incisors, and provide primary bone healing in the regions of the interdental gaps and the maxillary step 23,24,25 (Fig 21).

The intermaxillary splint is then removed and, in centric relation, the relationship between the mandible and maxilla is examined to ensure the correct position of the latter. Plication of the alar base is then performed, and the wounds are sutured.
STABILITY
Marchetti et al 26 compared the stability of surgically assisted palatal expansion and segmental maxillary osteotomy two years postoperatively. Their results showed that segmental osteotomy for maxillary expansion yielded greater stability.
Krestscmer et al 27 conducted a comparative study on the stability of Le Fort I osteotomy in one segment and three segments. The authors concluded that there was no statistical difference in bone relapse in multiplanar movements in these techniques. They reported that the decision to segment the maxilla must be made in accordance with the occlusal benefits obtained, and that the individual indications of each patient should therefore be taken into account.
Arpornmaeklon et al 12 retrospectively analyzed the stability of maxillary advancement comparing a group subjected to Le Fort I osteotomy without maxillary segmentation (11 patients) with a group who underwent Le Fort I osteotomy with maxillary segmentation (15 patients). The analysis was performed with cephalometric radiographs obtained before surgery (T 1 ), immediately after surgery (T 2 ), and at least one year after surgery (T 3 ). Results showed that the cases without segmentation experienced a higher relapse in both vertical and horizontal directions than cases with maxillary segmentation.
COMPLICATIONS
The literature reports that the most frequent complications of segmental maxillary osteotomy are: necrosis of the repositioned maxillary segment, broadening of the alar base, nose tip rotation, and tooth devitalization, particularly of the canines. 8 It further stresses the influence of the chosen surgical technique on the results. 28 Other complications to consider are differences in the dentoalveolar region between the anterior and posterior segments, bone loss and gingival margin degeneration. 29 Sher 30 sent out 135 questionnaires to oral and maxillofacial surgeons in the United States and Canada. The total number of segmented osteotomies was 6,195, of which 1,133 had been performed in the anterior maxilla. The complication rate was 0.32%, and the most prevalent complications were tooth mobility, tooth injury and tooth loss. The researcher suggested that, to avoid complications, it is necessary to favor the use of orthodontic mechanics over segmentation; to avoid interdental osteotomies if the space between the roots is insufficient; and to use osteotomes instead of saws. He concluded that factors such as surgeon experience, a shorter surgical time and proper postoperative follow-up can minimize complications (Table 3).
Dorfman and Turvey 31 documented changes in the level of the interdental bone crest after segmental osteotomies of the maxilla and mandible. The researchers inferred that a minimum space of 3 mm would be safe for performing interdental osteotomies between two adjacent teeth (Table 3). They also stated that the success of interdental osteotomies depends on maintaining an adequate blood supply to the osteotomized segments through planned incisions and minimal periosteal detachment in osteotomized segments (Table 3).
Interdental osteotomies must be designed in conjunction with preoperative orthodontic treatment to ensure sufficient space to perform osteotomies. 32 This is an important factor, since root divergence is critical to the success of segmental osteotomy. 22 Performing interdental osteotomies in regions with restricted interradicular space is described as a risk factor for the development of marginal bone loss. 33
CONCLUDING REMARKS
Preoperative orthodontic goals can influence the achievement of suitable functional and esthetic results. Transverse maxillomandibular discrepancies of up to 4 mm, dental volume or Bolton discrepancies, changes in buccolingual angulation, and intra-arch asymmetry are occlusal problems that can be solved through orthodontic mechanics alone. However, there are situations in which it is necessary to segment the maxilla, namely: transverse discrepancies greater than 4 mm, the presence of two occlusion planes, and major root resorption.
Segmental maxillary osteotomy between laterals and canines is a versatile technique with several indications. Furthermore, it offers a host of advantages compared with single-piece osteotomy, or between canines and premolars.
It is important to learn about its indications, limitations and surgical technique with proper manipulation of the gingiva and bone, thus avoiding transoperative and postoperative complications.
As shown above, the literature substantiates the stability and complications of segmental maxillary osteotomy, but few studies have reported these features of the technique when it is employed between laterals and canines. Further studies are warranted to shed more light on this technique by addressing stability, complications, surgical and orthodontic treatment time, the quality of functional and esthetic results, and regional epidemiological data.
|
2016-05-04T20:20:58.661Z
|
2016-01-01T00:00:00.000
|
{
"year": 2016,
"sha1": "f0765f74d2cbcbab7149df9b8ae0529084de682b",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/dpjo/v21n1/2176-9451-dpjo-21-01-00110.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f0765f74d2cbcbab7149df9b8ae0529084de682b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
}
|
237656970
|
pes2o/s2orc
|
v3-fos-license
|
Analysis of instability causes in the bi-dc converter and enhancing its performance by improving the damping in the IDA-PBC control
The poor damping of the bidirectional dc (bi-dc) converter caused by constant power loads makes the power system prone to oscillation, and the non-minimum-phase characteristic also jeopardises voltage stability. To address these challenges, interconnection and damping assignment passivity-based control (IDA-PBC) is utilised to improve the transient response. The influences of the right-half-plane (RHP) zero on the stability margin and controller design are illustrated by zero dynamics analysis. Then port-controlled Hamiltonian modelling is used to obtain the IDA-PBC control law, which is suitable for the bi-dc converter and independent of the operation mode. The system dissipation property is modified, and the desired damping is thus injected to smooth the transient voltage. To remove the voltage error caused by the RHP zero and to adjust the damping ratio, an energy controller with an adjustment factor is introduced. Besides, a virtual circuit is established to explain the physical meaning of the control parameter, and a parameter design method is given. Passivity analysis assesses the controller performance. Simulation results are analysed and compared with those of conventional droop control and virtual inertia control.
INTRODUCTION
In recent years, the dc system has become a potential alternative to the ac system in some scenarios and is widely used in the fields of shipboards, electric vehicles and smart buildings [1][2][3][4]. A typical topology of a single bus dc distribution network is shown in Figure 1, consisting of photovoltaic (PV) generation, an energy storage system (ESS) including a battery, and ac/dc loads. This dc distribution network is connected to the utility grid and/or an ac microgrid (ac-MG) via a grid-connecting converter. The load converters are generally controlled by a high-bandwidth regulator. Therefore, constant power loads (CPLs) arise, degrading stability and even destabilising the system [5][6][7][8][9][10]. Besides, some converters, such as the bidirectional dc (bi-dc) converter, are influenced by both CPLs and a right-half-plane (RHP) zero. The RHP zero gives the system non-minimum-phase characteristics, degrading the stability margin and making the controller design difficult. Therefore, advanced control methods need to be developed to improve the system damping and maintain robust voltage regulation against the influence of the RHP zero.
The stability challenges of dc converters can be solved by either passive damping [6] or active damping [7,8] methods. Adding a passive damper can effectively address the stability problem [6], but the system complexity is inevitably increased and the system efficiency is reduced. Among the active damping methods, the virtual impedance method is adopted in [7,9] to reshape the output impedance of voltage regulation converters and the input impedance of load converters in a dc MG, respectively. However, the load performance might be affected. To fulfil the impedance-based stability criterion, the output impedance of converters is modified to stabilise the system [10]. An inertia emulation approach is proposed in [11] to control the external characteristics of converters and improve their dynamic response. From the perspective of passivity theory, system passivity can be enforced through controlled active damping impedances [12]. The dc voltage is indirectly stabilised by passivity-based control (PBC) within the Brayton-Moser framework, and series or parallel damping based solutions are used to solve the issue of the error dynamics [13]. A simplified parallel-damped passivity-based controller is proposed in [14] to maintain robust output voltage regulation, and a complementary proportional integral derivative (PID) controller is designed to remove the steady-state error. The classical linear controllers are simple to design and implement, but they have difficulties in dealing with the uncertainty of system parameters, and the RHP zero is ignored.
PBC, a kind of non-linear control method, achieves energy shaping through a damping injection matrix, and its control law is easily implemented using the structural properties of the physical system [28]. In [19], virtual damping injection is realised and the voltage regulation issue is addressed. The port-controlled Hamiltonian (PCH) model describes the energy interaction via ports between the modelled system and the external environment. Interconnection and damping assignment PBC (IDA-PBC) can achieve active damping control of PCH systems by modifying the energy interchange and dissipation, and the transient response is thus improved [16,17]. It is worth noting that an arbitrary interconnection of IDA-PBC-controlled systems will remain stable because passivity is preserved in the resulting systems [27].
Based on passivity theory, [15] proposes a non-linear controller to change the system energy dissipation property and enhance the voltage regulation ability against system parameter variations. The voltage stability challenge of the dc MG is addressed in [27] by applying the IDA-PBC method to the PCH model of source-side converters. In [18], the IDA-PBC technique is utilised in the voltage outer loop and the damping performance is improved; in addition, an extra integral loop is used to eliminate the voltage error. The PCH model of a power electronic transformer is built in [19], and IDA-PBC is applied to address its voltage regulation issue. Adaptive IDA-PBC can also be used to address the non-minimum-phase characteristic of the bi-dc converter [20], but the model only considers the buck state. In [21], an improved IDA-PBC scheme is proposed to make the interconnection matrix adaptive; PCH models for dc converters are established, and unique control equations are acquired. The IDA-PBC is utilised in [22] to ensure cascaded system stability via the Hamiltonian function, and the dynamic instability issue is solved. In [23], the boost converter is modelled as a PCH system, and the damping performance is improved by an adaptive IDA-PBC method; an equivalent circuit is also derived and control parameters are determined. In [24], a modified IDA-PBC is proposed to ensure the passivity of the LC input filter-dc/dc converter system, considering the interaction between the LC filter and the converter. Combined with the modified IDA-PBC, a non-linear observer is presented in [25] to eliminate steady-state errors and reduce the number of sensors. In [26], an adaptive energy shaping control based on IDA-PBC is developed to ensure the large-signal stability of a power converter with an input filter.
The above works do not study the application of IDA-PBC to the bi-dc converter, whose instability causes are related to its operation state, and the influence of its RHP zero on the controller design is ignored. Because of the poor damping of the bi-dc converter supplying CPLs, it is necessary to develop an IDA-PBC control law that is suitable for both the boost and buck states. Meanwhile, the zero dynamics is studied to guide the controller design, so that better control performance can be obtained.
In this paper, the zero dynamics of the bi-dc converter in both the boost and buck states is analysed to elaborate its instability causes and the constraints it places on controller design. The IDA-PBC control law is derived for the bi-dc converter by modifying the energy interchange and dissipation properties. Based on the results of the zero dynamics analysis, a reasonable IDA-PBC control law is selected to render the system passive, and a proportional integral (PI) controller with an adjustment factor k is added to regulate the steady-state response. A virtual circuit is established to design the parameters, and the reasonable ranges of r_1 and k are determined by eigenvalue analysis. Passivity analysis is given to verify the proposed method.
The contributions of this paper are:
1. From the perspective of the RHP zero dynamics, the instability causes of the bi-dc converter are illustrated. The constraints on the controller design are studied, which can be used to guide the IDA-PBC design.
2. A PBC method, suitable for the bi-dc converter and independent of its operation mode, is proposed. IDA-PBC forms the current inner loop, which modifies the energy dissipation and increases the system damping to improve the transient response. The outer loop consists of an energy controller with an adjustment factor k, reducing the voltage error.
3. A virtual circuit is established to explain the physical significance of the control parameter. The ranges of the parameters are determined via eigenvalue analysis, and passivity analysis verifies the effect of the proposed method.
The rest of the paper is organised as follows. Section 2 introduces the studied topology and analyses the RHP zero dynamics. The IDA-PBC control law is obtained and the energy regulator is designed in Section 3. The virtual circuit is established in Section 4. The dynamic and passivity analyses are given in Section 5. Section 6 verifies the proposed control method by simulation, and Section 7 gives the conclusion.
MODEL AND PROBLEM DESCRIPTION
This article focuses on designing the passive controller for ESS converters in parallel operation and on stabilising the bus voltage of the dc distribution network. With the introduced virtual resistance, the converter is rendered passive and the damping improvement is achieved.
The ESS is connected to the dc bus through a buck-boost converter (the bi-dc converter studied in this article), which provides the voltage transformation. The simplified circuit of Figure 1 is depicted in Figure 2. By controlling Q_1, the boost operation (red arrow) is achieved, and the buck operation (blue arrow) is realised by controlling Q_2; the trigger pulses are complementary, namely d_1_buck = 1 − d_1_boost, where d_1_buck and d_1_boost are the duty ratios in the buck and boost states, respectively. L_s and C_out are the input filter inductance and the output capacitor, v_s is the output voltage of the ESS and i_s is the inductor current. v_out is the voltage across the output capacitor, and i_out is the output current through the line. The PV and CPLs can be modelled together as an equivalent current source: P_const is the power consumed by the CPLs and i_PV is the equivalent output current of the PV. The zero dynamics of the bi-dc converter is investigated in this section.
Modelling and zero dynamics analysis in boost state
When the bi-dc converter operates forward (i.e. boost state), its state-space equation is shown in Equation (1).
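The display equation itself was not carried over in extraction; a reconstruction of the averaged boost-state model that is consistent with the equilibrium relations used in the proofs below (with the PV current lumped into the net constant power) is, as an assumption:

    L_s \frac{di_s}{dt} = v_s - u\,v_{out}, \qquad
    C_{out} \frac{dv_{out}}{dt} = u\,i_s - \frac{P_{const}}{v_{out}}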
where u = 1 − d_1_boost is the input signal. For the system in Equation (1), the set of equilibrium points is ε_x1, as shown in Equation (2).
When the bi-dc converter operates in the boost state, the following two conclusions could be obtained. The subscript '*' stands for steady-state value.
1. Corresponding to the output y 1 = v out −v out* , the zero dynamics characteristic is unstable. 2. Corresponding to the output y 2 = i s −i s* , the zero dynamics characteristic is stable, but it is not attractive at v out* .
Proof: Based on the second formula in Equation (1), u = P const /(i s •v out* ) can be derived when v out = v out* . Accordingly, Equation (3) can be obtained by combining this result and the first equation in Equation (1).
The slope of f_1(i_s) at i_s = i_s*, given by Equation (4), is positive. Hence, it is proved that the system in Equation (1) is unstable at i_s*.
On the other hand, based on the first formula in Equation (1), u = v s /v out can be derived when i s = i s* . Accordingly, Equation (5) can be obtained by combining this result and the second equation in Equation (1).
Hence, it is proved that the system in Equation (1) is stable at the equilibrium point v out* , but it is not attractive.
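A quick symbolic check of both claims, using the reconstructed model above (a sketch assuming SymPy is available; i_s* = P_const/v_s follows from the equilibrium relations):

    import sympy as sp

    i_s, v_out, v_s, P, L, C = sp.symbols(
        'i_s v_out v_s P_const L_s C_out', positive=True)

    # y1 = v_out - v_out*: substitute u = P_const/(i_s*v_out*) into the
    # inductor equation and differentiate at the equilibrium current
    f1 = (v_s - P / i_s) / L
    print(sp.simplify(sp.diff(f1, i_s).subs(i_s, P / v_s)))
    # -> v_s**2/(L_s*P_const) > 0, so this zero dynamics is unstable

    # y2 = i_s - i_s*: substitute u = v_s/v_out into the capacitor equation
    f2 = ((v_s / v_out) * (P / v_s) - P / v_out) / C
    print(sp.simplify(f2))
    # -> 0: every v_out is an equilibrium, stable but not attractive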
Remark 1: The bi-dc converter, operating as a boost circuit, is prone to instability because of its non-minimum-phase characteristics. The influence of CPLs exacerbates this instability, as shown by Equations (4) and (5).
Remark 2: From the analysis above, the zero dynamics corresponding to i_s is stable, but it is not attractive at v_out*. Therefore, a steady-state voltage error would arise, depending on the initial condition, when a regulator is designed to control i_s and thereby indirectly control v_out.
Remark 3: Designing a controller becomes difficult. The RHP zero limits the system bandwidth, making the dynamic response significantly slow.
Modelling and zero dynamics analysis in buck state
When the bi-dc converter operates in reverse (i.e. buck state), the state-space equation is shown in Equation (6).
where u = d 1_buck is the control signal. For the system in Equation (6), the set of the equilibrium point is ε x2 as shown in Equation (7).
When the bi-dc converter operates in the buck state, the following conclusion could be obtained.
Corresponding to the output y = i s −i s* , the zero dynamics characteristic is stable, but it is not attractive at v out* .
Proof: Based on the first equation in Equation (6), the control signal u = v s /v out can be deduced when i s = i s* . Accordingly, Equation (8) can be derived by combining this result and the second equation in Equation (6).
Hence, it is proved that the zero dynamics of the system in Equation (6) is stable at v_out*, but it is not attractive. Remark 4: Although the bi-dc converter, when operating in the buck state, does not show non-minimum-phase characteristics for i_s, the PV and CPLs, acting as a current source, make the converter weakly damped. The system is prone to oscillation. Besides, there is no zero dynamics for v_out.
Remark 5: The zero dynamics, corresponding to output i s , is stable, but it is not attractive at v out* . Therefore, when output voltage v out is indirectly controlled through output current regulation, a steady-state voltage error would be caused depending on the initial condition.
IDA-PBC CONTROL METHOD FOR BI-DC CONVERTER
First, the PCH modelling principle and process are briefly presented. Then, the IDA-PBC control law for bi-dc converter in different operation states is derived.
IDA-PBC design
Normally, the PCH model of non-linear systems can be mathematically expressed as Equation (9).
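Equation (9) was not carried over in extraction; one common form of the PCH model, consistent with the symbol definitions that follow (for the converter, the control u enters the interconnection matrix, as in Equation (20)), is, as an assumption:

    \dot{x} = \left[I(x) - D(x)\right] \frac{\partial E(x)}{\partial x} + g(x)\,\zeta, \qquad
    y = g^{T}(x)\,\frac{\partial E(x)}{\partial x}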
where I(x) = -I^T(x) and D(x) = D^T(x) represent energy interchange and energy dissipation in the system, respectively, that is, the interconnection matrix and the dissipation matrix. D(x) is a positive semidefinite matrix, that is, D(x) ≥ 0. ζ represents the input signal, that is, the input voltage in this paper; u stands for the control signal, and y stands for the output quantity of the PCH model. The total energy function E(x) is given in Equations (10) and (11). (Table 1, which was not fully extracted, lists the requirements on E_a(x), including integrability (Equation 15) and Lyapunov stability (Equations 17 and 18).)
As we all know, the system will stabilise at a certain equilibrium point eventually because of the dissipation property (represented by D(x)).
The IDA-PBC method is adopted, making the non-linear system in Equation (9) take the PCH form in Equations (12) and (13) by modifying I(x) and D(x). D_d(x) ≥ 0 is the desired damping matrix. A desired function E_d(x) is established to replace the energy function E(x), and the control signal u is then derived. The internal energy interaction is completely changed and the dissipation property is assured. Hence, the non-linear system in Equation (9) can operate stably around the desired equilibrium x*. Here, 'd' and 'a' denote desired and assigned matrices, respectively. The desired equilibrium point in a dc system can be the bus voltage reference value. To obtain the desired energy function E_d(x), an assigned energy function E_a(x) is introduced. The relationship between E_d(x) and E_a(x) is shown in Equation (14). The requirements (Formulas 15-20) presented in Table 1 should also be satisfied.
The system interconnection is modified by I_a(x), and the system damping can be improved by D_a(x), meaning that energy shaping and oscillation suppression can be achieved. It can be seen from Equations (17) and (18) that the system converges to the equilibrium point and asymptotic stability is assured.
IDA-PBC control law of boost state
When the bi-dc converter is in the boost state and its output voltage is controlled, Equation (1) can be rewritten in PCH form as shown in Equation (19).
Meanwhile, the interconnection and dissipation matrices, I_boost(x) and D_boost(x), are obtained, with u = 1 − d. The expression of E_d(x) is shown in Equation (21).
According to Equation (15), the assigned interconnection and dissipation matrices can be obtained. Two different control laws, d_1_boost and d_2_boost, then follow from Equations (1) and (12). Theoretically, with proper injected damping (r_1 and/or r_2), d_1_boost and d_2_boost can achieve stable control of the current and the voltage, respectively. Practically, when the bi-dc converter operates as a boost circuit, it exhibits a non-minimum-phase characteristic (Remark 1). This puts a strict constraint on the controller bandwidth (Remark 3) and slows the dynamic response of the system. Without rapid convergence of the current, the voltage cannot be expected to converge quickly to the desired value (Remark 2). Therefore, it is difficult to use d_2_boost to control the voltage directly. In this paper, d_1_boost is adopted to form the IDA-PBC current loop.
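Equation (25) itself was not carried over in extraction; the sketch below implements a current-loop law of the standard IDA-PBC form that reproduces the virtual series resistance r_1 described in Section 4 (an assumption, not the authors' exact expression; dividing by the voltage reference v_ref rather than the measured v_out keeps the averaged closed loop second order, consistent with Equation (36)):

    def ida_pbc_boost_duty(i_s, v_s, i_ref, v_ref, r1):
        # IDA-PBC current-loop law for the boost state (sketch): shapes the
        # inductor dynamics towards L_s di_s/dt = -r1*(i_s - i_ref), i.e.
        # injects a virtual series resistance r1 together with the source
        # r1*i_ref of Figure 4(a); u = 1 - d_1_boost is the averaged input
        u = (v_s + r1 * (i_s - i_ref)) / v_ref
        d = 1.0 - u
        return min(max(d, 0.0), 1.0)    # duty-ratio saturation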
IDA-PBC control law of buck state
When operating in reverse, the bi-dc converter is a buck circuit whose control goal is to stabilise its input voltage; its differential equations in PCH form are given in Equation (26). I_buck(x) and D_buck(x) are obtained as shown in Equation (27). In this case, D_buck(x) no longer reflects dissipative characteristics, and its elements can be viewed as a negative resistance (in fact a current source). This agrees with the analysis of Remark 4.
I_d_buck(x) and D_d_buck(x) are defined, and according to Equation (15) the assigned interconnection and dissipation matrices can be obtained. Two different control laws then follow from Equations (6) and (12). Practically, rapid tracking of the voltage reference requires a rapid change of the battery current to complete the energy balance; without rapid current convergence, the voltage cannot be expected to converge quickly to the desired value. Therefore, d_1_buck is adopted to form the IDA-PBC current loop. Considering the requirement of the boost state at the same time, it can be seen from Equations (25) and (30) that d_1_buck = 1 − d_1_boost, which meets the requirement that the trigger pulses be complementary. According to Remarks 2 and 4, a steady-state output voltage error Δv_err (Δv_err = v_out* − v_out) would exist if v_out were controlled indirectly by the current loop. An energy outer loop combined with an adjustment factor k is introduced to eliminate Δv_err.
Energy controller design
Theoretically, the IDA-PBC controller can ensure that v_out accurately tracks its reference value, provided the system parameters and operating conditions are accurately known. In practice, however, because of the influence of the zero dynamics and uncertainties in the operating conditions, Δv_err might occur in steady state, and the system might lose stability when the load power changes suddenly (Remarks 2 and 5).
In this paper, the objective of the introduced energy controller is to eliminate Δv_err. A PI controller (k_pe + k_ie/s) is adopted to generate the power reference P_ref, using the error between the output energy and its desired value. An adjustment factor k is introduced to adjust the transient response of the outer loop. Droop control (Equation 33) is utilised to realise parallel operation of multiple converters. The control block diagram is shown in Figure 3.
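A minimal discrete-time sketch of this outer loop (the exact signal routing of Figure 3 and the placement of k were not extracted, so the wiring below, with the energy error scaled by 1/k and the current reference taken as P_ref/v_s, is an assumption; droop is omitted):

    class EnergyOuterLoop:
        def __init__(self, k_pe, k_ie, k, C_out, dt):
            self.k_pe, self.k_ie, self.k = k_pe, k_ie, k
            self.C_out, self.dt = C_out, dt
            self.integ = 0.0            # PI integrator state

        def step(self, v_out, v_ref, v_s):
            # Capacitor-energy error, scaled by the adjustment factor k
            # (larger k -> slower outer loop, as reported in Section 5.2)
            e = 0.5 * self.C_out * (v_ref**2 - v_out**2) / self.k
            self.integ += self.k_ie * e * self.dt
            p_ref = self.k_pe * e + self.integ
            return p_ref / v_s          # i_ref for the IDA-PBC inner loop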
REPRESENTATION OF IDA-PBC
The introduction of I a (x) and D a (x) reshapes the energy interaction among different components and the energy dissipation property. The system passivity is assured. D a (x) can be equivalent to a virtual resistance, improving the system damping and smoothing the transient voltage. In this section, the virtual circuit is established to illustrate the physical meanings of IDA-PBC control parameters. The location of D a (x) is determined.
The system dynamic characteristic is explained by Equations (12) and (13), which are rewritten as Equation (34). According to the physical meaning of each term, Equation (34) can be separated into real components (black colour) and virtual components (red colour), as shown in Figure 4. The term r_1·i_s* is regarded as a current-controlled voltage source v_vs(i_s*). The dissipation is modified by D_a(x), which represents the injected damping. Therefore, a virtual damping resistance r_1 is added as shown in Figure 4(a), which can tune the system damping. The voltage equations of the bi-dc converter in the boost and buck states are shown in Equations (36b) and (36c), respectively. A virtual damping resistance r_2 is added as shown in Figure 4(b). In the boost state, the term v_out*·P_const/v_out² is regarded as a voltage-controlled current source I_s1, providing power to the CPL and compensating for its negative incremental resistance. Because of D_a(x), a virtual damping resistance r_2 is located at the output side, affecting the system damping. The term v_out*/r_2 is a constant current source I_s2, supplying power to linear loads. The difference between the boost and buck states is that in the buck state the term v_out*·P_const/v_out² acts as a non-linear load absorbing power from the equivalent source P_const/v_out. Based on the analysis above, the virtual circuit is constructed as shown in Figure 4(c).
Apparently, under the influence of virtual resistance, the energy interaction and dissipation are modified, and system damping becomes adjustable. Only r 1 is considered in this paper because of the zero dynamics analysis. The expression of the closed-loop system in boost and buck state, as shown in Equations (35a) and (35b), can be acquired by combining Equations (31) with (19) and (26), respectively.
Linearizing Equation (35) around x * , the transfer function between Δi s and Δd can be obtained as shown in Equation (36).
DYNAMICS ANALYSIS AND PASSIVITY VERIFICATION
When the bi-dc converter is running in the boost state, CPL reduces the system damping and the system is prone to oscillation. Besides, the current inner loop has an RHP zero affected by the operation point (remarks 1 and 3), which negatively affects the control performance. Therefore, the dynamic characteristics in the boost state will be analysed emphatically.
Inner-loop dynamics analysis and its parameters design
From Equation (36a), the calculation formulas for the damping ratio ξ, the natural oscillation frequency ω_n and the overshoot σ can be derived; the resulting trends are shown in Figure 5(a). With r_1 increasing, ξ increases gradually and the overshoot σ decreases. In particular, when ξ = 1 there is no overshoot during the transient, and the critical value of r_1 can be calculated from Equation (38). To meet the stability requirement, the real part of the eigenvalues of the closed-loop system must be negative. Hence, the lower bound of r_1 can be obtained as shown in Equation (39).
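Equations (37)-(39) were not carried over; the standard second-order relations presumably behind Figure 5(a), for a characteristic polynomial s^2 + 2ξω_n s + ω_n^2, are:

    \sigma = \exp\!\left(\frac{-\pi\xi}{\sqrt{1-\xi^{2}}}\right), \qquad
    \lambda_{1,2} = -\xi\omega_n \pm j\,\omega_n\sqrt{1-\xi^{2}}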
With the injected damping increasing, the energy dissipation of the bi-dc converter would be theoretically enhanced, but the transient time of the inner loop would be prolonged also. The coordination between the inner and outer loop would become difficult when the inner loop is slow, which is not conducive to the stable control of the system. Therefore, the value of r 1 needs to be limited. Meanwhile, the factors affecting the system dynamics, such as switching frequency f s , output capacitance and load, should be considered, when the value for r 1 is selected.
The effectiveness of the inner-loop transfer function can be only assured within the frequency range where the bi-dc converter works. Within this frequency range, the average value of voltage and current can be calculated and the state-space average model is valid. Therefore, the location of the dominant poles is subjected to the effectiveness of the system averaged model. Hence, the upper limit of r 1 can be determined.
Generally, the behaviour of the converter is analysed well below the Nyquist frequency (i.e. 0.5f_s). To make the state-space average model more accurate, stricter restrictions can be imposed on the frequency band under study, such as 0.1f_s.
The eigenvalue (λ_1, λ_2) distribution is shown in Figure 5(b). To meet different requirements, two frequency limits are set, f_limit1 = f_s = 10 kHz and f_limit2 = 0.5f_s = 5 kHz. According to the dynamic response requirements, an appropriate r_1 can be selected within the different limits. The concept of the dominant pole is used to approximate the system: the current loop under the IDA-PBC law is equivalent to a first-order inertia element because of its small time constant. The closed-loop transfer function is then given in Equation (40).
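A numerical sketch of this eigenvalue analysis, using the reconstructed model and the sketched control law above (illustrative parameter values, not those of Table 2):

    import numpy as np

    Ls, Cout, vs, vref, P = 1e-3, 1e-3, 48.0, 100.0, 1000.0
    i_star = P / vs                  # equilibrium inductor current

    def closed_loop(x, r1):
        i, v = x
        u = (vs + r1 * (i - i_star)) / vref      # sketched IDA-PBC law
        return np.array([(vs - u * v) / Ls,
                         (u * i - P / v) / Cout])

    def jacobian(x, r1, h=1e-6):
        # Central-difference linearisation of the closed loop around x
        J = np.zeros((2, 2))
        for col in range(2):
            dx = np.zeros(2); dx[col] = h
            J[:, col] = (closed_loop(x + dx, r1)
                         - closed_loop(x - dx, r1)) / (2 * h)
        return J

    for r1 in [0.05, 0.1, 0.5, 2.0]:
        lam = np.linalg.eigvals(jacobian(np.array([i_star, vref]), r1))
        print(f"r1={r1:5.2f}  eigenvalues={np.round(lam, 1)}")
    # For these values the CPL destabilises the loop until r1 exceeds
    # roughly Ls*P/(Cout*vref**2) = 0.1, illustrating the lower bound of
    # Equation (39); larger r1 pushes the poles further into the left half-plane.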
Outer-loop dynamics analysis and its parameters design
Linearising Equation (32) and combining the result with Figure 3, the small-signal model of the energy loop is established, as shown in Figure 5(c). Because of its fast response, the inner loop is treated as a unity-gain block. The relationship between Δv_out and its reference is obtained as shown in Equation (41).
In practical applications, the transient response is prolonged when k takes a large value. According to Equations (41) and (42), the zero and pole distribution as k varies from 25 to 1500 is shown in Figure 5(d). A pair of conjugate poles near the imaginary axis gradually moves away from the point (0, 0), meaning that damping improvement is achieved. As k increases further, the conjugate poles become a pair of negative real poles. One of the negative real poles moves toward the imaginary axis but does not enter the RHP, so the system remains stable.
Passivity analysis
In this part, we perform a passivity analysis supporting the previous performance analysis, which is based on the equivalent circuit. The Nyquist plot of the output impedance is illustrated in Figure 6. With r_1 varying in a reasonable range, most of the Nyquist plot lies in the RHP, meaning that the bi-dc converter system can remain passive and stable. It can be observed from Figure 6(a) that passivity is enhanced gradually as r_1 increases within a reasonable range. In Figure 6(b), a variation of the filter inductance L_s was performed. Increasing L_s can reduce the harmonic content of the current; however, passivity is weakened and the stability margin is also reduced, although the entire system always remains passive. Therefore, an appropriate inductance value should be selected, considering both its influence on system stability and the filtering effect. Besides, Figure 6(b) highlights an application advantage of the IDA-PBC method: if passive components (or systems) are connected in parallel, the resulting system will still be passive.
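The criterion used here, Re{Z_out(jω)} ≥ 0 over all frequencies, can be checked numerically; the sketch below uses a placeholder RLC impedance, not the converter's derived output impedance:

    import numpy as np

    def is_passive(num, den, w=np.logspace(1, 5, 2000)):
        # num/den: polynomial coefficients of Z(s), highest order first;
        # passivity of a one-port requires Re{Z(jw)} >= 0 for all w
        Z = np.polyval(num, 1j * w) / np.polyval(den, 1j * w)
        return Z.real.min() >= 0.0

    # Series r-L branch in parallel with C:
    # Z(s) = (L*s + r) / (L*C*s**2 + r*C*s + 1)
    L, C, r = 1e-3, 1e-3, 0.5
    print(is_passive([L, r], [L * C, r * C, 1.0]))   # True (passive network)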
SIMULATION VERIFICATION
To test the proposed control strategy, an isolated dc distribution system similar to that in Figure 1 is modelled in MATLAB, with the loads replaced by CPLs. The simulation parameters are given in Table 2. The load suddenly increases by 1 kW at t = 1.5 s and decreases by 1 kW at t = 3.0 s. Figure 7 depicts the simulation results under conventional droop control, virtual inertia control (VIC) and the proposed IDA-PBC, respectively. When the load changes suddenly, v_out under traditional droop control is more prone to oscillation, indicating that the system is inertialess and weakly damped. When VIC is adopted and appropriate control parameters are selected, v_out transitions smoothly to the new steady state, indicating that VIC can improve the system inertia. The IDA-PBC, with fewer parameters to select, achieves the same control effect as VIC. Hence, the system damping is effectively enhanced by the proposed IDA-PBC method.
6.2 Influences of control parameters on the transient response

Figures 8(a) and (b) show the impact of r_1 and k on the dynamic response when the system is disturbed. With r_1 or k increasing, the dc bus voltage oscillation is significantly suppressed; the energy dissipation property and the damping performance are clearly enhanced, keeping the system stable even when a disturbance occurs. It can be observed that the control parameters vary over a large interval. The converter system is strongly non-linear, so there are cases where the control parameters must be changed over a wide range to obtain different control effects, and cases where the control effect changes greatly when the parameters change over a small interval.
Figures 8(c) and (d) display the necessity of setting an upper limit for r_1. When r_1 takes a small value, the harmonic content is low and the total harmonic distortion (THD) is about 1.64%. The harmonic content rises appreciably with a large r_1, with a THD of about 16.72%. Therefore, when r_1 is determined, the trade-off between system damping and harmonic content should be considered.
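For reference, a minimal sketch of how such THD figures can be estimated from a simulated current waveform (FFT-based; assumes the record contains an integer number of fundamental periods):

    import numpy as np

    def thd(signal, fs, f0):
        # Ratio of the RMS of harmonics 2..N to the fundamental amplitude
        spec = np.abs(np.fft.rfft(signal)) / len(signal)
        freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
        fund = spec[np.argmin(np.abs(freqs - f0))]
        harm = [spec[np.argmin(np.abs(freqs - n * f0))]
                for n in range(2, int(fs / (2 * f0)))]
        return np.sqrt(np.sum(np.square(harm))) / fund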
CONCLUSION
An IDA-PBC method, guided by zero dynamics analysis, is proposed to improve the transient response of the dc distribution system. The control law derivation, implementation, dynamic analysis, parameter design and passivity verification are studied in detail, and simulation demonstrates its validity. The main conclusions are:
1. The RHP zero reduces the stability margin of the bi-dc converter and imposes a constraint on the controller design; a steady-state error will be caused.
2. An IDA-PBC method is achieved via modifying the energy interaction and dissipation. It has better performance and comparatively few tuning parameters, and the steady-state error can be eliminated by an energy controller.
3. A virtual circuit that explains the physical significance of the control parameter is established. The reasonable range of the control parameters is obtained from the dynamic response: a larger r_1 increases the harmonic content, and a larger k prolongs the transient response, degrading stable operation.
4. Impedance-based passivity analysis verifies the performance of the proposed IDA-PBC method, which has the same control effect as VIC.
Future work will focus on the adaptive adjustment of the control parameters.
ACKNOWLEDGEMENT
This work is supported in part by China Scholarship Council (No. 201906130196).
|
2021-09-01T15:10:31.454Z
|
2021-06-24T00:00:00.000
|
{
"year": 2021,
"sha1": "ba8cd32ba7a02d2931c1bbcb0861bb5ca30e51fb",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1049/gtd2.12169",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a0500d764df7abbc66681c79d72ab3aafbc5174d",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
252735846
|
pes2o/s2orc
|
v3-fos-license
|
Chili-supplemented food decreases glutathione-S-transferase activity in Drosophila melanogaster females without a change in other parameters of antioxidant system
ABSTRACT Objectives Many plant-derived anti-aging preparations influence antioxidant defense system. Consumption of food supplemented with chili pepper powder was found to extend lifespan in the fruit fly, Drosophila melanogaster. The present study aimed to test a connection between life-extending effect of chili powder and antioxidant defense system of D. melanogaster. Methods Flies were reared for 15 days in the mortality cages on food with 0% (control), 0.04%, 0.12%, 0.4%, or 3% chili powder. Antioxidant and related enzymes, as well as oxidative stress indices were measured. Results Female flies that consumed chili-supplemented food had a 40–60% lower glutathione-S-transferase (GST) activity as compared with the control cohort. Activity of superoxide dismutase (SOD) was about 37% higher in males that consumed food with 3% chili powder in comparison with the control cohort. Many of the parameters studied were sex-dependent. Conclusions Consumption of chili-supplemented food extends lifespan in fruit fly cohorts in a concentration- and gender-dependent manner. However, this extension is not mediated by a strengthening of antioxidant defenses. Consumption of chili-supplemented food does not change the specific relationship between antioxidant and related enzymes in D. melanogaster, and does not change the linkage of the activities of these enzymes to fly gender.
Introduction
Beneficial effects of medicinal plant preparations on animal health are often associated with their secondary metabolites, that are mostly phenol-containing compounds. Special attention is paid to the antioxidant properties of plant phenols as well as to their ability to directly or indirectly activate cellular antioxidant systems [1]. The activation occurs predominantly by influencing specific transcription factors via direct interaction with them or via affecting their posttranscriptional modifications, crucial for activation or inhibition [1].
It was recently shown that capsaicin, a derivative of the phenol-containing compound vanillylamine and the pungent principal component of chili pepper, prolongs lifespan in the fruit fly, Drosophila melanogaster, model [2]. We have conducted a comprehensive study [3] to determine whether the same capability to extend lifespan is attributable to chili powder, and not only to pure capsaicin. As in the above study of capsaicin effects [2], we used D. melanogaster as a tractable model, very suitable for a quick and inexpensive study.
Many plant-derived life-prolonging preparations influence antioxidant defense systems [4]. On one hand, this could be a side effect that is not directly connected with the mechanism of lifespan extension. Indeed, many signaling pathways that control cell senescence also have parallel effects on the activity of the antioxidant defense system. On the other hand, lifespan extension due to activation of antioxidant defenses is in good agreement with the free radical theory of aging, which is, despite a number of controversies, still accepted by many researchers [5][6][7][8].
The present study is focused on the influence of chili-supplemented food on the activity of the antioxidant defense system of D. melanogaster. We chose a set of markers that allows a comprehensive evaluation of the operation of antioxidant defenses. In particular, we measured the activities of the first-line antioxidant enzymes, catalase and superoxide dismutase (SOD). Other important enzymes, which support the operation of multiple cellular peroxidases, are glucose 6-phosphate dehydrogenase (G6PDH) and isocitrate dehydrogenase (IDH). These enzymes reduce nicotinamide adenine dinucleotide phosphate (NADP+), yielding its protonated form, NADPH (Figure 1). NADPH is in turn used for the reduction of thiol-containing antioxidants (e.g. glutathione, thioredoxin, and glutaredoxin) that are oxidized by peroxidases. The expression of both NADPH-reducing enzymes was shown to be regulated by the transcription factors responsible for antioxidant defense [9]. We also measured glutathione-S-transferase (GST) activity. In most cases, GST conjugates oxidized molecules, such as lipids, with glutathione for their further excretion from the organism. Among oxidative stress indices, we chose low- and high-molecular-mass thiol-containing compounds, lipid peroxides, carbonylated proteins, and the activity of aconitase, which contains superoxide-sensitive iron-sulfur clusters [10].
Fly husbandry
Fruit flies of the Canton S line were cultured in a standard medium containing 5% sucrose, 5% yeast, 6% cornmeal, 1% agar, 0.18% methyl 4-hydroxybenzoate (methylparaben), and 0.6% propionic acid, at 25°C, 60% relative humidity, and a 12 h:12 h light/dark cycle. Newly eclosed flies were transferred to fresh medium, where they were kept for four days until the beginning of the experiment. Immediately before the experiment, flies were separated by sex under brief (approx. 10 min) carbon dioxide anaesthesia and placed into mortality cages at a density of approx. 150 individuals per cage [3]. All females used in the experiments were mated. Food was changed every other day. Experimental flies were reared for 15 days in the mortality cages on food with 10% sucrose, 5% yeast, 1.2% agar, and 0.18% methylparaben, supplemented with 0.04%, 0.12%, 0.4%, or 3% chili (Capsicum frutescens L.) powder. The powder was mixed into freshly prepared medium cooled to 70°C. The food of the control group had the same composition as that of the experimental groups but did not contain chili powder.
Resistance to oxidative stress
To determine resistance to oxidative stress, 15 flies were placed in 15 ml glass tubes with a napkin (12 cm × 12 cm) soaked with 1.5 ml of 20 mM menadione sodium bisulfite in 5% sucrose solution [11]. The number of dead flies was recorded every day at 9 AM, 3 PM, and 9 PM.
Trolox equivalent antioxidant capacity
One hundred milligrams of dry powdered Capsicum frutescens was added to different concentrations of methanol in water (2 mL): 25%, 50%, or 80% (v/v). The mixtures were heated for 60 min at 40°C. The samples were centrifuged for 5 min at 20000×g at 4°C. The supernatants were used for spectrophotometric measurement of antioxidant properties of the powder.
The cation radical of 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS•+) was generated by reacting 7 mM ABTS with 2.45 mM potassium persulfate via incubation at room temperature (23°C) in the dark for 12-16 h. Subsequently, the ABTS•+ solution was diluted to reach an absorbance of 1.000 ± 0.200 at 734 nm. Then, 10 µL of chili powder extract was mixed with 200 µL of the prepared ABTS•+ solution. The mixture was shaken at room temperature and the absorbance reading was taken at 734 nm after 6 min.
All measurements were recorded on a Synergy H1 microplate reader (BioTek, Winooski, VT, USA). The standard curve was prepared using different concentrations of Trolox. The results are expressed as Trolox equivalents (TE) per 100 g of the extract (mmol TE/100 g), with values presented as mean ± standard deviation.
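To make the conversion from absorbance readings to Trolox equivalents concrete, the following minimal Python sketch fits a linear standard curve and back-calculates TE per 100 g of powder. The standard-curve points, names, and the linearity assumption are ours for illustration; they are not the study's data.

import numpy as np

# Standard curve: absorbance decrease at 734 nm vs. Trolox concentration (mM);
# the points below are invented for illustration.
trolox_mM  = np.array([0.0, 0.1, 0.2, 0.4, 0.8])
delta_A734 = np.array([0.00, 0.11, 0.22, 0.43, 0.85])
slope, intercept = np.polyfit(trolox_mM, delta_A734, 1)

def trolox_equivalents(delta_a, dilution=21.0, extract_g_per_ml=0.05):
    """mmol Trolox equivalents per 100 g powder from one extract reading."""
    conc_mM = (delta_a - intercept) / slope     # TE concentration in the well
    conc_mM *= dilution                         # 10 uL extract in 210 uL total
    return 100.0 * (conc_mM / 1000.0) / extract_g_per_ml   # mmol TE / 100 g

print(trolox_equivalents(0.35))   # 100 mg powder per 2 mL solvent -> 0.05 g/mL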
Enzymatic activities and oxidative stress indices
Flies were homogenized using a Potter-Elvehjem glass/glass homogenizer (1:10 w/v) in 50 mM potassium phosphate buffer (pH 7.5) containing 0.5 mM ethylenediaminetetraacetic acid (EDTA) and 1 mM phenylmethylsulfonyl fluoride and centrifuged (16000×g, 15 min, 4°C) in an Eppendorf 5415R centrifuge. Supernatants were collected and used for the determination of enzymatic activities. Total protein content in whole fruit fly bodies was measured by the Bradford method with bovine serum albumin as the standard. The activities of superoxide dismutase (SOD, EC 1.15.1.1), glucose 6-phosphate dehydrogenase (G6PDH, EC 1.1.1.49), NADP-dependent isocitrate dehydrogenase (IDH, EC 1.1.1.42), and glutathione-S-transferase (GST, EC 2.5.1.18) were measured spectrophotometrically as described earlier [12]. Briefly, SOD activity was assayed at 406 nm by the ability of the enzyme to inhibit the oxidation of quercetin by superoxide anion-radical produced in the redox initiator system of N,N,N′,N′-tetramethylethylenediamine (TEMED) in an alkaline buffer (30 mM Tris-HCl, 0.5 mM EDTA, 0.8 mM TEMED, 0.05 mM quercetin, pH 10.0). The reaction was run with 6-8 different volumes (2-100 μl) of supernatant prepared as described above from 40-50 flies. One unit of SOD activity was defined as the amount of enzyme (per milligram of protein) that inhibits quercetin oxidation by 50% of the maximum. The activity of G6PDH was measured at 340 nm by the rate of NADPH formation in 1 ml of a mixture containing 50 mM KPi buffer (pH 7.5), 0.5 mM EDTA, 5 mM MgCl2, 0.2 mM NADP+, and 2 mM glucose 6-phosphate. The activity of IDH was measured on the same principle as for G6PDH, in a buffer containing 50 mM KPi (pH 7.5), 2 mM MgCl2, 1 mM NADP+, and 0.5 mM isocitric acid. The reaction was started by adding 20 μl of the supernatant to 980 μl of the buffer. The extinction coefficient of 6.22 mM−1 cm−1 for NADPH was used. The activity of GST was assayed at 340 nm by the formation of an adduct between reduced glutathione (GSH) and 1-chloro-2,4-dinitrobenzene (CDNB). The reaction mixture contained 50 mM KPi buffer (pH 7.5), 0.5 mM EDTA, 5 mM GSH, 1 mM CDNB, and 1-5 μl of the supernatant in a final volume of 1 ml. The reaction was launched by the sequential addition of CDNB and supernatant; blanks were measured without CDNB. The extinction coefficient of 9.6 mM−1 cm−1 for 1-S-glutathionyl-2,4-dinitrobenzene was used for calculation of the activity.
The activities of catalase, the levels of protein carbonyls, and lipid peroxides were assayed as described by Lushchak et al. [13], and aconitase activity was determined as described by Lozinsky et al. [14]. Briefly, catalase activity was measured spectrophotometrically at 240 nm by the decrease in the concentration of hydrogen peroxide, using an extinction coefficient of 0.0394 mM−1 cm−1 for hydrogen peroxide. Carbonyl derivatives of proteins were detected by their reaction with 2,4-dinitrophenylhydrazine, yielding colored hydrazones whose concentration was quantified spectrophotometrically at 370 nm; the molar extinction coefficient of 22 mM−1 cm−1 for dinitrophenylhydrazones was used for calculations, and values were expressed as nanomoles per milligram of protein. Lipid peroxide (LOOH) content was determined at 580 nm, based on the absorption of light at this wavelength by a complex of ferric iron with xylenol orange (formed in the reaction between LOOH, ferrous iron, and xylenol orange). For this assay, flies were homogenized in 10 volumes of cold (∼5°C) 96% (vol.) ethanol and centrifuged for 5 min at 13000×g, and the supernatants were used. The levels of lipid hydroperoxides were expressed as cumene hydroperoxide equivalents per gram of wet weight of fruit flies. Aconitase (EC 4.2.1.3) activity was measured at 240 nm by the decrease in the concentration of cis-aconitate; the molar extinction coefficient used for calculations was 0.0037 mM−1 cm−1.
Figure 1. Explanation of the set of parameters measured in the study. Different types of reactive oxygen species (ROS), namely superoxide anion-radical, hydrogen peroxide, and hydroxyl radical, may hit various targets. Iron-sulfur (Fe-S) clusters present in a number of enzymes, including cytosolic and mitochondrial aconitase (ACO), are a well-established target of superoxide. Enzymes that reduce NADP+ (e.g. G6PDH and IDH) may provide NADPH for thioredoxin reductases or for the synthesis of Fe-S clusters by the Fe-S cluster assembly (ISA) machinery. All types of ROS are able to oxidize thiol-containing compounds (RSH) and lipids, yielding disulfides (RSSR) and lipid peroxides (LOOH), respectively. SOD and catalase (CAT) convert ROS into less toxic species, whereas glutathione-S-transferase (GST) detoxifies lipid peroxides, preventing the peroxidation chain reaction.
The levels of high- and low-molecular-mass thiol-containing compounds were measured as described by Lushchak et al. [15]. Free thiols were measured spectrophotometrically at 412 nm based on their reaction with 5,5′-dithiobis(2-nitrobenzoic acid), which yields the 2-nitro-5-thiobenzoate anion. Total thiol content (the sum of low- and high-molecular-mass thiol-containing compounds) was measured in supernatants prepared identically to those used for the measurement of enzyme activities. For measurement of the content of low-molecular-mass thiol-containing compounds (LM-SH), supernatants were treated with 10% TCA (final concentration) and centrifuged for 5 min at 13000×g, and the final supernatants were used for the assay. The molar extinction coefficient of 14 mM−1 cm−1 was used for calculations. Thiol concentrations were expressed as micromoles of SH-groups per gram of fly wet weight.
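All of the kinetic assays above reduce to the same Beer-Lambert calculation: the rate of absorbance change divided by the extinction coefficient and path length gives the rate of concentration change. A generic Python sketch (function and parameter names, and the example numbers, are ours) for a NADPH-linked assay such as G6PDH:

def specific_activity(dA_per_min, eps_mM_cm, path_cm, assay_ml,
                      sample_ml, protein_mg_per_ml):
    """Specific activity in micromol/min per mg protein."""
    # Beer-Lambert: dC/dt (mM/min) = (dA/dt) / (eps * l); 1 mM = 1 micromol/mL
    dC_mM_per_min = dA_per_min / (eps_mM_cm * path_cm)
    umol_per_min = dC_mM_per_min * assay_ml
    return umol_per_min / (sample_ml * protein_mg_per_ml)

# G6PDH example: 20 uL supernatant in a 1 mL assay, eps(NADPH) = 6.22 mM^-1 cm^-1
print(specific_activity(0.062, 6.22, 1.0, 1.0, 0.020, 5.0))  # ~0.1 umol/min/mg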
Statistical analysis
Experimental data are presented as mean ± standard error. Statistical analysis was performed in R, using functions implemented in the packages 'base', 'rstatix', 'ggplot2', and 'ggfortify'. The datasets were compared using pairwise t-tests followed by adjustment of p-values by the Benjamini-Hochberg procedure [16]. Mortality curves were compared by the log-rank test in the R package 'survminer', followed by adjustment of p-values by the Benjamini-Hochberg procedure. Differences between sample means, and correlations, that gave adjusted p-values less than 0.05 were considered significant.
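The comparison procedure can also be re-implemented outside R; below is a sketch in Python (scipy/statsmodels) of pairwise t-tests with Benjamini-Hochberg adjustment, offered only as an equivalent of the workflow described above, with hypothetical inputs.

from itertools import combinations
from scipy import stats
from statsmodels.stats.multitest import multipletests

def pairwise_bh(groups, alpha=0.05):
    """groups: dict mapping diet group -> array of measurements."""
    pairs, pvals = [], []
    for a, b in combinations(groups, 2):
        _, p = stats.ttest_ind(groups[a], groups[b])
        pairs.append((a, b))
        pvals.append(p)
    reject, p_adj, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return {pair: (p, bool(sig)) for pair, p, sig in zip(pairs, p_adj, reject)}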
Results
Fruit fly cohorts were reared on medium containing a powder made from dry fruits of chili pepper, Capsicum frutescens. The chili powder polyphenols exhibited substantial antioxidant activity of about 2.8-4.1 mmol of Trolox (a water-soluble analog of vitamin E) equivalents per 100 g of the powder (Figure 2). The maximum amount of antioxidant compounds was extracted from the powder by 80% methanol at 40°C; the other methanol concentrations extracted up to 31% less antioxidant substances (Figure 2).
Previously, we have shown that consumption of chili-supplemented food extended mean lifespan in D. melanogaster cohorts of both sexes by 10%-14%, although the effect was more pronounced in females [3]. Significant lifespan extension in males was caused by food supplemented with 0.04% and 0.12% chili powder. In female cohorts, lifespan was prolonged the most by consumption of food supplemented with 0.12% and 0.4% chili powder. On the other hand, consumption of food supplemented with 3% chili powder caused significantly higher mortality among males. Therefore, we chose fruit fly cohorts fed on food with the above concentrations of chili powder for further investigations.
The concentrations of chili powder that prolonged lifespan in D. melanogaster only weakly affected antioxidant enzymes and markers of oxidative stress. In particular, consumption of food with the highest concentration of chili powder resulted in about 37% higher activity of superoxide dismutase (SOD) in males as compared with the control cohort (Figure 3(A)). However, SOD activity was not substantially affected by consumption of chili-supplemented food in females. Interestingly, females reared on the food with 3% chili powder had slightly higher resistance to the redox-cycling, superoxide-generating compound menadione, whereas flies of the other groups did not differ from the control in their resistance (Figure 4).
The chili-supplemented food did not affect catalase, isocitrate dehydrogenase (IDH), or aconitase activities, and conferred only minor changes on glucose 6-phosphate dehydrogenase (G6PDH) activity (Table 1). Interestingly, consumption of chili-supplemented food led to a significant drop in glutathione-S-transferase (GST) activity in females, to about 40% to 60% of that in the control cohort (Figure 5(B)). The levels of oxidative stress markers, such as high- and low-molecular-mass thiol-containing compounds, protein carbonyls, and lipid peroxides (LOOH), showed only minor changes (Table 1).
It is remarkable that nearly all parameters studied showed a dependence on fly sex, which in many cases was not significantly affected by consumption of chili-supplemented food. This sex dependence was observed for catalase, G6PDH, IDH, GST, aconitase, and protein carbonyls. A strong sex dependence of antioxidant defenses in general was also found by the principal component analysis (Figure 6). The sex dependence, along with coordinated minor changes in some parameters, resulted in strong significant correlations between some parameters. In particular, strong correlations were found between SOD and catalase, catalase and G6PDH, IDH and catalase, as well as between IDH and SOD (Table 2). Strong correlations were also observed between G6PDH and IDH (Figure 7(A)); between aconitase and catalase, G6PDH, and IDH; and between protein carbonyls and aconitase (Figure 7(B)), catalase, and G6PDH.
Discussion
Capsaicin, a constituent of chili pepper, was found to prolong lifespan in the fruit fly Drosophila melanogaster [2]. Our recent study revealed that fruit flies reared on food supplemented with chili powder live longer than their counterparts on the control diet [3]. The effect can be accounted for by capsaicin as well as by other phenolic substances in chili pepper.
As we demonstrated in that study, the maximum amount of phenolic compounds that could be extracted from the powder we use, by 80% methanol at 40°C, was 942 mg of gallic acid equivalents per gram of the preparation [3]. Current data (Figure 2) show that the Trolox equivalent antioxidant capacity of different methanolic extracts of our chili powder correlates with the amount of polyphenols present in the powder. It is comparable with the data obtained in other studies and indicates a moderate amount of antioxidant substances [17]. Chili powder is rich in carotenoids, which are established antioxidants. However, the Trolox antioxidant capacity is likely related to the amount of polyphenols present in the powder, since these parameters often correlate positively [18].
Figure 4. Resistance to the redox-cycling compound menadione in fruit flies reared for 15 days on the control diet or diets supplemented with different concentrations of powder from dry chili fruits: A, males; B, females. Data are means ± SEM (mortality of cohorts of 29-60 individuals was assayed). c Significantly different from the control, P < 0.05. Groups were compared using a pairwise log-rank test implemented in the R package 'survminer' followed by Benjamini-Hochberg correction.
The maximum amounts of capsaicinoids (capsaicin, dihydrocapsaicin, and nordihydrocapsaicin) could be extracted under the same extraction conditions [3]. Mixing dry powder with hot (approximately 70°C) fruit fly medium allows only partial extraction of phenolic compounds and capsaicinoids. It was shown that extraction with hot water gave 4-6 times smaller amounts of extracted capsaicinoids than could be achieved with methanol [19,20].
Of note, our previous study showed that flies reared on the control medium and on chili-supplemented food consumed approximately equal amounts of food, 40-60 nl/fly/hour [3]. However, the female cohort on the medium with 3% chili powder consumed on average twice as much medium as counterparts on the control medium [3].
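These intake figures allow a rough estimate of the chili dose ingested per fly. The Python sketch below takes the midpoint of the reported intake and assumes a medium density of about 1 g/ml; both simplifications are ours, for illustration only.

# Midpoint of the reported intake, and an assumed medium density of ~1 g/ml.
intake_nl_per_h = 50
medium_g_per_day = intake_nl_per_h * 24 * 1e-6   # nl -> ml, ~1 g/ml

for powder_pct in (0.04, 0.12, 0.4, 3.0):
    dose_ug = medium_g_per_day * (powder_pct / 100.0) * 1e6   # g -> ug
    print(f"{powder_pct}% diet -> ~{dose_ug:.1f} ug chili powder/fly/day")

For the 3% diet this gives roughly 36 µg of powder per fly per day under these assumptions.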
The extension of lifespan in fruit fly cohorts by regular consumption of chili-supplemented food can be conferred by the activation of well-known pro-survival pathways. In particular, a number of plant preparations activate the transcription factor FOXO (forkhead box O), increasing the expression of proteins that are necessary to overcome stress conditions [1,21]. Lifespan extension is also achieved by the inhibition of mTOR (mechanistic target of rapamycin) kinase, which activates autophagy [22]. At the same time, it is known that many life-prolonging plant preparations directly or indirectly affect antioxidant responses [1,6]. Antioxidant defenses could be activated by the phenolic compounds of chili pepper via the transcription factor Nrf2 (nuclear factor-erythroid factor 2-related factor 2). However, our current study shows that chili-supplemented food affects only two antioxidant enzymes, namely superoxide dismutase (SOD) in males (Figure 3) and glutathione-S-transferase (GST) in females (Figure 5).
Table 1. Oxidative stress markers: protein carbonyls (CP), high- and low-molecular-mass thiols (HM-SH, LM-SH), lipid peroxides (LOOH), and activities of antioxidant and related enzymes (catalase, aconitase, isocitrate dehydrogenase (IDH), and glucose 6-phosphate dehydrogenase (G6PDH)) in fruit flies reared for 15 days on the control diet or diets supplemented with different concentrations of powder from dry Capsicum frutescens fruits. Significantly different from the corresponding group of males, P < 0.05.
Whereas SOD activity was boosted in male cohorts reared on food with 3% chili powder, GST activity was decreased in females on all chili-supplemented diets. It is worth noting that 3% was previously found to be a rather life-shortening concentration of chili powder [3]. Overexpression of SOD was found to prolong lifespan in fruit flies [8,23-25]. However, there is controversy regarding the life-prolonging effects of SOD in other models [8]. A higher SOD activity makes sense only with a concomitant increase in catalase activity, because the product of the SOD reaction is hydrogen peroxide, a type of ROS that participates in the Fenton reaction, yielding hydroxyl radical [26,27]. In turn, hydroxyl radicals are strong oxidizers that react with proteins and fatty acids, causing loss of function in proteins and lipid membranes, respectively [26]. We cannot draw a direct connection between consumption of chili-supplemented food, SOD activity, and lifespan in D. melanogaster. Likely, capsaicin and/or bioactive phenol-containing compounds of chili powder affect signaling pathways that regulate SOD activity at the transcriptional or posttranslational level. The appearance of the effect only in males may imply that this signaling pathway plays a particularly important role in that sex. Genes that encode antioxidant enzymes, including SOD, are targets of the transcription factors FOXO and Nrf2 in mammals [21,28,29]. However, much less is known about the regulation of antioxidant enzyme expression in fruit flies. Earlier, it was found that SOD expression is indirectly suppressed by the dual-specificity kinase Doa (darkener of apricot) in D. melanogaster as well as in cultured human cells [30]; the activity of SOD increased in flies with mutated Doa. In turn, the LAMMER kinase, homologous to Doa, was found to be activated by mTOR kinase in the budding yeast Saccharomyces cerevisiae [31].
In addition, Doa was found to play a role in sex determination, affecting the production of sex pheromones and courtship behavior [32].
Note to Table 2: asterisks denote statistically significant correlations.
We expected to see higher activities of all antioxidant enzymes in flies reared on chili-supplemented food. However, the activities of many of the enzymes we studied showed only subtle differences between flies that consumed chili-supplemented food and controls. Moreover, GST activity was lower in females fed food containing chili powder than in control females. Most of the genes that encode the different isoenzymes of GST in D. melanogaster are regulated by Nrf2 as well as by nuclear receptors such as DHR96 [33-35] or Seven-up [36]. In turn, flavonols, such as kaempferol, are able to inhibit nuclear receptors [37]. Moreover, nuclear receptors are expressed in a sex-specific manner, respond to sex-specific steroid hormones, and exert sex-specific effects [36,38,39]. It was demonstrated that a number of GST isoenzymes are regulated in a sex-dependent manner in D. melanogaster [40]. This may explain the female-specific response of GST activity to regular consumption of chili-supplemented food.
Our study also shows a pronounced sex dependence of antioxidant defenses in D. melanogaster. We noticed this feature in our previous studies, finding differences between males and females in the activities of catalase and G6PDH, as well as in the levels of protein thiols [12,13,15]. Our current data also show a sex bias in the activities of IDH, GST, SOD, and aconitase. A higher catalase activity in males was also reported by other researchers [41,42], although the opposite has also been observed [43]. The gene that encodes G6PDH is located on the X-chromosome, and the higher G6PDH activity in males can be explained by overcompensation of gene dosage [44,45]. Other cytosolic enzymes, such as NADP-dependent isocitrate dehydrogenase and cytosolic aconitase (also known as iron regulatory protein), are not directly associated with the X-chromosome. However, the integrity of these enzymes may depend on sex-linked enzymes such as catalase and G6PDH. The set of correlations (Table 2) and the principal component analysis (Figure 6) provide grounds for such a dependence. In particular, we see strong correlations between all sex-linked indices of our study (Table 2). In turn, oxidative stress indices such as LOOH, protein thiols, and low-molecular-mass thiol-containing compounds did not correlate with other parameters (Table 2). Hence, most significant correlations were conferred by the linkage of the measured parameters to fly sex. Nevertheless, the observed correlations seem logical, as we explained in our previous studies [15,46-49]. In particular, G6PDH and IDH activities correlated with catalase activity, which may imply a role of catalase in protecting these enzymes from oxidative modification [15,46-49]. Alternatively, these enzymes could be co-regulated with catalase and be targets of the same transcriptional regulator. Aconitase, as an enzyme that contains iron-sulfur clusters (sensitive to oxidation), is also a well-established marker of oxidative stress [10]. Along with IDH and G6PDH, it can be oxidatively modified by ROS or protected by the first-line antioxidant enzymes, SOD and catalase (Figure 8). On the other hand, the operation of G6PDH and IDH maintains NADPH pools and, in turn, NADPH is required for the assembly of iron-sulfur clusters [50,51].
We conclude that regular consumption of chili-supplemented food extends lifespan in fruit fly cohorts in a concentration- and sex-dependent manner. However, this extension is not mediated by a strengthening of antioxidant defenses. Fruit flies that consumed chili-supplemented food showed differences only in the activities of SOD and GST, and the minor changes observed imply that the effect of chili on lifespan has only a weak connection with antioxidant defense. Furthermore, consumption of chili-supplemented food changes neither the specific relationships between antioxidant and related enzymes in D. melanogaster nor the linkage of the activities of these enzymes to fly sex.
Figure 8. Generalized scheme explaining the relationships between antioxidant enzymes (SOD and catalase), related enzymes (G6PDH, IDH), and potential oxidative stress markers (protein carbonyls) observed in the study and partially confirmed by regression analysis. The cellular ROS-producing machinery generates superoxide, which SOD converts into the less toxic hydrogen peroxide. However, SOD protects biomolecules from oxidation by ROS only in conjunction with catalase, since the latter prevents the potential formation of hydroxyl radicals in the reaction between superoxide and hydrogen peroxide. G6PDH, IDH, and aconitase were shown to be sensitive to oxidative modification and can therefore be oxidized by hydroxyl radicals and contribute to the pool of carbonylated proteins.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Funding
This work was supported by a grant from the National Research Foundation of Ukraine (2020.02/0118) to Prof. Maria Bayliak.
Data availability statement
The findings of this study are available from the corresponding author upon request.
Checking the dark matter origin of 3.53 keV line with the Milky Way center
We detect a line at 3.539 +/- 0.011 keV in the deep exposure dataset of the Galactic Center region, observed with XMM-Newton. The dark matter interpretation of the signal observed in the Perseus galaxy cluster, the Andromeda galaxy [1402.4119], and in the stacked spectra of galaxy clusters [1402.2301], together with the non-observation of the line in blank-sky data, puts both lower and upper limits on the possible intensity of the line in the Galactic Center data. Our result is consistent with these constraints for a class of Milky Way mass models presented previously by observers, and would correspond to a radiatively decaying dark matter lifetime tau_dm ~ (6-8) x 10^{27} sec. Although it is hard to exclude an astrophysical origin of this line based on the Galactic Center data alone, this is an important consistency check of the hypothesis, which encourages checking it with the additional observational data expected by the end of 2015.
(Dated: August 12, 2014) We detect a line at 3.539 ± 0.011 keV in the deep exposure dataset of the Galactic Center region, observed with XMM-Newton. Although it is hard to completely exclude an astrophysical origin of this line with the Galactic Center data alone, the dark matter interpretation of the signal observed in the Perseus galaxy cluster and the Andromeda galaxy [1] and in the stacked spectra of galaxy clusters [2] is fully consistent with these data. Moreover, the Galactic Center data support this interpretation, as the line is observed at the same energy and has a flux consistent with expectations for the Galactic dark matter distribution in a class of Milky Way mass models.
Recently, two independent groups [1,2] reported the detection of an unidentified X-ray line at an energy of 3.53 keV in long-exposure X-ray observations of a number of dark matter-dominated objects. The work [2] observed this line in a stacked XMM spectrum of 73 galaxy clusters spanning a redshift range 0.01−0.35, and separately in subsamples of nearby and remote clusters. Ref. [1] found this line in the outskirts of the Perseus cluster and in the central 14′ of the Andromeda galaxy. The global significance of the detection of the same line in the datasets of Ref. [1] is 4.3σ (taking into account the trial factors); the signal in [2] has a significance above 4σ on completely independent data. The position of the line is correctly redshifted between galaxy clusters [2] and between the Perseus cluster and the Andromeda galaxy [1]. In a very long exposure blank-sky observation (15.7 Msec of cleaned data) the feature is absent [1]. This makes an instrumental origin of this feature (e.g. an unmodeled wiggle in the effective area) unlikely.
To identify this spectral feature with an atomic line in galaxy clusters, one should assume a strongly super-solar abundance of potassium or some anomalous argon transition [2]. Moreover, according to the results of [1], this should be true not only in the center of the Perseus cluster, considered in [2], but also (i) in its outer parts, up to at least 1/2 of its virial radius, and (ii) in the Andromeda galaxy. This result triggered significant interest, as it seems consistent with a long-sought signal from decaying dark matter. Many particle physics models predicting such properties of dark matter particles have been put forward, including the sterile neutrino, the axion, the axino, and many others. If the interaction of dark matter particles is weak enough (e.g. much weaker than that of the Standard Model neutrino), they need not be stable, as their lifetime can exceed the age of the Universe. Nevertheless, the enormous number of dark matter particles makes the signal even from such rare decays strong enough to be detectable.
The omnipresence of dark matter in galaxies and galaxy clusters opens the way to check the decaying dark matter hypothesis [58]. The decaying dark matter signal is proportional to the column density S_DM = ∫ ρ_DM dℓ, the integral of the DM density distribution along the line of sight (unlike the case of annihilating dark matter, where the signal is proportional to ∫ ρ_DM² dℓ). As long as the angular size of an object is larger than the field of view, the distance to the object drops out, which means that distant objects can give a signal comparable to nearby ones [59,60]. The signal also does not decrease with distance from the centres of objects as fast as, e.g., in the case of annihilating DM, where the expected signal is concentrated towards the centers of DM-dominated objects. This in principle allows one to check the dark matter origin of a signal by comparison between objects and/or by studying the angular dependence of the signal within one object, rather than trying to exclude all possible astrophysical explanations for each target [61-64].
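As an illustration of the column density entering the decay signal, the following Python sketch evaluates S_DM = ∫ ρ_DM dℓ for a generic NFW profile; the parameter values are placeholders chosen for illustration, not the mass models discussed in this paper.

import numpy as np

RHO_S, R_S, R_SUN = 0.01, 20.0, 8.0   # M_sun/pc^3, kpc, kpc (placeholders)

def rho_nfw(r_kpc):
    x = r_kpc / R_S
    return RHO_S / (x * (1.0 + x) ** 2)

def column_density(psi_deg, l_max=200.0, n=20000):
    """S_DM in M_sun/pc^2 along a line of sight at angle psi from the GC."""
    l = np.linspace(1e-3, l_max, n)   # distance along the line of sight, kpc
    r = np.sqrt(R_SUN**2 + l**2 - 2.0 * R_SUN * l * np.cos(np.radians(psi_deg)))
    return np.trapz(rho_nfw(r), l) * 1e3   # kpc -> pc in the length unit

print(column_density(14 / 60))   # toward the inner 14' of the GC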
However, in reality, after years of systematic searches for this signal [61], any candidate line can be detected only at the edge of the method's sensitivity. Therefore, to cross-check the signal one needs comparably long exposure data. Moreover, even a factor-of-a-few uncertainty in the expected signal, which is very hard to avoid, means that significantly more statistics are needed than in the initial data set where the candidate signal was found.
So far, the DM interpretation of the signals of [2] and [1] is consistent with the data: the signal has the correct scaling between the Perseus cluster, the Andromeda galaxy, and the upper bound from its non-detection in the blank-sky data [1], and between different subsamples of clusters [2]. The mass and lifetime of the dark matter particle implied by the DM interpretation of the results of [1] are consistent with the results of [2]. The signal also has radial surface brightness profiles in the Perseus cluster and the Andromeda galaxy [1] that are consistent with the dark matter distribution. However, the significance of these results is not sufficient to confirm the hypothesis; they can be considered only as successful sanity checks. More results are clearly needed to perform the convincing checking program described above.
A classical target for DM searches is the centre of our Galaxy. Its proximity allows one to concentrate on the very central part, and therefore, even for decaying DM, one can expect a significant gain in the signal if the DM distribution in the Milky Way happens to be steeper than a cored profile. The Galactic Center (GC) region has been extensively studied by XMM, and several megaseconds of raw exposure exist. On the other hand, the GC region has strong X-ray emission, and many complicated processes occur there [91-99]. In particular, the X-ray emitting gas may contain several thermal components with different temperatures, and it may be more difficult to reliably constrain the abundances of potassium and argon than in the case of the intracluster medium. Therefore, the GC data alone would hardly provide a convincing detection of the DM signal, as even a relatively strong candidate line could be explained by astrophysical processes. In this paper we pose a different question: are the observations of the Galactic Center consistent with the dark matter interpretation of the 3.53 keV line of [1,2]?
The DM interpretation of the 3.53 keV line in M31 and the Perseus cluster puts a lower limit on the flux from the GC. On the other hand, the non-detection of any signal in off-center observations of the Milky Way halo (the blank-sky dataset of [1]) provides an upper limit on the possible flux from the GC, given observational constraints on the DM distribution in the Galaxy. Therefore, even with all the uncertainties in the DM content of the objects involved, the expected signal from the GC is bounded from both sides and provides a non-trivial check of the DM interpretation of the 3.53 keV line.
We use XMM-Newton observations of the central 14′ of the Galactic Center region (total clean exposure 1.4 Msec). We find that the spectrum has a ∼5.7σ line-like excess at the expected energy. Simultaneous fitting of the GC, Perseus, and M31 data yields a ∼6.7σ significant signal at the same position, with the detected fluxes being consistent with the DM interpretation. The fluxes are also consistent with the non-observation of the signal in the blank-sky and M31 off-center datasets if one assumes a steeper-than-cored DM profile (for example, the NFW profile of Ref. [100]).
Below we summarize the details of our data analysis and discuss the results.
Data reduction.
We use all archival data of the Galactic Center obtained by the EPIC MOS cameras [101] with Sgr A* less than 0.5′ from the telescope axis (see Appendix, Table I). The data are reduced by the standard SAS pipeline, including screening for time-variable soft proton flares by espfilt. We removed the observations taken during MJD 54000-54500 due to the strong flaring activity of Sgr A* in this period (see Fig. 3 in the Appendix). The data reduction and preparation of the final spectra are similar to [1]. For each reduced observation we select a circle of radius 14′ around Sgr A* and combine these spectra using the FTOOLS [102] procedure addspec.
Spectral modeling. To account for the cosmic-ray induced instrumental background we subtracted the latest closed-filter datasets (exposure: 1.30 Msec for MOS1 and 1.34 Msec for MOS2) [103]. The closed-filter data were rescaled so that the background-subtracted flux reduces to zero at energies E > 10 keV (see [104] for details). We model the resulting physical spectrum in the energy range 2.8-6.0 keV. The X-ray emission from the inner part of the Galactic Center contains both thermal and non-thermal components [93,94]. Therefore, we model the spectrum with a thermal plasma model (vapec) plus a non-thermal powerlaw component, modified by the phabs model to account for Galactic absorption. We set the abundances of all elements but Fe to zero and model known astrophysical lines with gaussians [1,2,106]. We selected the ≥ 2σ lines from the set of astrophysical lines of [2,99]. The intensities of the lines are allowed to vary, as are the central energies, to account for uncertainties in detector gain and the limited spectral resolution. We keep the same positions of the lines between the two cameras.
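The closed-filter rescaling described above amounts to choosing a scale factor so that the subtracted spectrum averages to zero above 10 keV; a minimal Python sketch of that step follows (the array names are hypothetical, and this is our reading of the procedure, not the authors' code).

import numpy as np

def subtract_instrumental(energy_keV, obs, closed, e_min=10.0):
    """Rescale the closed-filter spectrum and subtract it from the data."""
    hard = energy_keV > e_min                      # cosmic-ray dominated band
    scale = obs[hard].sum() / closed[hard].sum()   # zero net flux above e_min
    return obs - scale * closed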
The spectrum is binned to 45 eV, giving about 4 bins per resolution element. The fit quality for the dataset is χ² = 108/100 d.o.f. The resulting values of the main continuum components agree well with previous studies [93,94]: the folded powerlaw index (for the integrated point-source contribution), the temperature of the vapec model (∼8 keV), and the absorption column density.
Results. The resulting spectrum of the inner 14′ of the Galactic Center shows a ∼5.7σ line-like excess at 3.539 ± 0.011 keV with a flux of (29 ± 5) × 10⁻⁶ cts/sec/cm² (see Fig. 1). It should be stressed that the 1σ error bars are obtained with the xspec command error (see the discussion below). The position of the excess is very close to the similar excesses recently observed in the central part of the Andromeda galaxy (3.53 ± 0.03 keV) and the Perseus cluster (3.50 ± 0.04 keV), reported in [1], and is less than 2σ away from the one described in [2].
We also performed combined fits of the GC dataset with those of M31 and Perseus from [1]. As mentioned, the data reduction and modeling were performed very similarly, so we only note that the inner part of M31 is covered by almost 1 Msec of cleaned MOS exposure, whereas a little over 500 ksec of clean MOS exposure was available for Perseus (see [1] for details).
We first perform a joint fit to the Galactic Center and M31, and subsequently to the Galactic Center, M31, and Perseus. In both cases, we start with the best-fit models of each individual analysis without any lines at 3.53 keV, and then add an additional gaussian to each model, allowing the energy to vary while keeping the same position between the models. The normalizations of this line for each dataset are allowed to vary independently. In this way, the addition of the line to the combination of the Galactic Center, M31, and Perseus gives 4 extra degrees of freedom, which brings the joint significance to ∼6.7σ.
To further investigate possible systematic errors on the line parameters, we took into account that the gaussian component at 3.685 keV may describe not a single line but a complex of lines (Table II). Using the steppar command we scanned over the two-dimensional grid of this gaussian's intrinsic width and the normalization of the line at 3.539 keV. We were able to find a new best fit with the 3.685 keV gaussian width being as large as 66 ± 15 eV. In this new minimum our line shifts to 3.50 ± 0.02 keV (as some of the photons were attributed to the 3.685 keV gaussian) and has a flux of 24 × 10⁻⁶ cts/sec/cm², with a 1σ confidence interval of (13−36) × 10⁻⁶ cts/sec/cm². The significance of the line is ∆χ² = 9.5 (2.6σ for 2 d.o.f.). Although the width in the new minimum seems too large even for the whole complex of Ar XVII lines (see Discussion), we treat these changes of the line parameters as estimates of the systematic uncertainties. To reduce this systematic, one has either to resolve or to reliably model the line complex around 3.685 keV instead of representing it as one wide gaussian component.
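The quoted significances can be cross-checked by converting ∆χ² for the given number of extra degrees of freedom into a Gaussian-equivalent value. Under a two-sided convention (our assumption about the convention used), the Python sketch below reproduces the 2.6σ figure:

from scipy.stats import chi2, norm

def significance_sigma(delta_chi2, extra_dof):
    p = chi2.sf(delta_chi2, df=extra_dof)   # chance probability of the improvement
    return norm.isf(p / 2.0)                # two-sided Gaussian equivalent

print(significance_sigma(9.5, 2))   # ~2.6, as quoted for the wide-gaussian fit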
Discussion. The signal found in the spectra of the central 14′ of the GC is consistent with the DM interpretation of the spectral feature reported in [1,2] (Fig. 2). For example, given the mass modeling uncertainties in the individual objects, decaying DM with a lifetime τ_DM ∼ 6 × 10²⁷ sec would explain the signals from Perseus and M31 and the non-observation of the line in the blank-sky dataset. Notice that in cluster outskirts the hydrostatic mass may be under-estimated (see e.g. [107]); this would only improve the consistency between the various datasets.
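For orientation, the lifetime follows from the standard decaying-DM flux relation F = S_DM Ω_fov / (4π m_DM τ_DM). In the Python sketch below the column density is a placeholder of the right order of magnitude for the inner Galaxy, chosen purely for illustration (the mass models cited in the text span a range of such values); with it, the observed flux gives a lifetime in the quoted range.

import numpy as np

M_SUN_G = 1.989e33                 # grams per solar mass
PC_CM   = 3.086e18                 # centimeters per parsec
KEV_G   = 1.783e-30                # grams per keV/c^2

flux      = 29e-6                  # GC line flux from the text, cts/s/cm^2
s_dm      = 3000.0                 # assumed column density, M_sun/pc^2 (placeholder)
omega_fov = 2 * np.pi * (1 - np.cos(np.radians(14 / 60)))  # 14' circle, sr
m_dm      = 7.06 * KEV_G           # sterile-neutrino case: m = 2 x 3.53 keV

s_dm_cgs = s_dm * M_SUN_G / PC_CM**2                        # g/cm^2
tau = s_dm_cgs * omega_fov / (4 * np.pi * m_dm * flux)      # seconds
print(f"tau_dm ~ {tau:.1e} s")     # ~7e27 s for these inputs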
The ratio of the flux of the GC signal to the upper bound from the blank-sky observation is consistent with steeper-than-cored DM density profiles of the Milky Way (Table III). For example, the central value of the flux in the GC would correspond to the expectation from the Milky Way mass modelling of Ref. [100]. The predictions of all NFW and Einasto profiles in Table III are within the 1σ range of the GC flux if one assumes a blank-sky flux at the upper bound (0.7 × 10⁻⁶ cts/sec/cm²).
As mentioned in the Results, there is a degeneracy between the width of the Ar XVII complex around 3.685 keV and the normalization of the line in question. If we allow the width of the Ar XVII line to vary freely, we can decrease the significance of the line at 3.539 keV to about 2σ. However, in this case the width of the gaussian at 3.685 keV should be 95−130 eV, which is significantly larger than what we obtain when simulating the complex of four Ar XVII lines via the fakeit command. In addition, in this case the total flux of the line at 3.685 keV becomes higher than the fluxes of the lines at 3.130 and 3.895 keV, in contradiction with the atomic data (Table II).
Another way to decrease the significance of the line at 3.539 keV is to assume the presence of the potassium (K XVIII) ion, with a line at 3.515 keV and a smaller line at 3.47 keV. If one considers the abundance of potassium as a completely free parameter (as was done in [106] for the Chandra data of the Galactic Center), one can find an acceptable fit of the XMM GC data without an additional line at 3.539 keV.
One may attempt to predict the ratio of the fluxes of the K XVIII line and of the strong lines in the 3-4 keV interval (i.e. Ar XVII, Ca XIX, Ca XX, S XVI, etc.) using the known ratio of emissivities and assuming a certain (solar?) ratio of abundances (similarly to the analysis of Section 4 of [2]). The problem with such an approach is that neither in the case of the GC nor in the case of galaxy clusters is the emission well described by a single-temperature model. In [2] a multi-temperature fit to the continuum is found, and based on this model the authors of [2] conclude that a strongly super-solar abundance of K XVIII is required to explain the observed excess.
We did not find a good multi-temperature model of the GC emission that would explain the observed ratio of fluxes of the strong lines. Using the flux in one of the strong lines, assuming a certain plasma temperature and a solar ratio of abundances of the element, one can predict the flux of the K XVIII lines. However, for different temperatures and for different strong lines, the predictions of this procedure vary by more than an order of magnitude. Therefore, it is not possible to reliably determine the abundance of potassium in the GC and to attribute the flux in the 3.539 keV line to the K XVIII ion or to some other unidentified source (dark matter?). Based on the GC data alone, we can neither claim the existence of an unidentified spectral line on top of the element lines, nor constrain it.
However, if we are to explain the presence of this line in the spectra by the presence of K XVIII, we have to build a model that consistently explains the fluxes of this line in different astronomical environments: in galaxy clusters (in particular Perseus) at all off-center distances, from the central regions [2] to the cluster outskirts up to the virial radius [1]; in the GC; and in the central part of M31. On the other hand, this transition should not be excited in the outskirts of the Milky Way and of M31. For such a model it is not enough to find values of temperatures and abundances that would explain the observed flux in each object individually. In particular, in the case of M31 there are no strong astrophysical lines between 3 and 4 keV; however, the powerlaw continuum is well determined by fitting the data over a wider range of energies (from 2 to 8 keV) and allows one to clearly detect the presence of the line at 3.53 ± 0.03 keV with ∆χ² = 13 [1].
We conclude that although it is hard to completely exclude an astrophysical origin of the 3.539 keV line in the spectrum of the GC, the DM interpretation of the signal observed in Perseus and M31 [1] and in the stacked spectra of galaxy clusters [2] is fully consistent with the GC data. Moreover, the GC data rather support this interpretation, as the line is not only observed there at the same energy, but also has a flux consistent with expectations for the DM content of the GC for a whole class of Milky Way mass models.
To check this intriguing possibility further, a more precise resolution of the atomic lines (e.g. with the forthcoming Astro-H mission [108]), independent measurements of the relative abundances of elements in the GC region, or the analysis of additional deep exposure datasets of DM-dominated objects is needed.
FIG. 1: Left: folded count rate for MOS1 (lower curve, red) and MOS2 (upper curve, blue), with residuals (bottom) when the line at 3.54 keV is not added. Right: zoom on the 3.0-4.0 keV range.
Why healthcare leadership should embrace quality improvement
Making quality improvement a core tenet of how healthcare organisations are run is essential to ensuring safe, high quality, and responsive services for patients, write John R Drew and Meghana Pandit
Healthcare staff often have a positive experience of quality improvement (QI) compared with the daily experience of how their organisations are led and managed. 1 This indicates that some of the conditions and assumptions required for QI are at odds with prevailing management practices. For QI to become pervasive in healthcare, we need to change leadership and management.
At a QI event, we listened to an experienced nurse explaining a QI project to improve patient flow. The most striking thing was not her description of the project or what she had learnt or the benefits for patients, but instead how it had made her feel "valued and respected." A manager's job is to achieve organisational goals. In the NHS, this includes meeting emergency and elective targets, such as the referral to treatment target, cancer and diagnostic standards, and the emergency department standard. Clinicians often perceive managerial interactions as authoritarian and lacking patient centredness and see QI as inclusive, bottom-up engagement. 2 Staff appreciate non-hierarchical approaches.
QI can be defined as "a systematic approach that uses specific techniques to improve quality." 2 It requires infrastructure: systematic and disciplined ways to eliminate waste from processes, improve outcomes and experiences for patients, and eradicate mistakes. It requires organisational patience and a culture that empowers staff to achieve positive change. Organisations that foster continuous improvement might say that all staff have two jobs: first, to do their job; second, to improve it.
The nurse we spoke to said that the main difference when working on the QI project was having the time and the "permission" to make improvements in her own work. Staff engagement scores indicate that many NHS clinicians increasingly feel trapped in a flawed system with little prospect of changing it. 3 Understanding why there is a gap between the predominant management practices and culture of the NHS and the "microclimate" associated with local QI activities, and how to close that gap, is vital. Staff often report two contributors to this gap: a lack of "headspace" and feeling like a cog in a machine. 4 Rising demand for healthcare and an estimated 8% vacancy rate 5 in the clinical workforce make it difficult to find time for QI. Some leaders have committed to protecting time for QI because it generates a return in improved quality and productivity. 6 But this is still rare. Without long term strategic commitment, expecting people to find time for their second job is unrealistic. There is growing recognition that this needs to change. 7 8 Increased demand has been compounded by a rise in transparency and regulation, especially in publicly funded health systems, placing managers and leaders under greater pressure. Regulators often require improvement plans to be developed quickly, making meaningful staff engagement difficult. Recent changes in contracts, such as job planning, and pension tax rules in the UK have led many doctors to think that their employment has become more transactional. This, combined with top-down target setting and a narrative of "grip and control," might explain why staff increasingly feel insignificant.
QI as the basis of management QI depends on engaging and empowering the teams delivering care and equipping them with the tools and skills they need to improve care pathways. Ultimately, it means trusting professionals' knowledge and judgment of what patients need and allowing them to make decisions, including the allocation of resources, with appropriate accountability. This requires a shift in managerial and leadership thinking (box 1).
QI needs to become the basis of how organisations are led and managed, replacing traditional, hierarchical structures and incentives. Regulators already recognise this; the Care Quality Commission's report on quality improvement in hospital trusts, for example, says that when leaders and frontline staff work together it creates a powerful sense of shared purpose. 6 This is often present in the NHS trusts that it rates "outstanding," it says. Dido Harding, chair of NHS Improvement, has said, "If all of the boards in the NHS chose to take culture and people management more seriously and put it on a level footing with financial and operational performance, we'd see a huge improvement in culture and outcomes for patients as well." 9 The profound shifts in leadership and management needed for QI to thrive sometimes run contrary to traditional approaches for optimising short term performance. The recent average tenure of an NHS chief executive is 2-3 years, undermining the sustainable culture change needed for QI. 10 Burgess and colleagues describe a different type of governance that fosters learning, citing the partnership of NHS Improvement and five trusts with the Virginia Mason Institute in the United States. 11 Creating a compact with regulators enables a change in attitudes and allows organisations to grow and learn, they say. This promotes board longevity, which is a requirement for continuous improvement. 6
When do QI and good management coalesce?
The most senior leaders might have the greatest challenge; their roles would shift from being responsible for all performance to a devolved model of collective, inclusive, and compassionate leadership. Embedding QI can challenge senior leaders' fundamental beliefs and management practices. Safe healthcare depends on defining and following standards, but an emphasis on engaging frontline staff to develop, apply, and improve those standards is often lacking. Instead, standards are implemented rapidly in a top-down, nonnegotiable fashion. 12 The language of QI often reflects nature, describing organisations as ecosystems to cultivate or living systems to keep healthy rather than machines to optimise. Human factors (such as relationships, trust, and healthy multidisciplinary teams), talent management, succession planning, and assurance are central to this way of working.
Senior leaders must be role models. Their behaviour is amplified throughout the organisations they lead, whether they recognise it or not. Staff will judge what is important by where and how leaders spend their time rather than by what they say.
The Virginia Mason Institute partnership was enabled in 2015 by the secretary of state for health and social care to adopt "lean thinking" (a method developed by Toyota to deliver more benefits to society while eliminating waste) in the NHS. The trusts' progress is being evaluated, but some trusts already report having developed a "golden thread" of QI that is visible to all, leading to improvements in CQC ratings and staff engagement.
Translating QI endeavours into operational and financial success takes time, and caregivers, providers, and regulators need to hold their nerve to see lasting performance improvement. Other healthcare providers have embraced QI methods without formal partnerships with international organisations and have delivered strong long term results. A key feature in most of these cases has been coaching for the most senior leaders and managers (for example, with a "lean" coach, usually people with experience from other industries who have moved into healthcare, or consultants) so that they understand the changes they need to make in their own behaviours and practices. This has been described in the motor industry. 13 So is QI just good management? Management, leadership, and QI are distinct but overlapping. Some leaders are not managers, and vice versa. Some, but not all, leaders and managers will undertake QI, which can be performed in isolation from leadership and management. But integrating all three is likely to optimise outcomes. Broadly, management is controlling a group or team to accomplish a goal. Leadership is influencing others to contribute towards success. Management requires "grip" (staying on top of details, intervening quickly, and giving orders or instructions if performance is below expectations), and QI often requires a deliberate loosening of that grip. This could create conflict unless management has QI as a fundamental principle.
One could argue that QI requires more people to behave like leaders and fewer to behave like managers. In the most radical forms of QI (such as those described in Reinventing Organisations 14 ), many of the roles and responsibilities of management become shared among well functioning, trusted frontline teams. The sense of "them and us" between frontline workforce and management vanishes.
The chairman of the Japanese electronics company Matsushita famously issued a challenge: "The essence of management is getting ideas out of the heads of the bosses and into the heads of labour . . . Business, we know, is now so complex and difficult, the survival of firms so hazardous in an environment increasingly unpredictable, competitive, and fraught with danger, that their continued existence depends on the day-to-day mobilisation of every ounce of intelligence." 15
How can we help leaders get on this path?
Embedding QI in any organisation requires a new narrative from regulators and boards, strategic intent, investment in training leaders and staff, a more distributed leadership model that empowers frontline teams, and a meaningful role for patients so that improvement activity is aligned to what they most need and value. 6 16 It also requires courage and patience from the most senior leaders as they commit to new management practices. Their incentives must depend not only on delivery of top-down targets but also on building a culture conducive to long term quality improvement, which could be personally uncomfortable for them. 17 Quality management systems have an important role. 18 Taiichi Ohno, architect of the Toyota Production System (popularised as "lean"), would instruct managers to spend hours "watching" from within a chalk circle on the factory floor. He wanted managers to learn to see waste and opportunities to improve quality and flow.
Learning good management in healthcare includes not only learning to see opportunities to improve healthcare processes but also noticing the experience of frontline staff, and consequently leading in ways that engage and empower them to "mobilise every ounce of intelligence." This article is one of a series commissioned by The BMJ based on ideas generated by a joint editorial group with members from the Health Foundation and The BMJ, including a patient/carer. The BMJ retained full editorial control over external peer review, editing, and publication. Open access fees and The BMJ's quality improvement editor post are funded by the Health Foundation.
Box 1: Cycles of continuous improvement All QI activities need to start small and then scale up. The transition to full implementation requires constant plan-do-study-act cycles with user involvement and feedback. One QI activity that changed organisational culture received the HSJ National Patient Safety Team Award in 2018.
The process began with the team members asking themselves, if they were a patient, what would they like to happen after a clinical harm incident in a hospital. The team then defined the current state and future vision. Eight frontline staff participated in a five day workshop to define the key steps that would help achieve the desired outputs. They tested the approach over the next few weeks and agreed metrics that were reported to executives at 30, 60, and 90 days. The workshop included patient representatives. Several changes resulted in increased incident reporting and user feedback, introduction of safety huddles, and the creation of an innovative patient safety response team.
Making such changes stick requires constant and consistent messaging and leading by example. Appreciating the efforts of frontline workers, and saying "thank you," is vital.
The future of International Classification of Diseases coding in steatotic liver disease: An expert panel Delphi consensus statement
Background: Following the adoption of new nomenclature for steatotic liver disease, we aimed to build consensus on the use of International Classification of Diseases codes and recommendations for future research and advocacy. Methods: Through a two-stage Delphi process, a core group (n = 20) reviewed draft statements and recommendations (n = 6), indicating levels of agreement. Following revisions, this process was repeated with a large expert panel (n = 243) from 73 countries. Results: Consensus ranged from 88.8% to 96.9% (mean = 92.3%). Conclusions: This global consensus statement provides guidance on harmonizing the International Classification of Diseases coding for steatotic liver disease and future directions to advance the field.
INTRODUCTION
Nomenclature changes in the field of steatotic liver disease (SLD) were recently proposed and are currently being adopted by a wide range of stakeholders. [1] Among the suggested modifications, the change from NAFLD to metabolic dysfunction-associated steatotic liver disease (MASLD) reflects dropping of the "nonalcoholic" label, enabling the inclusion of positive diagnostic criteria while removing a potentially stigmatizing classification. The intake of alcohol as a disease contributor is also acknowledged in the new nomenclature, with the introduction of the term "MASLD and alcohol-associated liver disease (ALD)", abbreviated as MetALD. [1] Moreover, the nomenclature process introduced new defining criteria for MASLD and MetALD. [4] As a consequence of these nomenclature changes, and to aid in their implementation, administrative coding will need to be adjusted. Globally, the International Classification of Diseases (ICD) coding system is the most used. We thus aimed to build consensus on the appropriateness of using the current ICD NAFLD and NASH codes to code MASLD and metabolic dysfunction-associated steatohepatitis (MASH), respectively. We also sought to develop recommendations to guide research and advocacy on amending future ICD codes for SLD. While ICD systems vary at the local level (eg, which version is in use), ICD-10 is currently the dominant system. Nonetheless, following its release in 2022, ICD-11 use will be gradually introduced over the coming years.
METHODS
We performed a two-stage Delphi process. First, a core group (n = 20) indicated their agreement or disagreement with statements and recommendations (n = 6) (Supplemental Table 1, http://links.lww.com/HC9/A809), using "yes" to agree and "no" to disagree, through Microsoft Forms, from July 23 to August 6, 2023. Respondents were also invited to provide qualitative feedback on each item and overall, which was considered during item revisions. This group included individuals who had previously contributed to a consensus statement on the use of NAFLD ICD codes in research [5] and key opinion leaders involved in the nomenclature change.
The second stage involved inviting a panel of individuals with SLD experience to indicate their level of agreement ("agree," "somewhat agree," "somewhat disagree," or "disagree") with the modified items (n = 6) (Table 1), using the described methodology, [6] through Qualtrics XM, from October 6-23, 2023. Respondents were also invited to provide qualitative feedback on each item and overall, which was considered during manuscript writing. Invitees who were not familiar with ICD codes and their use could opt out. Respondents who did not feel qualified to indicate their level of agreement with a survey item could choose the option "not qualified to respond." For the purposes of this study, we defined reaching consensus as having > 80% agreement on each item, with overall agreement being the sum of the "agree" and "somewhat agree" categories in stage 2.
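The consensus rule used here is simple to make concrete. The following is a minimal sketch (not the study's analysis code; the item name and response counts are hypothetical, for illustration only) of how stage-2 agreement can be tallied under the > 80% rule, with overall agreement computed as the sum of the "agree" and "somewhat agree" categories:

```python
# Hypothetical stage-2 Delphi tallies; item and counts are illustrative only.
responses = {
    "Item 1 (MASLD coded with K76.0)": {"agree": 170, "somewhat agree": 55,
                                        "somewhat disagree": 12, "disagree": 6},
}

CONSENSUS_THRESHOLD = 80.0  # percent, per the study definition

for item, counts in responses.items():
    answered = sum(counts.values())  # "not qualified to respond" answers excluded
    overall = counts["agree"] + counts["somewhat agree"]
    pct = 100.0 * overall / answered
    status = "consensus" if pct > CONSENSUS_THRESHOLD else "no consensus"
    print(f"{item}: {pct:.1f}% overall agreement -> {status}")
```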
Ethical considerations
This study received ethical review exemption from the Hospital Clínic of Barcelona, Spain, ethics committee on October 4, 2023. All research was conducted in accordance with the Declarations of Helsinki and Istanbul. Respondents consented to participating, and data were anonymized for all analyses.
RESULTS
A total of 479 individuals were invited to participate in stage 2, of whom 269 (56.2%) responded. Of these, 26 (9.7%) opted out as they were not familiar with ICD codes and their use. The 243 respondents (90.3%) who completed the survey worked in 73 countries and had a mean age of 53.9 (SD: 9.4). Most respondents were male (65.4%), worked in high-income countries (66.3%) and in the Europe and Central Asia World Bank region (41.2%), and primarily worked in academia (67.9%) and as clinicians/medical doctors (72.8%) (Supplemental Table 2, http://links.lww.com/HC9/A809, contains further panelist details).
In stage 2, consensus ranged from 88.8% to 96.9% (mean = 92.3%). Four items had < 80% "agree" responses and relied more heavily on the "somewhat agree" category to reach a consensus. A total of 351 qualitative comments were provided across items. There was ≥ 88.8% consensus that MASLD, MASH, and ALD are currently best coded with K76.0, K75.8 or K75.81, and the K70 spectrum of ICD-10 codes, respectively. As for MetALD, which has no ICD code as it was newly introduced, 89.2% agreed that using the ICD code for the perceived dominant disease driver (MASLD or ALD) on an individual basis was preferable while awaiting updates to the ICD system. In terms of recommendations, 91.2% of participants agreed that research should prioritize how best to distinguish between MASLD, MetALD, and ALD when using historical data. Furthermore, the consensus that international societies should advocate for a global update of ICD terminology to better reflect the SLD nomenclature changes was 86.4%.
DISCUSSION
This study found that, among a large panel of experts working across 73 countries, there was a high degree of consensus that NAFLD and NASH ICD codes can be updated to reflect the new MASLD and MASH names and definitions, respectively, without the need for new codes. Renaming the administrative terms across various systems and countries to reflect the nomenclature change should be a priority. This is important, as introducing coding changes may lead to considerable difficulties in comparing study results and interpreting disease epidemiology patterns across settings and over time. It should be noted that definition and ICD code modifications will not mitigate the challenge of correctly calculating the amount of alcohol consumed by patients, but we hope that the recommendation to focus research on identifying how best to distinguish between MASLD, MetALD, and ALD will promote investigations around this topic. Further work to introduce novel ICD codes that specifically define MetALD is needed, which may be achieved through discussions with national and regional norm-setting bodies and the World Health Organization, which maintains and updates the ICD system.
CONCLUSIONS
This global expert consensus statement recommends that the currently available ICD codes for NAFLD and NASH can be used to define MASLD and MASH, respectively, although advocacy is needed to update ICD terminology to better reflect the nomenclature change and introduce new codes for MetALD specifically.
An Object Detection Method for Missile-Borne Images Based on Improved YOLOv3
Detecting small objects in complex circumstances is an important topic in today's object detection research [1], especially in the military domain, which demands reliable, stable, and accurate detection results. To improve the detection of small objects, we modified the structure of the YOLOv3 network by replacing the convolution module in the original network with multi-branch scale convolution, increasing the adaptability of the network to objects of different sizes, and by reducing the number of network layers to balance the depth and width of the network, while also improving the feature extraction and representation capabilities. Working from a small number of data sets, we simulate complex environments composed of different weather, illumination, motion blur, and rotational blur, and we enhance and extend the data during network training. System simulation experiments show that small objects can be recognized in such complex environments, which provides a reference for object detection in missile-borne images.
Introduction
With the improvement of science and technology and the modernization of warfare, weapon systems should be equipped with swift reaction speed and attack accuracy. A distant object often appears as a small object in the field of view, and the contrast between the object and the background is low [1], making the object more difficult to detect. Based on this analysis, detecting small objects in a complex battlefield is of great military value and research significance.
In the field of deep learning object detection, successfully applying it to weapon systems is very difficult. There are three main problems [2]: first, the amount of data is insufficient; in the military field, photos from object reconnaissance are the scarcest resource. Second, the conditions under which reconnaissance images are taken differ from those during an attack, including illumination, noise, and weather, so there is no guarantee that a detection algorithm trained on one set of reconnaissance data can adapt to attacks in all climates. Third, it is difficult to detect small objects (the definition of small objects follows the MS COCO data set), as shown in Table 1.
Based on the above difficulties, we improve the YOLOv3 network [3]: we strengthen the detection of small-scale objects, change the convolution module of the original network into a multi-branch scale convolution, and use data enhancement so that the neural network can more easily detect small objects in complex environments. At present, the main bottleneck restricting the performance of missile-borne image object detection algorithms is the overly complicated and constantly changing scene. Because of its special application scenario, the imaging quality of the missile-borne camera is closely related to the missile body's flight attitude and the battlefield environment. In most cases, missile-borne images present the following problems:
a) High real-time requirements. The short flight time of the missile, and the brief interval between capturing the object image and landing, place high demands on the speed and efficiency of the object detection process.
b) Complex background. During missile motion, the background captured by the missile-borne camera changes constantly, and changing weather also affects the imaging quality of the missile-borne image. For example, image clarity is degraded under cloudy, rainy, foggy, and hazy conditions, producing considerable noise during detection and recognition.
c) Relatively small objects. Given the application environment of the missile-borne camera, which starts to work at the terminal guidance stage, the long distance to the object and the small object size mean that few features can be extracted. In this scenario, object detection and recognition algorithms perform poorly.
YOLOv3 object detection
Referring to the network structures of SSD [4] and ResNet [5], YOLOv3 designs Darknet53, a basic classification network model. Compared with VGG-16, the common feature extraction network for object detection, Darknet53 reduces the computational complexity of the model.
In the YOLOv3 network, the Darknet53 structure is used for image feature extraction, and the yolo structure is used for multi-scale prediction. Specifically, a series of convolution operations is applied to the feature maps (13×13 pixels, 1024 dimensions) output by Darknet53. The minimum-scale yolo layer is formed after connecting the shallow-layer feature maps: the 13×13×512-channel feature map extracted from layer 79 is convolved to 256 channels, and a 26×26×256 feature map is then generated by upsampling. Combining it with the feature map of layer 61 and convolving them forms the mesoscale yolo layer, and the large-scale yolo layer is obtained by the corresponding convolution operations on layer 91 and layer 36. Each feature map prediction includes the coordinates X and Y of the bounding box, its width W and height H, the object confidence IOU, and the category prediction score of the grid. Taking two types of buildings as strike objects as an example, the number of channels is (4+1+2)×3 = 21, and the three scale feature maps are 13×13×21, 26×26×21, and 52×52×21, respectively; the structure of Darknet53 is shown in Figure 1. After testing the YOLOv3 network directly on our data set, we find that detection is difficult under two conditions: when rotation blur is present and when the object is smaller than a few dozen pixels. Therefore, the improved YOLOv3 network described later enhances the detection of small-scale blurred objects.
3. Missile-borne Image Object Detection Based on the Improved YOLOv3
3.1. The improved network structure
To ensure the running time and the detection of small objects of 15 to 30 pixels, we change the convolution module in the original network to a multi-branch scale module, making the network more adaptable to different object sizes. We also reduce the number of network layers to balance the depth and width of the network [6]. Spatial aggregation, based on multi-branch scale convolution, can be completed through low-dimensional embedding, and the module's capacity for feature extraction and representation does not decline. Its internal structure can be divided into three branches, as shown in Figure 2.
Figure 2. Structure of the improved YOLOv3
Among them, DepthConcat links the feature maps of the branch convolutions by depth to form a map of constant size with stacked depth, which is followed by a 1×1 convolution layer. This layer does not change the height and width of the map; it changes the depth to achieve dimension reduction, facilitating the linear combination of multi-channel features and integrating information among the paths. A shortcut layer is then added to overlay the module input onto its output. Only a part of the information can be extracted by each convolution layer during forward propagation; the more layers a small object's features pass through, the less information is learned and retained, which is likely to cause underfitting. Adding the shortcut structure, i.e., adding all the information of the final convolution map within each module, retains more of the initial features. With the shortcut structure, the network becomes an optimal output selection model: the output is learned from the previous block and the combined convolutions, and the network can be regarded as a parallel structure. Finally, feature extraction combinations at different levels are obtained, so the network can learn the most suitable model and parameters.
Taking the feature pyramid network FPN [7] as a reference, a multi-scale fusion method is used for prediction. The large-scale 52×52 feature map (the output of the first multi-branch module) provides resolution information for small objects [8]. The last two layers of the multi-branch module provide both resolution and semantic information for regular and small-scale objects. Detection at different scales can effectively handle objects of different sizes; even if the same object is detected in multiple feature layers, the detection effect is not affected, because the best detection is selected by non-maximum suppression. The specific structure is shown in Figure 2.
3.2. Other improvements
3.2.1. Anchor box calculation
The original anchor boxes are not suitable for training on our small-object data, and their settings affect the accuracy and speed of object detection. We therefore cluster the candidate boxes of the small-object dataset in the spatial dimension and calculate the optimal number of anchors. The anchor number and dimensions in YOLOv3 were obtained by clustering the VOC and COCO datasets [9,10], which are not suitable for long-distance small-object detection. Therefore, the number of anchors and their width and height dimensions are redefined.
The k-means clustering algorithm, based on unsupervised learning, is used to cluster and analyze the object frames: all object frame sizes are extracted from the data sets, and similar objects are classified into the same category, finally yielding k, the number of anchor frames used as scale parameters during training. Standard k-means clustering uses the Euclidean distance, which means that larger-scale frames produce larger errors. Therefore, the IOU overlap degree, i.e., the intersection of the candidate box and the real box divided by their union, is used as the evaluation index, avoiding the error that larger bounding boxes would otherwise contribute compared with smaller ones. The distance function replacing the Euclidean distance is d(box, truth) = 1 − IOU(box, truth), and the clustering objective is to maximize the average IOU between the boxes and their assigned cluster centers. Here box represents the candidate box, truth the real object box, and k the number of anchor boxes. A sketch of this clustering step follows.
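The following is a minimal sketch of the anchor clustering step (illustrative only, not the authors' code): boxes are represented by their widths and heights, the distance is 1 − IOU, and cluster centers are updated with the per-cluster median, a common choice in YOLO-style anchor clustering.

```python
import numpy as np

def iou_wh(boxes, centers):
    """IOU between (w, h) boxes and cluster centers, both aligned at the origin."""
    inter = np.minimum(boxes[:, None, 0], centers[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], centers[None, :, 1])
    union = boxes[:, 0] * boxes[:, 1]
    union = union[:, None] + centers[None, :, 0] * centers[None, :, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(n_iter):
        assign = np.argmin(1.0 - iou_wh(boxes, centers), axis=1)  # d = 1 - IOU
        for j in range(k):
            if np.any(assign == j):
                centers[j] = np.median(boxes[assign == j], axis=0)
    return centers

# boxes: (N, 2) array of ground-truth (width, height) pairs in pixels
boxes = np.array([[18, 22], [25, 27], [15, 16], [30, 28], [20, 19]], float)
print(kmeans_anchors(boxes, k=3))
```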
Data enhancement
During network training, the 400 data sets are expanded with data enhancement [11]. The following factors are added: affine transformation, rotation blur, jitter blur, brightness changes, left-right and up-down flips, hue, saturation, Gaussian noise, sharpening, pixel-wise scaling, piecewise affine transformation, and climate conditions such as snow, cloud, and fog. These data enhancement methods are stored in a sequence. When the image data is actually read, the original image and the object marker frame are enhanced simultaneously in a random way, which increases the data simulating complex environments without affecting the position of the marker box. A sketch of such a paired transform follows.
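The key point, that the image and its marker boxes must be transformed together, can be sketched in a few lines of NumPy (illustrative only; the paper's pipeline uses a much richer set of augmenters):

```python
import numpy as np

def augment(image, boxes, rng):
    """Randomly flip, brighten, and add noise to an image, updating boxes accordingly.

    image: (H, W, 3) uint8 array; boxes: (N, 4) array of (x1, y1, x2, y2).
    """
    h, w = image.shape[:2]
    img = image.astype(np.float32)
    boxes = boxes.astype(np.float32).copy()
    if rng.random() < 0.5:                       # horizontal flip
        img = img[:, ::-1, :]
        boxes[:, [0, 2]] = w - boxes[:, [2, 0]]  # mirror x1, x2 (order swapped)
    img *= rng.uniform(0.7, 1.3)                 # brightness jitter
    img += rng.normal(0, 8.0, img.shape)         # Gaussian noise
    return np.clip(img, 0, 255).astype(np.uint8), boxes

rng = np.random.default_rng(0)
image = np.zeros((480, 640, 3), np.uint8)
boxes = np.array([[100, 120, 130, 150]])
aug_img, aug_boxes = augment(image, boxes, rng)
print(aug_boxes)
```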
Through data enhancement in network training, the distributed data density is increased synchronously in the spatial dimension. The expanded data set significantly improves object recognition and network generalization; Table 2 shows the comparative experimental results.
3.2.2. Non-maximum suppression
The effect of non-maximum suppression (NMS) [12] is to suppress the parts that are not the maximum value; it can also be called local maximum search. The improved YOLOv3 network directly outputs many object boxes: about 8 object boxes are detected in an image collected by a camera with a resolution of 640×480. We want to eliminate the multiple overlapping detection boxes of one object and reserve only the best one for each object. Here, non-maximum suppression is used to suppress the non-maximal elements and refine the number of boxes: traversing the highly overlapped bounding boxes preserves the prediction box with the highest confidence, and non-maximum suppression is performed separately for each class of objects. A minimal sketch of this procedure follows.
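The following is a minimal NumPy sketch of greedy per-class NMS (an illustration, not the paper's implementation): boxes are sorted by confidence, and any box overlapping the current best by more than a threshold IOU is suppressed.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS. boxes: (N, 4) as (x1, y1, x2, y2); scores: (N,). Returns kept indices."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # intersection of box i with the remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]  # drop boxes overlapping too much with box i
    return keep

boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 140, 140]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # -> [0, 2]
```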
The detection effect is shown in Figure 3: the left image shows the detection result output by the network, and the right one shows the result after suppression.
Experiment Simulation
The operating system of the experimental platform is Ubuntu 16.04, the processor is an Intel Xeon E5-2620 v4 at 2.10 GHz, and the GPU is an Nvidia 1080 Ti.
Small object detection results
Data enhancement is added to both the YOLOv3 and the improved YOLOv3 network experiments, but it plays a more significant role in improving small-object detection accuracy for the latter. Table 3 shows the comparative experimental results. The number of recognized images is 1000, and the object size is between 25 and 100 pixels. The detection effect can be seen in Figure 4: the first column shows the detection effect of the YOLOv3 algorithm, and the second that of the improved one. The improved algorithm performs better in detecting objects against backgrounds involving different scales, rain, motion blur, rotation blur, and snow. The experiments show that, by adding data enhancement and training on various noises, the model makes the detection algorithm considerably more accurate. Even with a small amount of data, the object detection algorithm achieves good robustness and identifies small objects against complex backgrounds.
Conclusion
In this paper, a missile-borne image object detection method based on an improved YOLOv3 is proposed. The method works with a small number of data sets. Multi-branch structures and residual shortcut layers are added while the number of network layers is reduced; the K-means clustering method automatically generates anchor areas, enhancing the representational power of the feature maps and the recognition of small objects. Data enhancement and expansion increase the diversity of the training data, simulating the influence of factors such as climate, blur, and noise. Together these improve the performance of small-object detection in complex backgrounds and boost detection accuracy. For object detection research on missile-borne images, this method has practical application value in engineering.
Detection and cellular localization of Xanthomonas campestris pv. viticola in seeds of commercial 'Red Globe' grapes
Commercial grapevine fruit (Vitis vinifera) of the Red Globe variety were collected in vineyards of the Vale do São Francisco lower basin, an area of occurrence of grapevine bacterial canker. Seeds were extracted, classified as symptomatic or asymptomatic, and processed for observation under light (LM) and scanning electron microscopy (SEM), together with silver-enhanced immunogold labeling, to allow bacterial detection using a polyclonal antibody against Xanthomonas campestris pv. viticola (Xcvi), the etiological agent of the disease. The seed samples showed bacterial aggregates associated with the tegument surface and with the first parenchymal layer beneath the seed tegument. Bacterial identity was confirmed by immunogold labeling. This appears to be the first report of Xcvi associated with asymptomatic seeds and berries, suggesting a systemic mechanism of spread and colonization of different tissues and sites, and drawing attention to seeds as an important niche for survival and dissemination of this pathogen. These results point to the need to include seed-bearing fruit in studies of Xcvi epidemiology.
INTRODUCTION
Xanthomonas campestris pv. viticola (Nayudu) Dye (Xcvi) is the causal agent of bacterial canker of grapevine (Vitis vinifera L.). Presently, it is the main grapevine bacteriosis in the Vale do São Francisco lower basin (northeast Brazil) and is classified as a quarantine pest. Since its introduction in the northeastern region more than a decade ago (Malavolta et al., 1998; Lima et al., 1999), Xcvi has caused serious damage to grapevines. Primary symptoms such as necrotic lesions on leaves are commonly observed in areas where the disease occurs. Cankers derived from vascular tissue obstruction prevail in the branches, tendrils and stalks, stopping sap flow and impairing the vegetative and reproductive growth of the grapevine. Secondary symptoms such as wilting and drying of branches are also observed (Araujo & Robbs, 2000), as well as an abnormal development of clusters and berries that can depreciate the fruit or even make it commercially unviable.
Severity of symptoms varies according to the cultivar. Among the different materials planted in the region, the variety Red Globe is the most vulnerable to the disease (MAPA, 2012), mainly during the rainy season, when higher humidity associated with high temperatures allows an exponential increase of pathogen populations, increasing fruit losses.
Studies on the interaction of Xcvi with its natural host were performed by Araujo et al. (2004) on symptomatic and asymptomatic materials, using scanning electron microscopy among other techniques. The authors concluded that bacterial cells adhere to the plant surface by means of non-polar fixing in monolayers, and that the most frequent adhesion sites are the surface of veins and trichomes on leaf blades. After reaching a favorable site, Xcvi can resist removal, a selective advantage responsible for increasing and stabilizing the resident population (Araujo, 2001; Nascimento & Mariano, 2004). Araujo et al. (2005) developed antibodies against Xcvi. These allowed the authors to specifically detect Xcvi cells associated with the surface and the interior of symptomatic and asymptomatic plant tissues in grapevines. Based on structural interaction studies and on specific immunolocalization, the authors showed the pathogen's capacity to cause local and systemic infection in the vegetative and reproductive axes.
The systemically colonizing nature of some species of the genus Xanthomonas is associated with different plant hosts, although Ryan et al. (2011) reported that some species in the group, such as Xanthomonas citri pv. citri, usually cause local infection and colonization. Therefore, studying each individual pathosystem is important.
Studies by Araujo et al. (2004) advanced the understanding of the grapevine-Xcvi pathosystem. However, not all plant organs were analyzed. The current study investigated whether seeds of commercial fruit of the variety Red Globe could be a potential niche for bacterial colonization and settlement, using light and scanning electron microscopy associated with immunolocalization techniques.
MATERIAL AND METHODS
In situ collections were performed in the irrigated perimeter of Senador Nilo Coelho, micro-region of the Vale do São Francisco River in the state of Pernambuco, Brazil. Vineyards planted with the Red Globe variety and with a history of grapevine bacterial canker were evaluated. Asymptomatic grape clusters were collected in the area. The seeds were extracted from the berries, classified as symptomatic or asymptomatic (Figure 1), and parts of this material (measuring approximately 0.5 cm²) were used for light microscopy. The material used for scanning electron microscopy was transected vertically and longitudinally in the median portion. Parts of 20 seeds (ten symptomatic and ten asymptomatic) were immediately transferred to a fixing solution containing 2.5% glutaraldehyde and 4% paraformaldehyde in potassium phosphate buffer (50 mM, pH 6.8). After 24 hours at room temperature, the segments were washed three times for 15 min in the same buffer.
Additional steps were performed following the protocol of James et al. (1994).
Light microscopy
Samples were dehydrated in an increasing ethanol series (15, 30, 50, 60, 70, 80, 90, 100% ethanol in water v/v) and kept in each solution for 15 min. Next, they were infiltrated with LR White medium grade acrylic resin (London Resin Company) for seven days and kept in the refrigerator. Blocks were prepared by placing individual samples in transparent gelatin capsules containing resin, which were polymerized in an oven (60ºC) for 18 hours. The polymerized capsules were selected, observed under the ultramicrotome loupe, and trimmed with a steel blade to obtain trapezoid-shaped blocks. Semi-thin sections (0.7-1 µm) were obtained from these blocks using a Reichert-Jung Ultracut E ultramicrotome. The sections were collected on glass slides heated on metallic plates. Toluidine blue staining (0.1%) was performed, followed by renewed heating of the slides. Samples were examined with a Zeiss Axioplan light microscope.
Scanning electron microscopy
The samples were post-fixed in an aqueous solution of osmium tetroxide (1%), washed three times in phosphate buffer, and dehydrated in an increasing acetone series (30, 50, 70, 90, 100% acetone). Selected samples were then transferred to a critical point drier (Mod. CPD 030, Bal-tec), where they were completely dried by replacing the acetone with liquid carbon dioxide kept under high pressure. The liquid CO2 was then converted into gaseous CO2 by applying a temperature of 36ºC and a pressure of 70 atm (the CO2 critical drying point). After this procedure the samples were covered with a thin metal film to increase electronic conductivity, applying an 18 mA current and a gold deposition time of 240 s in an automatic sputter coater (Mod. SCD 050, Bal-tec), providing a 300 nm thick coating. Samples were examined with a Zeiss DSEM 962 scanning electron microscope.
Immune-staining with colloidal gold followed by silver enhancement
Semi-thin tissue sections (0.9 µm) were placed in a drop of water on a microscope slide, covered with 1% gelatin solution in water and fixed at 40ºC, followed by incubation in blocking solution (3% bovine serum albumin, 0.2% Tween 20 in phosphate buffered saline) for 1 hour at room temperature. Then, after a quick wash in fresh water, the primary AC4558 antiserum (Araujo et al., 2005) was applied at a 1:400 dilution. The material was incubated in a moist chamber for 1 hour at 25ºC. Sections were then washed in 0.5% bovine serum albumin, 0.1% Tween 20 in PBS and then in sterilized distilled water (5 min each), and dried with a paper towel. Next, 50 µl of the secondary antibody, conjugated to 5 nm diameter gold particles and diluted 1:100 in IGL solution, were applied for 4 hours at 25ºC in a moist chamber. After this period, the slides were again washed for 5 min in 0.5% bovine serum albumin, 0.1% Tween 20 in PBS and in sterilized distilled water (5 min each) and dried. To allow visualization under the light microscope, the immunogold labeling was silver enhanced using the Intense SE BL Silver Enhancement set (Janssen Life Sciences Products) as described by Vanderbosh et al. (1986). The reaction was visualized by the deposition of an opaque blackish/brownish precipitate around the gold particles linked to the secondary antibody and corresponding to Xcvi epitopes, the specific detection target.
RESULTS AND DISCUSSION
Berries collected in vineyards planted with 'Red Globe', where grapevine bacterial canker had already been detected, presented symptomatic and asymptomatic seeds (Figure 1). In all ten symptomatic seeds it was possible to confirm the presence of Xcvi, whereas eight of the ten asymptomatic seed samples were also infected.
Observation of symptomatic seeds under the SEM (Figure 2A, B) confirmed the presence of bacterial aggregates adhered to the tegument surface. Tissue disarray became evident in these regions, a characteristic of canker symptomatology following Xcvi colonization (Figure 2C, D). Observations at 3000× and 5000× magnification indicated the presence of bacilliform cells 1.9-2.3 µm long and 0.6-0.8 µm wide (Figure 2E, F), a typical morphological characteristic of the genus Xanthomonas as described by Swings et al. (1993) and Araujo et al. (2004). This was evidence of bacterial colonization, and the identity of Xcvi was further confirmed using immunostaining techniques.
Examining the electron micrographs of asymptomatic seeds (Figure 3A), it was possible to observe Xcvi cells in the cavity close to the connection regions between the seeds and the petiole, where the vascular bundles that individually feed the seeds are set (Figure 3B). The image captured exactly in this region, a connection point, suggests that it is the initial site of colonization and settlement in the seeds (Figure 3C, D), since these seeds show no symptoms and thus a low Xcvi population density (Figure 3E, F) compared with the intensely colonized symptomatic tissues (Figure 2E, F). This observation corroborates the hypothesis of Araujo et al. (2004) regarding systemic colonization by Xcvi: the presence of Xcvi in the conductive elements that link and feed the whole plant turns these elements into an infection path to the fruit and seeds.
Light microscopy analysis of transversal and longitudinal sections stained with toluidine blue made it possible to distinguish the different tissue systems in the anatomy of the Red Globe grape seed (Figure 4A). Moreover, this technique localized the bacteria inside the seed tissue, colonizing the inner space between the endosperm and the tegument. In the colonization sites there is a possible deposition of plant gels, indicated by the metachromatic staining behavior of high-density negative charges, characterized by a purple color (Figure 4B).
Finally, confirmation of the etiological agent of grapevine bacterial canker associated with asymptomatic fruit seeds was possible by analyzing the images obtained after colloidal gold immunostaining followed by silver enhancement (Figure 4C-F). Based on the specificity of the AC4558 antiserum, it was possible to prove the identity of the bacteria observed under the light microscope, with the formation of black precipitates at the Xcvi infection and colonization sites (Figure 4D, E). These results are in conformity with the reports by Araujo et al. (2005) and Tostes (2012). The methodological sequence used to identify the bacterium is schematized in Figure 4C, based on studies by James et al. (1994) and Olivares et al. (1997), who reported immunostained diazotrophic endophytic bacteria associated with wheat and sugar cane crops. First, the AC4558 antiserum binds to epitopes on the Xcvi cell wall. Next, the secondary antibody linked to gold particles binds to the primary one. The black precipitate in the images represents the deposition of silver ions around the colloidal gold, allowing signal detection at light microscope resolution and, consequently, the observation of Xcvi-colonized spots (Figure 4D, E). The specificity of the reaction can be verified by comparing the staining patterns observed in Figure 4E with the negative control (pre-immune serum replacing the anti-Xcvi primary antibody; Figure 4F).
Despite considerable progress in detecting Xcvi in asymptomatic samples (Araujo et al., 2005; Freitas, 2012), several questions remain to guide future research. The ecology of this bacterium is poorly understood, and little is known about its epiphytic growth, mechanisms of pathogenesis, and modes of dissemination and survival.
Xcvi in infected grapevine tissues is known to be an important source of primary inoculum. Silva et al. (2012) observed that Xcvi survives in infected grapevine tissues on the soil surface for at least 80 days, but is eradicated by composting within 10 days.
Bacterial diseases associated with seeds remain a problem, causing significant economic losses and being responsible for the re-emergence of disease through introduction into new areas (Gitaitis & Walcott, 2007). According to Darrasse et al. (2010), seeds are passive carriers of a diverse microbial flora that can affect seedling physiology and favor disease spread. Although grapevine is not commercially propagated by seed, the majority of commercial cultivars have seeds, which makes the spread of Xcvi from one region to another along with the marketed berries possible. Thus, our research emphasizes the importance of seeds as an inoculum source.
Structural analysis of symptomatic and asymptomatic grape seeds from areas affected by grapevine bacterial canker demonstrated the presence of bacteria associated with the tegument surface and/or the tissue interior. The presence of Xcvi in symptomatic and asymptomatic seeds, confirmed by immunological techniques, represents the first evidence of Xcvi in seeds from berries of asymptomatic clusters. Considering that grapevines are not usually evaluated for seed phytosanitary status, this colonization niche could be important for the dissemination and survival of Xcvi.
FIGURE 1 - Berries and seeds of the Red Globe variety. A. Asymptomatic berries with symptomatic (1) and asymptomatic (2) seeds; B. Seeds with typical symptoms on the tegument; C. Absence of canker symptoms.
FIGURE 2 - Scanning electron microscopy of grape seeds of the Red Globe variety. A. Canker symptoms and epithelial tissue detachment; B. Impairment of cellular and tissue integrity; C, D. Mass of bacterial cells colonizing large areas of injured tissue; E, F. Mass formed by bacilliform cells.
FIGURE 3 - Scanning electron microscopy of asymptomatic grape seeds of the Red Globe variety. A. Absence of canker symptoms; B. Detailed view of the vascular bundle insertion point; C, D. Link of the vascular bundle colonized by Xcvi; E, F. Xcvi cells adhered to the tegument surface, presumably initiating colonization of the seed's internal cavity.
FIGURE 4 - A. Bright field optical microscopy of transversal sections of seeds of asymptomatic Red Globe grapes stained with 0.1% toluidine blue. Tissue organization and integrity can be seen, with the following anatomical sequence: 1, parenchyma; 2, endosperm; 3, internal space between the endosperm and the tegument; 4, tegument; B. Detailed view of the toluidine blue metachromasy pattern in the presence of bacterial aggregates (arrows) found beneath the tegument layer (note the purple color); C. Schematics of the immunostaining biochemical reaction; D. Bright field optical microscopy of a non-stained longitudinal section, showing the efficiency of the immunostaining reaction through the formation of black precipitate around the Xcvi cell mass. The arrow indicates Xcvi cells leaving the internal area of the tegument towards the external area through a cavity; E. Positive immunostaining reaction and a detailed view of the Xcvi location in the seed's internal tissue; F. Negative control of the immunostaining assay, showing the absence of black precipitate where Xcvi is located.
Underestimate Sequences via Quadratic Averaging
In this work we introduce the concept of an Underestimate Sequence (UES), which is a natural extension of Nesterov's estimate sequence. Our definition of a UES utilizes three sequences, one of which is a lower bound (or under-estimator) of the objective function. The question of how to construct an appropriate sequence of lower bounds is also addressed, and we present lower bounds for strongly convex smooth functions and for strongly convex composite functions, which adhere to the UES framework. Further, we propose several first order methods for minimizing strongly convex functions in both the smooth and composite cases. The algorithms, based on efficiently updating lower bounds on the objective functions, have natural stopping conditions, which provide the user with a certificate of optimality. Convergence of all algorithms is guaranteed through the UES framework, and we show that all presented algorithms converge linearly, with the accelerated variants enjoying the optimal linear rate of convergence.
Introduction
In this work we are interested in solving the strongly convex, composite, unconstrained optimization problem

min_{x ∈ R^n} F(x) := f(x) + h(x).  (1)

We use x* to denote the optimal solution of (1), and F* := F(x*) to denote the associated optimal function value. It is assumed that h(x) is a convex and possibly nonsmooth function. Furthermore, throughout the paper we make the following assumption regarding the function f.
Assumption 1
The function f(·) is µ-strongly convex and L-smooth, i.e., for all x, y ∈ R^n it holds that

f(y) ≥ f(x) + ⟨∇f(x), y − x⟩ + (µ/2)‖y − x‖²,  (2)
f(y) ≤ f(x) + ⟨∇f(x), y − x⟩ + (L/2)‖y − x‖².  (3)

It is straightforward to show that strong convexity of f(x) implies strong convexity of F(x).
For problems of the form (1) which satisfy Assumption 1, it is well known that Nesterov's methods [17,19,21] converge linearly, with the accelerated variants converging at the optimal rate of (1 − √(µ/L)).
Nesterov's acceleration approach, and the idea of adding momentum, has led to the extensive analysis of accelerated first order methods in a variety of settings. This includes a recent surge of interest in investigating stochastic gradient methods [24,25,10] and their accelerated variants [6,26,11,22]. Coordinate descent methods [20,23] are another class of algorithms that have proved extremely popular, largely because they can take advantage of modern parallel computing architecture [9,15], and this has also inspired much research into studying their accelerated versions [8,1,14]. However, while the theoretical and practical performance of Nesterov's methods is well established, a satisfactory geometric interpretation of these approaches has been elusive.
The Optimal Quadratic Averaging (OQA) algorithm of [7] maintains a quadratic lower bound on the objective, formed at each iteration by averaging a newly generated quadratic lower bound with the lower bound from the previous iteration. The gap between the function value f(x_k) and the minimum value of the lower bound, φ*_k say, converges to zero at the optimal rate. Importantly, the lower bound also acts as a natural stopping criterion for the algorithm: when f(x_k) − φ*_k ≤ ǫ, where ǫ > 0 is some stopping tolerance, the user has a certificate of ǫ-optimality, i.e., it is guaranteed that f(x_k) − f* ≤ ǫ. In practice, the OQA algorithm can be equipped with historical information to achieve further speed up. However, the OQA algorithm and its history-based variant need at least two calls to a line search process at every iteration, which can pose a heavy computational burden in terms of function evaluations. The authors in [7] also briefly describe how their unaccelerated OQA algorithm can be extended to composite functions, and left as an open problem the possibility of deriving accelerated proximal variants.
Very recently, the authors of [5] successfully addressed the open problem in [7] and presented an accelerated algorithm for composite problems of the form (1), that achieves the optimal linear rate of convergence. Their algorithm, called the geometric proximal gradient (GeoPG) method also has a satisfying geometrical interpretation similar to that in [3]. Unfortunately, a major drawback of GeoPG in [5] is that the algorithm is rather complicated, and requires a couple of inner loops to determine necessary algorithm parameters. For example, for GeoPG one must find the root of a specific function and one is also required to compute a minimum enclosing ball via some iterative process; both of these steps must be carried out at every iteration, which is expensive.
In this paper we propose several new algorithms to solve problem (1) that are motivated by, and extend, the previously mentioned works. In particular, we present four algorithms: a Gradient Descent (GD) type algorithm for smooth problems, an accelerated GD type algorithm for smooth problems, a proximal GD type algorithm for composite problems, and an accelerated proximal GD type algorithm for composite problems. Our algorithms all converge linearly, and the accelerated variants converge at the optimal linear rate. These algorithms blend the positive features of Nesterov's methods [17,19,21] and the OQA algorithm [7], and thus enjoy the advantages of both approaches. First, similarly to Nesterov's methods, no line search is needed by any of our algorithms as long as we make the standard assumption that the Lipschitz constant L is known or is easily computable. Hence, there are no 'inner-loops' in any of our algorithm variants, which ensures that the computational cost is low and is fixed at every iteration. Secondly, our algorithms incorporate quadratic lower bounds so they have natural stopping conditions; a feature that is similar to OQA. However, our algorithms update the quadratic lower bound at each iteration by taking a convex combination of the previous two lower bounds, which is different from OQA.
Another contribution of this work is that we also propose the concept of an UnderEstimate Sequence (UES), which is a natural extension of Nesterov's Estimate Sequence [16]. Perhaps surprisingly, estimate sequences initially appeared to be largely overlooked, but since Nesterov's work on smoothing techniques in the early 2000s [18], they have seen a significant revival in popularity. Examples include the work of Baes in [2], the development of a randomized estimate sequence in [13], and an approximate estimate sequence in [12]. To the best of our knowledge, this is the first work which proposes estimate sequences that form lower bounds on the objective function. The UES framework is the powerhouse of our convergence analysis; we prove that each of our proposed algorithms generates a UES, and subsequently the algorithms converge (linearly) to the optimal solution of problem (1). While we describe 4 new algorithms in this work, we stress that the UES framework is general, and it allows a plethora of algorithms to be developed. Moreover, any developed algorithm whose iterates generate a UES is guaranteed to converge linearly to the optimal solution F*.
Contributions
In this section we state the main contributions of this paper (listed in no particular order).
- Underestimate Sequence. We introduce the concept of an UnderEstimate Sequence (UES), which extends Nesterov's work on estimate sequences in [16]. The UES consists of three sequences, {φ_k(x)}_{k=0}^∞, {x_k}_{k=0}^∞ and {α_k}_{k=0}^∞, where each φ_k(x) is a global lower bound on the objective function F(x). While there have been several extensions and variants of Nesterov's work [16], to the best of our knowledge this is the first time that the estimate sequence framework has been adapted to act as a lower bound or under estimate of F(x). The UES framework is general, conceptually simple, and it allows the construction of a wide variety of algorithms to solve (1).
-New algorithms. We present 4 new algorithms that are computationally efficient and adhere to the UES framework. Crucially, two of our algorithms solve the composite problem (1). The algorithms are: (i) SUESA, a GD type algorithm for smooth problems; (ii) ASUESA, an accelerated GD type algorithm for smooth problems; (iii) CUESA, a proximal GD type algorithm for composite problems, and (iv) ACUESA, an accelerated proximal GD type algorithm for composite problems.
- Algorithms with optimal convergence rate. Each of the algorithms generates iterates that form a UES, so all four algorithms are guaranteed to converge linearly to the optimal solution of (1). Moreover, the accelerated algorithms (ASUESA and ACUESA) are guaranteed to converge linearly at the optimal rate.
-Algorithms with convergence certificates. The underestimate sequence builds a global lower bound of F (x) at each iteration, and the gap between the (minimum of the) lower bound and F (x k ) tends to zero. Thus, this difference acts as a kind of surrogate "duality gap", and once this gap falls below some (user defined) stopping tolerance ǫ, it is guaranteed that the point returned by the algorithm is ǫ-optimal.
- No line search. The algorithms developed in this work are computationally efficient and do not involve any 'inner loops'. In contrast, the methods in [3,7,5] all involve an exact line search or a root finding process to determine necessary algorithmic parameters, which comes with an additional computational cost.
Paper Outline
The paper is organized as follows. In the next section we introduce the concept of an Underestimate Sequence (UES), and present a proposition which shows that if one has a UES, then it is guaranteed that F(x_k) − F* → 0 at a linear rate. Section 3 is dedicated to the discussion of lower bounds for the function F(x) (in both the smooth and composite cases), and these lower bounds are a critical part of the underestimate sequences framework. In Section 4 we propose two algorithms for solving (1) in the smooth case (h = 0) and in Section 5 we present two algorithms for solving composite problems of the form (1) (h ≠ 0). All algorithms in Sections 4 and 5 are supported by convergence theory, which shows that they are guaranteed to converge to the optimal solution of (1) at a linear rate. In Section 6 we present another algorithm which uses an adaptive Lipschitz constant, rather than the true Lipschitz constant. Section 7 presents numerical experiments to demonstrate the practical advantages of our proposed algorithms, and we give concluding remarks in Section 8.
Underestimate Sequence
In this section, we present the definition of an Underestimate Sequence (UES) and a proposition showing that if one has a UES then F (x k ) − F * → 0.
Definition 1 A triple of sequences {φ_k(x)}_{k=0}^∞, {x_k}_{k=0}^∞ and {α_k}_{k=0}^∞, where α_k ∈ (0, 1) for all k ≥ 0, is called an Underestimate Sequence (UES) of the function F(x) if, for all x ∈ R^n and for all k ≥ 0, we have

φ_k(x) ≤ F(x)  (4)

and

F(x_{k+1}) − φ*_{k+1} ≤ (1 − α_k)(F(x_k) − φ*_k),  (5)

where φ*_k := min_x φ_k(x).

Definition 1 is different from Nesterov's Estimate Sequence (ES) in two ways. Firstly, both our UES and Nesterov's ES contain a sequence of estimators {φ_k(x)}_{k=0}^∞ for F(x), and φ*_k converges to F* as k increases. However, in Definition 1 φ_k(x) must be a lower/under estimator of F(x) for all k ≥ 0, while this does not necessarily hold for an ES. Nesterov's proof is based on the fact that F(x_k) ≤ φ*_k, but this does not hold in our case. Secondly, the definition of an ES only contains two sequences, while the UES has an extra sequence of points {x_k}_{k=0}^∞. This enables us to show that the gap between the function value at x_k and φ*_k decreases at each iteration.

Proposition 1 If {φ_k(x)}_{k=0}^∞, {x_k}_{k=0}^∞ and {α_k}_{k=0}^∞ form a UES of F, then for all k ≥ 1,

F(x_k) − F* ≤ F(x_k) − φ*_k ≤ λ_k (F(x_0) − φ*_0), where λ_k := Π_{i=0}^{k−1} (1 − α_i).  (6)

Proposition 1 shows that any sequences that form a UES (i.e., any sequences that satisfy Definition 1) are guaranteed to converge to the optimal solution of problem (1), and the estimate of the duality gap F(x_k) − φ*_k is also guaranteed to converge at a linear rate. Thus, the UES construction provides a general framework for determining whether an optimization algorithm for problem (1) will converge (linearly). In particular, if the iterates generated by an optimization algorithm satisfy Definition 1, then that algorithm is not only convergent, but also achieves a linear rate of convergence.
The UES framework is not only interesting from a theoretical perspective, but it also provides a major practical advantage. In particular, F (x k )−φ * k provides a natural stopping criterion when designing algorithms, due to the fact that F (x k ) and φ * k are upper and lower bounds for F * , respectively. This difference is a kind of surrogate for the duality gap, and subsequently, algorithms that adhere to the UES framework are provided with a certificate of optimality, which is a highly desirable attribute.
Lower Bounds via Quadratic Averaging
The purpose of this section is to introduce (global) lower bounds for the function F (x) defined in (1), in both the smooth (h(x) = 0) and nonsmooth cases. Lower bounds are the cornerstone of the UES set up, as seen in (4) in Definition 1. Being able to efficiently construct global lower bounds for F (x) will allow the development of practical algorithms whose convergence is guaranteed via the UES framework.
Before stating the lower bounds, several technical results are presented that will be used throughout this paper.
Preliminary Technical Results
The proximal map is defined as

prox_{γ,h}(x) := argmin_{y ∈ R^n} { h(y) + (γ/2)‖y − x‖² },  (7)

and the proximal gradient is

G_γ(x) := γ ( x − prox_{γ,h}(x − (1/γ)∇f(x)) ).  (8)

Definitions (7) and (8) will be used with γ ≡ L. Given some point x ∈ R^n, a short step and a long step are denoted by

x^+ := x − (1/L) G_L(x) = prox_{L,h}(x − (1/L)∇f(x)),  (9)
x^{++} := x − (1/µ) G_L(x).  (10)

In the smooth case (h ≡ 0), the proximal gradient is simply the gradient ∇f(·), so the short and long steps ((9) and (10)) simplify to

x^+ = x − (1/L)∇f(x),  (11)
x^{++} = x − (1/µ)∇f(x).  (12)

The following Lemma characterizes elements of the subdifferential of h(x^+).
Lemma 1 For any x ∈ R^n, it holds that G_L(x) − ∇f(x) ∈ ∂h(x^+).

Proof For a given point x ∈ R^n,

x^+ = arg min_{y ∈ R^n} { h(y) + (L/2)‖y − (x − (1/L)∇f(x))‖² }.

This gives the first-order optimality condition

0 ∈ (1/L)∂h(x^+) + x^+ − (x − (1/L)∇f(x)).

Multiplying through by L, and rearranging, gives L(x − x^+) − ∇f(x) = G_L(x) − ∇f(x) ∈ ∂h(x^+), which is the result. ⊓⊔
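As a concrete illustration (a minimal sketch, not taken from the paper), for the ℓ1 regularizer h(x) = λ‖x‖₁ the proximal map (7) is the familiar soft-thresholding operator, and the proximal gradient (8) with γ = L can then be computed directly:

```python
import numpy as np

def prox_l1(x, lam, gamma):
    """Proximal map (7) for h = lam * ||.||_1: soft-thresholding with threshold lam/gamma."""
    return np.sign(x) * np.maximum(np.abs(x) - lam / gamma, 0.0)

def prox_gradient(x, grad_f, lam, L):
    """Proximal gradient G_L(x) from (8), with gamma = L."""
    x_plus = prox_l1(x - grad_f(x) / L, lam, L)  # this is the short step x^+ in (9)
    return L * (x - x_plus)

# Tiny example: f(x) = 0.5 ||x||^2, so grad f(x) = x, and L = mu = 1.
grad_f = lambda x: x
x = np.array([3.0, -0.2, 1.5])
print(prox_gradient(x, grad_f, lam=0.5, L=1.0))
```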
A Lower Bound for Smooth Functions
For any point y ∈ R^n, one can define a lower bound

φ(x; y) := f(y) + ⟨∇f(y), x − y⟩ + (µ/2)‖x − y‖² ≤ f(x),  (13)

which holds with equality φ(x; y) = f(x) if and only if x = y. The lower bound in (13) is a consequence of the assumption in (2) and the equivalence

f(y) + ⟨∇f(y), x − y⟩ + (µ/2)‖x − y‖² = f(y) − (1/(2µ))‖∇f(y)‖² + (µ/2)‖x − y^{++}‖².  (14)

Now, a sequence of lower bounds {φ_k(x)}_{k=0}^∞ can be defined in the following way. Using (13) and a given initial point x_0, define the function

φ_0(x) := φ(x; x_0) = φ*_0 + (µ/2)‖x − v_0‖²,  (15)

where

v_0 := x_0^{++} and φ*_0 := f(x_0) − (1/(2µ))‖∇f(x_0)‖².  (16)

Differentiating the expression in (15) w.r.t. x shows that φ*_0 and v_0 in (16) are the minimum value and minimizer of φ_0(x), respectively. This motivates the following construction: for k ≥ 0, α_k ∈ (0, 1), and some point y_k between x_k and v_k, recursively define

φ_{k+1}(x) := (1 − α_k) φ_k(x) + α_k φ(x; y_k).  (17)

Lemma 2 For all k ≥ 0, φ_{k+1} can be written in the canonical form

φ_{k+1}(x) = φ*_{k+1} + (µ/2)‖x − v_{k+1}‖²,  (18)

where α_k ∈ (0, 1) and

v_{k+1} = (1 − α_k) v_k + α_k y_k^{++},  (19)
φ*_{k+1} = (1 − α_k) φ*_k + α_k ( f(y_k) − (1/(2µ))‖∇f(y_k)‖² ) + (µ α_k (1 − α_k)/2) ‖v_k − y_k^{++}‖².  (20)

Proof Using the definitions (15) and (17), for all k ≥ 0 and all x ∈ R^n, φ_{k+1}(x) can be expressed in the form

φ_{k+1}(x) = (1 − α_k)( φ*_k + (µ/2)‖x − v_k‖² ) + α_k ( f(y_k) − (1/(2µ))‖∇f(y_k)‖² + (µ/2)‖x − y_k^{++}‖² ).  (21)

By taking the derivative of (21) w.r.t. x, we see that the minimizer of φ_{k+1}(x) is v_{k+1} as defined in (19). Substituting this minimizer into (21) gives the minimum value φ*_{k+1} as in (20). ⊓⊔
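The algebra of Lemma 2 is easy to check numerically. The following snippet (an illustration, with made-up stand-in values for the minimum values) verifies that the convex combination of two quadratics with identical curvature µ has minimizer (19) and minimum value (20):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, alpha = 0.5, 0.3
v_k, y_pp = rng.normal(size=3), rng.normal(size=3)
phi_k_star, phi_y_star = -2.0, -1.5   # stand-ins for phi*_k and f(y_k) - ||grad f(y_k)||^2/(2 mu)

q = lambda x, c, s: s + mu / 2 * np.sum((x - c) ** 2)   # quadratic with curvature mu
combo = lambda x: (1 - alpha) * q(x, v_k, phi_k_star) + alpha * q(x, y_pp, phi_y_star)

v_next = (1 - alpha) * v_k + alpha * y_pp                      # (19)
phi_next_star = ((1 - alpha) * phi_k_star + alpha * phi_y_star
                 + mu * alpha * (1 - alpha) / 2 * np.sum((v_k - y_pp) ** 2))  # (20)

assert np.isclose(combo(v_next), phi_next_star)                # minimum value matches (20)
x = rng.normal(size=3)
assert combo(x) >= phi_next_star - 1e-12                       # v_next is the minimizer (19)
print("Lemma 2 identities verified")
```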
The following Lemma shows that φ_k(x) is a (global) lower bound for f(x).

Lemma 4 Let φ_k(x) be defined via (15) and (17). Then, for all k ≥ 0 and all x ∈ R^n, φ_k(x) ≤ f(x).

Proof We proceed by induction. When k = 0, the result holds trivially by (13). Now assume that φ_k(x) ≤ f(x) for some k ≥ 0. Then, by (17) and (13),

φ_{k+1}(x) = (1 − α_k) φ_k(x) + α_k φ(x; y_k) ≤ (1 − α_k) f(x) + α_k f(x) = f(x). ⊓⊔
A Lower Bound for Composite Functions
Here, the previous results are extended from the smooth to the composite setting, so it is assumed that h(x) is not equivalent to the zero function.
The following Lemma defines a lower bound for F (x) in (1). The lower bound is the same as that presented in [7] and [5], with the roles of x and y reversed here; the proof is included for completeness.
Lemma 5 (Lemma 6.1 in [7]; Lemma 3.1 in [5]) Given a point y ∈ R^n, let G_L(y) and y^+ be defined in (8) and (9), respectively. Then for all x ∈ R^n,

ϕ(x; y) := F(y^+) + ⟨G_L(y), x − y⟩ + (1/(2L))‖G_L(y)‖² + (µ/2)‖x − y‖² ≤ F(x).  (25)

Proof By Assumption 1 (µ-strong convexity) and (L-smoothness),

f(x) ≥ f(y) + ⟨∇f(y), x − y⟩ + (µ/2)‖x − y‖²,  (26)

and, by Lemma 1 and the convexity of h, for s := G_L(y) − ∇f(y) ∈ ∂h(y^+),

h(x) ≥ h(y^+) + ⟨s, x − y^+⟩, with f(y^+) ≤ f(y) + ⟨∇f(y), y^+ − y⟩ + (L/2)‖y^+ − y‖².  (27)

Combining (26) and (27), and using y^+ − y = −(1/L)G_L(y), gives (25). ⊓⊔

Before stating the next result, which shows that ϕ(x; y) is a quadratic lower bound, we give the following equivalence, which is the composite version of (14): for all x, y ∈ R^n,

ϕ(x; y) = F(y^+) + (1/(2L))‖G_L(y)‖² − (1/(2µ))‖G_L(y)‖² + (µ/2)‖x − y^{++}‖².  (28)

Lemma 6 For all x, y ∈ R^n, the lower bound (25) has the canonical form

ϕ(x; y) = ϕ*(y) + (µ/2)‖x − y^{++}‖²,  (29)

where

y^{++} = y − (1/µ) G_L(y),  (30)
ϕ*(y) = F(y^+) − (1/(2µ))(1 − µ/L)‖G_L(y)‖².  (31)

Proof Minimizing ϕ(x; y) in (25) w.r.t. x, and using the definition in (7), yields the minimizer y^{++} = arg min_x ϕ(x; y) in (30). The corresponding minimal value is

ϕ(y^{++}; y) = F(y^+) + (1/(2L))‖G_L(y)‖² − (1/(2µ))‖G_L(y)‖²,

which is equivalent to (31). (Note also that (30) and (31) are the minimizer and minimum value of (29), respectively.) Furthermore, completing the square in (25) confirms that (29) is equivalent to (25). ⊓⊔

Remark 1 Lemma 6 shows that the lower bound (25) (equivalently (29)) is a quadratic lower bound for F(x).

Now, a sequence of lower bounds {ϕ_k(x)}_{k=0}^∞ can be defined in the following way. Using (25) and a given initial point x_0, define the function

ϕ_0(x) := ϕ(x; x_0) = ϕ*_0 + (µ/2)‖x − v_0‖²,  (32)

where

v_0 := x_0^{++} and ϕ*_0 := ϕ*(x_0) = F(x_0^+) − (1/(2µ))(1 − µ/L)‖G_L(x_0)‖².  (33)

Differentiating (32) w.r.t. x shows that the minimum value and minimizer of ϕ_0(x) are given by (33). This motivates the following construction: for k ≥ 0 and some point y_k between x_k and v_k, we recursively define

ϕ_{k+1}(x) := (1 − α_k) ϕ_k(x) + α_k ϕ(x; y_k).  (34)

Lemma 7 For all k ≥ 0, ϕ_{k+1} can be written in the canonical form

ϕ_{k+1}(x) = ϕ*_{k+1} + (µ/2)‖x − v_{k+1}‖²,  (35)

where

v_{k+1} = (1 − α_k) v_k + α_k y_k^{++},  (36)
ϕ*_{k+1} = (1 − α_k) ϕ*_k + α_k ϕ*(y_k) + (µ α_k (1 − α_k)/2)‖v_k − y_k^{++}‖².  (37)

Proof Using (34) and (35), for all k ≥ 0 and all x ∈ R^n, ϕ_{k+1}(x) can be expressed as a convex combination of two quadratics with identical curvature µ. Taking the derivative of (35) w.r.t. x shows that the minimizer of ϕ_{k+1}(x) is v_{k+1} in (36), and substituting it back gives ϕ*_{k+1} in (37). ⊓⊔

The next result is the composite analogue of Lemma 3. Its proof follows the same arguments as for Lemma 3: noting that (19) and (36) are equivalent, and then combining (24) and (37) gives the result. ⊓⊔

The following Lemma shows that ϕ_k(x) is a (global) lower bound for F(x).

Lemma 9 For all k ≥ 0 and all x ∈ R^n, ϕ_k(x) ≤ F(x).

Proof When k = 0, the result holds trivially by Lemma 5. Now assume that ϕ_k(x) ≤ F(x). Then, by (34) and Lemma 5,

ϕ_{k+1}(x) = (1 − α_k) ϕ_k(x) + α_k ϕ(x; y_k) ≤ (1 − α_k) F(x) + α_k F(x) = F(x).

This completes the proof. ⊓⊔
Algorithms and Convergence Guarantees for Smooth Functions
The purpose of this section is to demonstrate that the UES framework, and the previously presented lower bounds, are useable definitions that give rise to efficient implementable algorithms. Throughout this section we consider smooth optimization problems (problems of the form (1) with h ≡ 0) and, as for all results in this work, we suppose that Assumption 1 holds.
We present two algorithms whose iterates fit the Underestimate Sequence framework described in Section 2, and use the lower bounds developed in Section 3.2. The first algorithm is a gradient descent type method, while the second algorithm is a gradient descent type method that incorporates an acceleration strategy. As will be shown, both algorithms are supported by convergence guarantees, which are established via the UES framework.
An Underestimate Sequence Algorithm for Smooth Functions
We are now ready to present an algorithm that fits our UES framework; a brief description follows.
Algorithm 1 (SUESA)
1: Initialization: Set k = 0, ǫ > 0, choose an initial point x_0 ∈ R^n and compute µ, L. Set α_k = µ/L.
2: Set φ_0(x) as in (15), with v_0 and φ*_0 as in (16).
3: while f(x_k) − φ*_k > ǫ do
4:   Set y_k = x_k and y_k^{++} = x_k^{++}.
5:   x_{k+1} = x_k − (1/L)∇f(x_k).
6:   Update v_{k+1} and φ*_{k+1} as in (19) and (20), respectively.
7:   k = k + 1.
8: end while

The Smooth (functions) UnderEstimate Sequence Algorithm (SUESA), presented in Algorithm 1, solves problem (1) in the smooth case, i.e., when h = 0. The algorithm proceeds as follows. First, an initial point x_0 ∈ R^n is chosen, as well as some stopping tolerance ǫ > 0. Secondly, the point v_0 = x_0^{++} (i.e., v_0 is the long step from x_0) is constructed, as well as the lower bound φ_0(x) with minimum value φ*_0. The algorithm uses a fixed step size of α = µ/L at every iteration. Next, the main loop commences and an iteration proceeds as follows: one sets y_k = x_k (i.e., y_k is not explicitly used in SUESA); x_k is updated by taking a gradient descent step with step size 1/L, resulting in the new point x_{k+1}; the point v_{k+1} is constructed via (19) and the lower bound φ_{k+1}(x) is updated.
The algorithm constructs two points at every iteration, namely x k and v k , and the values φ k (x) and φ * k . The point v k and the value φ * k are used for the lower bound, which is essential for the stopping criterion. The stopping condition f (x k )− φ * k ≤ ǫ provides a certificate of optimality; once the stopping condition is satisfied, it is guaranteed that x k gives a function value f (x k ) that is at most ǫ from the true solution f * .
If Step 5 is considered in isolation, then one sees that at every iteration of SUESA the point x_k is updated via a standard gradient descent step. That is, a step of size 1/L in the direction of the negative gradient is taken from the current point x_k, resulting in the new point x_{k+1}. However, SUESA is different from the standard gradient descent method, because SUESA also involves several other ingredients, including the points v_k and lower bound values φ*_k.
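To make the construction concrete, the following is a minimal sketch of SUESA in Python (an illustration under our notation, not the authors' code), applied to a simple strongly convex quadratic; the lower-bound update follows (19) and (20) with y_k = x_k:

```python
import numpy as np

def suesa(f, grad, x0, mu, L, eps=1e-8, max_iter=10_000):
    """Minimal sketch of SUESA (Algorithm 1) for a mu-strongly convex, L-smooth f."""
    alpha = mu / L
    x = x0.astype(float)
    g = grad(x)
    v = x - g / mu                                  # v_0 = x_0^{++}, the long step (12)
    phi_star = f(x) - g.dot(g) / (2 * mu)           # phi*_0 as in (16)
    for _ in range(max_iter):
        if f(x) - phi_star <= eps:                  # certificate of eps-optimality
            break
        y, gy = x, grad(x)                          # y_k = x_k in SUESA
        y_pp = y - gy / mu                          # long step y_k^{++}
        x = y - gy / L                              # Step 5: gradient descent step
        phi_star = ((1 - alpha) * phi_star          # update (20), using the old v_k
                    + alpha * (f(y) - gy.dot(gy) / (2 * mu))
                    + mu * alpha * (1 - alpha) / 2 * np.sum((v - y_pp) ** 2))
        v = (1 - alpha) * v + alpha * y_pp          # update (19)
    return x, f(x) - phi_star

# Example: f(x) = 0.5 x^T A x with A = diag(mu, L)
mu, L = 0.1, 1.0
A = np.diag([mu, L])
f = lambda x: 0.5 * x.dot(A @ x)
grad = lambda x: A @ x
x_opt, gap = suesa(f, grad, np.array([5.0, -3.0]), mu, L)
print(x_opt, gap)
```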
The following result provides a convergence guarantee for SUESA. In particular, Theorem 1 shows that the iterates generated by Algorithm 1 form an underestimate sequence (i.e., they satisfy Definition 1) and therefore, Algorithm 1 is guaranteed to converge (linearly) to the solution of problem (1).
Theorem 1 Let Assumption 1 hold. Then the sequences {φ_k(x)}_{k=0}^∞, {x_k}_{k=0}^∞ and {α_k}_{k=0}^∞ generated by SUESA (Algorithm 1) form a UES of f.

Proof We must show that the iterates generated by Algorithm 1 satisfy the conditions of Definition 1. Note that α_k = µ/L ∈ (0, 1) for all k ≥ 0, so by Lemma 4, (4) holds. Thus, it remains to prove (5). Combining the definition of x_{k+1} (Step 5 in Algorithm 1) with (3) gives

f(x_{k+1}) ≤ f(x_k) − (1/(2L))‖∇f(x_k)‖².

Subtracting φ*_{k+1} in (20) (with y_k = x_k) from both sides of the above gives

f(x_{k+1}) − φ*_{k+1} ≤ (1 − α_k)(f(x_k) − φ*_k) + (α_k/(2µ) − 1/(2L))‖∇f(x_k)‖² − (µ α_k (1 − α_k)/2)‖v_k − x_k^{++}‖² ≤ (1 − α_k)(f(x_k) − φ*_k),

where the last inequality follows because α_k = µ/L, so α_k/(2µ) − 1/(2L) = 0, and the final quadratic term is nonnegative. Hence (5) holds. ⊓⊔

Corollary 1 Let Assumption 1 hold. The sequences {φ_k(x)}_{k=0}^∞, {x_k}_{k=0}^∞ and {α_k}_{k=0}^∞ generated by SUESA (Algorithm 1) form a UES, so SUESA converges at the linear rate

f(x_k) − f* ≤ f(x_k) − φ*_k ≤ (1 − µ/L)^k ( f(x_0) − φ*_0 ).

Corollary 1 is simply a consequence of Proposition 1, which states that if the sequences form an underestimate sequence, then (6) holds (i.e., linear convergence). Theorem 1 shows that SUESA (Algorithm 1) generates iterates forming a UES, implying that SUESA converges linearly to the optimal solution. Moreover, α_k = µ/L for all k in SUESA, so recalling the definition of λ_k in Proposition 1 confirms the rate (1 − µ/L) in Corollary 1. We remark that there are other ways to prove convergence of Algorithm 1. For example, one can proceed by proving that the distance between x_k and the minimizer of the lower bound shrinks at a fixed rate in each iteration. That is, since α_k = µ/L, we have the following equality:

x_{k+1} − v_{k+1} = (1 − µ/L)(x_k − v_k).  (41)

Equation (41) illustrates that, after each iteration of Algorithm 1, the line joining x_{k+1} and v_{k+1} is parallel to the line joining x_k and v_k from the previous iteration (see the blue lines in Figure 1). Moreover, the distance between the two points is reduced by precisely the factor (1 − µ/L) at every iteration. Intuitively, the solution x_k and the minimizer v_k are becoming ever closer, and eventually they both converge to the optimal solution x*.
One can visualize the fact above using the following toy example. Consider the (smooth) regularized logistic regression problem, i.e., problem (1) with h = 0 and

f(x) = (1/m) Σ_{i=1}^m log(1 + exp(−y_i⟨a_i, x⟩)) + (λ/2)‖x‖²,

where a_i ∈ R^n is the i-th feature vector with corresponding (binary) label y_i ∈ R. For this example we randomly generate 100 two-dimensional data points with binary labels {a_i, y_i} (so m = 100 and n = 2), as shown in the left-hand plot in Figure 1. (Each point a_i is plotted on a 2D grid, and the point is colored green or red to highlight its label y_i.) The parameter λ = 0.01, so the strong convexity constant is µ = 0.01. Algorithm 1 is used to solve this problem, starting from the point x_0 = (−20, 10)^T, and the iterates are shown in the right-hand plot in Figure 1.
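A sketch of this toy setup, runnable together with the suesa sketch above; the data-generation recipe (labels from a random hyperplane) is an assumption, since the paper does not specify it.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, lam = 100, 2, 0.01
A = rng.normal(size=(m, n))                           # feature vectors a_i
y = np.where(A @ rng.normal(size=n) > 0, 1.0, -1.0)   # binary labels y_i

def f(x):
    # (1/m) sum log(1 + exp(-y_i <a_i, x>)) + (lam/2)||x||^2
    return np.mean(np.logaddexp(0.0, -y * (A @ x))) + 0.5 * lam * (x @ x)

def grad(x):
    s = 1.0 / (1.0 + np.exp(y * (A @ x)))             # sigmoid(-y_i <a_i, x>)
    return -(A.T @ (y * s)) / m + lam * x

L = np.linalg.norm(A, 2) ** 2 / (4 * m) + lam         # Lipschitz bound for grad f
x_opt, lower_bound, iters = suesa(f, grad, np.array([-20.0, 10.0]), mu=lam, L=L)
```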
(Algorithm 2 (ASUESA) listing. 1: Initialization: Set k = 0, ε > 0, initial point x_0 ∈ R^n and compute µ, L. 2: Set φ_0(x) as in (15), with v_0 and φ_0^* as in (16). Let α_k = √(µ/L), β_k = 1/(1 + α_k). Steps 3-6: the main loop, as described below. 7: k = k + 1. 8: end while.)

Algorithm 2 (ASUESA) can be described as follows. Algorithm initialization is similar to that of SUESA (Algorithm 1): an initial point x_0 ∈ R^n and some stopping tolerance ε > 0 are chosen, the point v_0 = x_0^{++} is constructed, and the lower bound φ_0(x) and minimum value φ_0^* are evaluated. For ASUESA one sets α_k = √(µ/L), and the parameter β_k = 1/(1 + α_k) is also used. Parameter α_k is fixed for all iterations, and subsequently so too is β_k. The main loop proceeds as follows. At every iteration one sets y_k to be a convex combination of the points x_k and v_k; a gradient descent step is taken from y_k, resulting in the new point x_{k+1}; the point v_{k+1} is constructed using (19) and the lower bound φ_{k+1}(x) is updated via (20).
Notice that Algorithm 2 can be viewed as an accelerated version of Algorithm 1. In contrast to Algorithm 1, ASUESA constructs three points at every iteration, namely x_k, v_k and y_k, where the intermediate vector y_k is a convex combination of the points x_k and v_k (i.e., for ASUESA x_k ≠ y_k in general). Notice also that x_{k+1} is the result of a gradient descent step taken from the point y_k. The value φ_k^* is also maintained and is used in the stopping condition.
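The following sketch mirrors the suesa sketch above with the two accelerated ingredients. The convex combination y_k = β_k x_k + (1 − β_k) v_k and the value α_k = √(µ/L) are inferred from the identities used later in the proofs (β_k = 1/(1 + α_k) and 1/(2L) − α_k²/(2µ) = 0); they are assumptions about Step 6 of Algorithm 2, not quotations of it.

```python
import numpy as np

def asuesa(f, grad, x0, mu, L, eps=1e-8, max_iter=10_000):
    alpha = np.sqrt(mu / L)                  # assumed accelerated parameter
    beta = 1.0 / (1.0 + alpha)
    x = x0.copy()
    g = grad(x)
    v = x - g / mu                           # long step x0^{++}
    phi_star = f(x) - (g @ g) / (2 * mu)
    k = 0
    while f(x) - phi_star > eps and k < max_iter:
        y = beta * x + (1 - beta) * v        # assumed form of Step 6
        gy = grad(y)
        x = y - gy / L                       # gradient step from y_k
        y_pp = y - gy / mu
        lower = f(y) - (gy @ gy) / (2 * mu)
        phi_star = ((1 - alpha) * phi_star + alpha * lower
                    + alpha * (1 - alpha) * (mu / 2) * np.sum((v - y_pp) ** 2))
        v = (1 - alpha) * v + alpha * y_pp   # minimizer of the averaged bound
        k += 1
    return x, phi_star, k
```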
The following result provides a convergence guarantee for ASUESA. Theorem 2 shows that the iterates generated by Algorithm 2 fit the UES framework (i.e., they satisfy Definition 1) and therefore, Algorithm 2 is guaranteed to converge (linearly at the optimal rate) to the solution of problem (1) (see Corollary 2).
Proof At every iteration of ASUESA the gradient step from y_k reduces the function value, giving f(x_{k+1}) ≤ f(y_k) − ‖∇f(y_k)‖²/(2L). Combining this with the lower-bound update and completing the square yields (43), where the final simplification follows because α_k = √(µ/L) in ASUESA, so 1/(2L) − α_k²/(2µ) = 0. Now, rearranging the expression for y_k in Step 6 of Algorithm 2 gives (44), and noticing that β_k = 1/(1 + α_k) for all k gives (45). Thus, by the convexity of f we obtain (46), and using (46) in (43) shows that the iterates form a UES. ⊓⊔

Corollary 2 Let Assumption 1 hold. Then the sequence of iterates {x_k}_{k≥0} generated by Algorithm 2 exhibits the optimal linear rate of convergence (1 − √(µ/L)).

Corollary 2 shows that ASUESA converges linearly at the optimal rate. The difference in convergence rates between Algorithms 1 and 2 is essentially explained by the quadratic term ‖v_k − y_k^{++}‖², which is entirely ignored in the proof of Theorem 1. Thus, in the proof of Theorem 2, one is able to incorporate another term containing ‖∇f(y_k)‖², which leads to a larger allowable value of α_k and, ultimately, a tighter bound for Algorithm 2.
Algorithms and Convergence Guarantees for Composite Functions
The purpose of this section is to extend the results presented in Section 4 from the smooth to the composite setting, i.e., here we suppose that h(x) ≢ 0. In particular, we present two algorithms whose iterates fit the Underestimate Sequence framework described in Section 2, and use the lower bounds developed in Section 3.3. Both algorithms fit the composite setting very naturally; the first algorithm is a proximal gradient descent type method, while the second algorithm is an accelerated proximal gradient variant. The algorithms also incorporate stopping conditions that provide a certificate of optimality. We establish convergence guarantees for both algorithms via the UES framework, and for all results we suppose that Assumption 1 holds.
A Composite Underestimate Sequence Algorithm
We now present an algorithm to solve (1), which is based on the UES framework. A brief description follows.
(Algorithm 3 (CUESA) listing: initialization followed by a while loop, ending with 8: end while; the steps are described below.)
The Composite (functions) UnderEstimate Sequence Algorithm (CUESA) presented in Algorithm 3 solves problem (1) when h ≢ 0. The algorithm is described now. First, an initial point x_0 ∈ R^n is chosen, as well as some stopping tolerance ε > 0. Secondly, the point v_0 = x_0^{++} is constructed, as well as the lower bound ϕ_0(x) with minimum value ϕ_0^*. The algorithm uses a fixed parameter α_k = µ/L at every iteration. Next, the main loop commences and an iteration proceeds as follows. One sets y_k = x_k (so y_k is not explicitly used in CUESA); x_k is updated by taking a proximal gradient descent step with step size 1/L, resulting in the new point x_{k+1}; the point v_{k+1} = x_{k+1}^{++} is constructed and the lower bound ϕ_{k+1}(x) is updated.
The algorithm utilizes two points at every iteration, namely x_k and v_k, as well as the values ϕ_k(x) and ϕ_k^*. The point v_k and the value ϕ_k^* are used for the lower bound, which is essential for the stopping criterion.
Considering only Step 5, one sees that at every iteration of CUESA the point x_k is updated via a proximal gradient descent step. That is, a step of size 1/L in the direction of the negative proximal gradient is taken from the current point x_k, resulting in the new point x_{k+1}. What makes CUESA distinct from a standard proximal gradient method is the inclusion of several other ingredients related to the lower bound ϕ_k(x), which guarantee an ε-optimal solution. Now we present a convergence guarantee for CUESA. Theorem 3 shows that the iterates generated by Algorithm 3 form an underestimate sequence (i.e., they satisfy Definition 1) and therefore Algorithm 3 is guaranteed to converge (linearly) to the solution of problem (1).
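For concreteness, here is the proximal gradient step of Step 5 in Python, with h taken to be λ_1‖·‖_1 as an example (its proximal operator is soft-thresholding); the paper's h is a general convex function, so this specific choice is only illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    # proximal operator of t * ||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cuesa_step(x, grad_f, L, lam1):
    # one proximal gradient step of size 1/L (Step 5 of CUESA)
    return soft_threshold(x - grad_f(x) / L, lam1 / L)
```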
Theorem 3 Let Assumption 1 hold. Then the sequences {x_k}, {ϕ_k^*} and {α_k} generated by CUESA (Algorithm 3) form a UES.

Proof From Step 5 in CUESA, one sees that y_k = x_k for all k, so it also follows that y_k^+ = x_{k+1} for all k. Using y = x = x_k in the lower bound (25) and rearranging establishes the UES property, where the last step follows because α_k = µ/L in CUESA. ⊓⊔

Corollary 3 Let Assumption 1 hold. Then, the sequence of iterates {x_k}_{k≥0} generated by Algorithm 3 exhibits a linear rate of convergence (1 − µ/L).
An Accelerated Composite UES Algorithm
An accelerated algorithm for convex composite problems is now presented.
(Algorithm 4 (ACUESA) listing: initialization followed by a while loop, ending with 8: end while; the steps are described below.)
The Accelerated Composite UnderEstimate Sequence Algorithm (ACUESA) presented in Algorithm 4 solves (1) when h ≢ 0. The algorithm proceeds as follows. ACUESA is initialized with a starting point x_0 ∈ R^n, a stopping tolerance ε > 0 and the point v_0 = x_0^{++}, together with the construction of the lower bound ϕ_0(x) and minimum value ϕ_0^*. For ACUESA one sets α_k = √(µ/L), and the parameter β_k = 1/(1 + α_k) is also used. Notice that the parameters α_k and β_k are fixed for all iterations. The main loop proceeds as follows. At every iteration one sets y_k to be a convex combination of the points x_k and v_k; a proximal gradient step is taken from y_k, resulting in the new point x_{k+1}; the point v_{k+1} is constructed and the lower bound ϕ_{k+1}(x) is updated.
Algorithm 4 can be viewed as the accelerated version of Algorithm 3. In contrast to Algorithm 3, ACUESA constructs three points at every iteration, namely x_k, v_k and y_k, where the intermediate vector y_k is a convex combination of the points x_k and v_k (i.e., for ACUESA x_k ≠ y_k in general). Notice also that x_{k+1} is the result of a proximal gradient step taken from the point y_k. The value ϕ_k^* is also maintained and is used in the stopping condition. The following result provides a convergence guarantee for ACUESA. Theorem 4 shows that the iterates generated by Algorithm 4 fit the UES framework (i.e., they satisfy Definition 1), so Algorithm 4 is guaranteed to converge (linearly at the optimal rate) to the solution of problem (1) (see Corollary 4).
Proof From Step 7 in ACUESA, the proximal gradient step from y_k reduces the function value, giving (48). By considering the expression for y_k in Step 6 in Algorithm 4 and noticing that β_k = 1/(1 + α_k) for all k, (44) and (45) hold. Combining (44) and (10) gives (50), and substituting (50) into (49) yields a bound which, together with a rearrangement of the lower bound (25) and (48), shows that the iterates generated by ACUESA form a UES. ⊓⊔

Corollary 4 Let Assumption 1 hold. Then, the sequence of iterates {x_k}_{k≥0} generated by Algorithm 4 exhibits the optimal linear rate of convergence (1 − √(µ/L)).
An algorithm with adaptive L
In the algorithms presented so far, the Lipschitz constant L is explicitly used in each algorithm. However, by studying the convergence proofs for Algorithms 1-4 one notices that the role of the Lipschitz constant L is to enforce a reduction in the function value from one iteration to the next (see the first step in the proofs of Theorems 1-4). Thus, it is natural to ask the question, 'Can an adaptive Lipschitz constant, L_k say, be used in place of the true Lipschitz constant L?'. In this section we show that, using a strategy similar to that proposed by Nesterov in [19,21], it is possible to employ an adaptive Lipschitz constant while preserving convergence guarantees.
The Inequality
When the Lipschitz constant L is unknown, or is expensive to compute, it may be preferable to employ an 'adaptive' Lipschitz constant, L_k say, i.e., to determine a value L_k that approximates L locally. This approach has been previously studied by Nesterov in [19,21], and it has the added advantage that L_k may be smaller than the true Lipschitz constant L, which can lead to larger step sizes. Throughout the algorithm certain inequalities must hold to ensure that convergence guarantees are maintained. The relevant inequalities are as follows.
Smooth case. For smooth functions, (39) and (42) must hold for SUESA and ASUESA, respectively. This means that at every iteration, if L_k satisfies

f(y_k − (1/L_k)∇f(y_k)) ≤ f(y_k) − ‖∇f(y_k)‖²/(2L_k), (51)

then convergence guarantees for SUESA and ASUESA are maintained. If L_k is chosen to satisfy (51), then we show that we obtain the improvement α_k = µ/L_k (or α_k = √(µ/L_k) in the accelerated case) at every iteration.
Non-smooth case. For composite functions, (47) and (49) must hold for CUESA and ACUESA, respectively. So, if L_k satisfies the analogous sufficient-decrease inequality (52) for the proximal gradient step, then the algorithms are still guaranteed to converge. This also implies the improvement α_k = µ/L_k (or α_k = √(µ/L_k) in the accelerated case) at every iteration.
With these two inequalities in mind, the adaptive Lipschitz process can be described briefly as follows. When initializing Algorithms 1-4, choose an initial estimate L_0 > 0, together with increase and decrease factors u > 1 and d > 1, respectively. To find the appropriate L_k at iteration k, one starts with the value L_{k−1} (i.e., the adaptive Lipschitz constant from the previous iteration) and increases it via multiplication with u, or decreases it via division by d, until (51) (or (52)) is satisfied. At iteration k, once an L_k is found such that (51) (or (52) in the composite case) holds, the iteration proceeds with L_k used in place of L.
Note that, using this process, it is possible that at some iteration L_k < L, i.e., L_k may be smaller than L. In this case, the stepsize 1/L_k is used, which is larger than 1/L.
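A minimal sketch of the inner search for L_k in the smooth case, assuming (51) is the sufficient-decrease test written above; the first trial value L_{k−1}/d and the multiplicative increase by u follow the description in the text.

```python
import numpy as np

def find_adaptive_L(f, grad, y, L_prev, u=2.0, d=2.0):
    g = grad(y)
    gg = g @ g
    L = L_prev / d                             # first trial: decrease by d
    while f(y - g / L) > f(y) - gg / (2 * L):  # test (51)
        L *= u                                 # increase by u until (51) holds
    return L
```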
The pseudocode is presented in Algorithm 5. Note that determining the adaptive Lipschitz constant occurs as an inner loop within one of Algorithms 1-4. Thus, we use the iteration counter s in Algorithm 5 to distinguish it from the outer loop counter k. Note that the strategy above holds for Algorithms 2 and 4, but it is straightforward to adapt it to Algorithms 1 and 3 by modifying the variables α_s and β_s.

(Algorithm 5 listing, closing steps: 11: end if. 12: end for. 13: Output: L_k = L_s, α_k = α_s, β_k = β_s, y_k = y_s and x_{k+1} = x_s.)

We now present several theoretical results related to this setup.

Lemma 10 Let u, d > 1. If L_0 ≤ L then, for all k, L_k ≤ L · u. If L_0 ≥ L then, for all k ≥ 1, L_k ≥ L.

Proof Note that if L_k ≥ L, then L_k is guaranteed to satisfy (51) or (52). Now suppose that at the start of iteration k we have a trial value L_s < L. If L_s satisfies either (51) or (52), then one sets L_k = L_s and terminates the inner loop, noting that L_k < L. If L_s does not satisfy (51) or (52), then it is increased by multiplication with u. Suppose that L_s is the largest possible trial value satisfying L_s < L and L_s · u ≥ L, but with (51) or (52) not holding. Then multiplying L_s by u once results in L_s · u ≥ L and L_s · u ≤ L · u, and this value is guaranteed to satisfy (51) or (52), so we set L_k = L_s · u.

On the other hand, suppose that at iteration k = 0 the initial value happens to satisfy L_0 > L. Then this L_0 will satisfy (51) or (52), so it will be accepted, with L_k = L_0 at iteration k = 0. At iteration k = 1 the first trial value is set to L_s = L_{k−1}/d = L_0/d. If L_s ≥ L, then one accepts L_k = L_1 = L_s and the iteration proceeds. At the start of any iteration, if the previous value L_{k−1} ≥ L, then the first trial value is obtained by dividing by d > 1 and is then increased by multiplication with u until the test holds, so the accepted value does not fall below L. In other words, L_k ≥ L must hold for k = {1, 2, ..., K}. ⊓⊔
Proposition 2 The maximum number of times that Lines 4 to 10 of Algorithm 5 are executed during the first K iterations is bounded by

log(Ld/L_0)/log u + (K − 1)(log d/log u + 1). (53)

Proof In the first iteration, we need to execute the procedure from Line 4 to Line 10 of Algorithm 5 at most log(Ld/L_0)/log u times, assuming L_0/d < L. In the case L_0/d ≥ L, this procedure is carried out once. In the k-th iteration with k ≥ 2, since we know that L_{k−1} ≥ L, the above procedure is run at most log d/log u + 1 times if L_{k−1}/d < L occurs. Thus, we obtain (53). ⊓⊔
Numerical Experiments
In this section, we present numerical results to compare our proposed algorithms with several other methods that have an optimal convergence rate. The algorithms are as follows, and are summarized in Table 1.
OQA. The Optimal Quadratic Averaging algorithm (OQA) [7], which builds upon the work in [3], maintains a quadratic lower bound on the objective function value at every iteration. The quadratic lower bound is called 'optimal' because it is the 'best' lower bound that can be obtained as a convex combination of the two previous quadratic lower bounds. In OQA, x_{k+1} is set to be the minimizer of f(x) on the line joining the point x_k^+ and the minimizer of the current quadratic lower bound. In [7] the authors suggest a variant of OQA, which we call OQA+ here, that computes x_k^+ via a line search that does not use the true Lipschitz constant L. We compare both OQA and OQA+ in our experiments.
NEST. We use NEST to denote the algorithm described in Chapter 2 of [17]. Further, NEST+ is a variant of NEST in which the Lipschitz constant L is adaptively updated via the strategy in [19,21].
GD. We also implement a Gradient Descent (GD) method which uses a fixed stepsize of 1/L. Note that this is similar to Algorithm 1, although GD does not maintain any kind of lower bound. As the only non-optimal algorithm, Gradient Descent provides a benchmark that will enable us to observe any performance advantages of the optimal methods.

CNEST. Algorithm (4.9) described in [19] with a fixed Lipschitz constant; CNEST+ is Algorithm (4.9) described in [19] with an adaptive Lipschitz constant.

For the experiments we consider ERM with a squared hinge loss, and ERM with a logistic loss (also called logistic regression). In each case y_i ∈ {−1, +1} is the label and a_i ∈ R^n represents the training data for i = {1, 2, ..., m}. All the datasets in our experiments come from the LIBSVM database [4]. Also note that for all experiments we have µ = λ.
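The two losses, in Python; the exact regularized forms (mean loss plus (λ/2)‖x‖², which makes µ = λ) are an assumption consistent with the statement µ = λ above.

```python
import numpy as np

def logistic_loss(x, A, y, lam):
    return np.mean(np.logaddexp(0.0, -y * (A @ x))) + 0.5 * lam * (x @ x)

def squared_hinge_loss(x, A, y, lam):
    margin = np.maximum(0.0, 1.0 - y * (A @ x))
    return np.mean(margin ** 2) + 0.5 * lam * (x @ x)

def squared_hinge_grad(x, A, y, lam):
    margin = np.maximum(0.0, 1.0 - y * (A @ x))
    return -2.0 * (A.T @ (y * margin)) / len(y) + lam * x
```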
Comparison on Decreasing Objective Values
In the first experiment we compare the OQA, ASUESA and NEST algorithms (both the standard and adaptive Lipschitz variants) and investigate how the objective function values behave on several test problems. The test problems considered in this experiment are the a1a dataset with a squared hinge loss and λ = 10^{-4}, the rcv1 dataset with a logistic loss and λ = 10^{-4}, and the covtype dataset with a squared hinge loss and λ = 10^{-5}. In Figure 2 we plot the gap f(x_k) − φ_k^* against the number of function evaluations, and the gap f(x_k) − φ_k^* against the cpu time. The figure shows the advantages of using an adaptive Lipschitz constant, with the adaptive methods performing better than their original versions in most cases. Figure 2 also shows that ASUESA+ performs very well, being the best algorithm on the first dataset, and the second best algorithm on the other two datasets.
Theory and Practice for OQA and ASUESA
In this numerical experiment we study ASUESA and OQA and investigate how their practical performance compares with that predicted by theory. For the OQA algorithm a line search is needed to determine a necessary algorithmic variable, and to ensure that the theory for OQA holds, the line search should be exact. In this experiment we will use bisection to compute this variable, but we will restrict the number of bisection steps allowed to b = 2, 5, 20. Figure 3 plots the ratio (f(x_k) − φ_k^*)/(f(x_{k−1}) − φ_{k−1}^*) for ASUESA and for three instances of OQA, where each instance uses a different number of bisection steps b = 2, 5, 20. We also plot 1 − √(µ/L) (black dots), which is the amount of decrease in the gap f(x_k) − φ_k^* at each iteration predicted by the theory. (In theory, the ratio should remain below this value.) From the plots in Figure 3 we see that ASUESA performs very well, and as predicted by the theory, with the ratio (f(x_k) − φ_k^*)/(f(x_{k−1}) − φ_{k−1}^*) always strictly below the theoretical bound. On the other hand, the quality of the line search affects OQA significantly. The fewer the number of line search (bisection) iterations, the more likely it is for OQA to violate the theoretical results. Note that this is not necessarily surprising, because the theory for OQA requires the exact minimizer along a line segment to be found, so 2 or 5 iterations of bisection may simply be too few to find it. Notice that when b = 2, the green line shows that OQA behaves erratically, with the ratio being greater than 1 on many iterations, indicating that the gap is growing on those iterations. When we use OQA with b = 5 steps of bisection at each iteration (light blue line), the algorithm performs better, and often, but not always, the ratio is less than 1. Finally, the dark blue line shows the behaviour of OQA when b = 20 steps of bisection are used at each iteration. The dark blue line is always below the theoretical bound of 1 − √(µ/L), indicating good algorithmic performance (often better than predicted by theory). However, the line search needed by OQA comes at an additional computational cost, which can still mean that the overall runtime is longer for OQA than for ASUESA, as we now show.
Here a similar experiment is performed to compare the theoretical and practical performance of SUESA and ASUESA. We have already seen that the theoretical results for ASUESA give a proportional reduction of 1 − √(µ/L) in the gap at every iteration, whereas for SUESA the proportional reduction in the gap is 1 − µ/L. We investigate how these theoretical bounds compare with the practical performance of each of these algorithms. We use the a1a, rcv1 and covtype datasets for this experiment, and for each of the three datasets we form both a logistic loss and a squared hinge loss, to create 6 problem instances. The results are shown in Figure 4.
Fig. 4. The ratio (f(x_k) − φ_k^*)/(f(x_{k−1}) − φ_{k−1}^*) for SUESA and ASUESA, together with 1 − µ/L (green line) and 1 − √(µ/L) (black line).

Figure 4 presents the ratio for SUESA and ASUESA. Also displayed are the theoretical (unaccelerated) rate 1 − µ/L (the green line) and the theoretical (accelerated) rate 1 − √(µ/L) (the black line). One sees that the practical performance of SUESA is very similar to that predicted by the theory, because the blue line matches the green line closely. Another observation is that for the accelerated algorithm (ASUESA), in practice, the reduction in the gap f(x_k) − φ_k^* is often better than the theoretical rate.
Experiments on composite functions
In this section we perform several numerical experiments on problems with a composite objective. Specifically, we consider the elastic net problem, which is problem (1) with

F(x) = ℓ(x) + (λ_2/2)‖x‖² + λ_1‖x‖_1, (56)

where ℓ denotes the ERM loss term. Notice that the first two terms in (56) are smooth, while the ℓ_1-norm term makes (56) nonsmooth overall. We compare our Algorithms 3 and 4 (CUESA and ACUESA) with the one proposed in [19] (NEST). As stated previously, each of these algorithms can be implemented with either a fixed L or an adaptive L, and we will compare each algorithm under both of these two options.
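A compact proximal gradient demo of this elastic net setup with a logistic loss; the assignment of terms (logistic loss plus (λ_2/2)‖x‖² as the smooth part, and λ_1‖x‖_1 as h) is an assumption consistent with (56) as written above.

```python
import numpy as np

def elastic_net_prox_grad(A, y, lam1, lam2, iters=500):
    m, n = A.shape
    L = np.linalg.norm(A, 2) ** 2 / (4 * m) + lam2   # Lipschitz bound, smooth part
    x = np.zeros(n)
    for _ in range(iters):
        s = 1.0 / (1.0 + np.exp(y * (A @ x)))        # sigmoid(-y_i <a_i, x>)
        g = -(A.T @ (y * s)) / m + lam2 * x          # gradient of the smooth part
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam1 / L, 0.0)  # prox of h
    return x
```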
For these experiments we again use the three datasets a1a, rcv1 and covtype. For the a1a data the regularization parameters were set to λ_1 = λ_2 = 10^{-4}; for the rcv1 data, λ_1 = 10^{-4} and λ_2 = 10^{-5}; and for the covtype data, λ_1 = 10^{-4} and λ_2 = 10^{-6}. The results of this experiment are presented in Figure 5, and they show the clear practical advantage of the ACUESA algorithm. The ACUESA algorithm outperforms the CNEST algorithm in all problem instances. Interestingly, on the rcv1 dataset, the CUESA+ algorithm (CUESA with an adaptive Lipschitz constant) performs better than the accelerated ACUESA algorithm, although the ACUESA+ (accelerated plus adaptive Lipschitz constant) algorithm is still the best overall.
In the final numerical experiment presented here, we investigate the theoretical versus practical performance of CUESA and ACUESA. We set up three problems using each of the three datasets already described, and the results are presented in Figure 6. Here we make observations similar to those in Figure 5.
As before, the green line represents the theoretical (unaccelerated) rate 1 − µ/L and the black line represents the theoretical (accelerated) rate 1 − √(µ/L). Note that the practical performance of CUESA closely matches the theoretical rate. We also observe that the practical performance of ACUESA is always at least as good as the theoretical rate, and can often achieve a better decrease in the gap per iteration than 1 − √(µ/L). All the numerical results presented in this section strongly support the practical success of the SUESA, ASUESA, CUESA and ACUESA algorithms.
Conclusion
In this paper we studied efficient algorithms for solving the strongly convex composite problem (1). We proposed four new algorithms (SUESA, ASUESA, CUESA and ACUESA) to solve (1) in both the smooth and composite cases. All of these algorithms maintain a global lower bound on the objective function value, which can be used as an algorithm stopping condition to provide a certificate of optimality. Moreover, we proposed a new underestimate sequence framework that incorporates three sequences, one of which is a global lower bound on the objective function, and this framework was used to establish convergence guarantees for the algorithms proposed here. Our algorithms have a linear rate of convergence, and the two accelerated variants (ASUESA and ACUESA) converge at the optimal linear rate. We also presented a strategy to adaptively select a local Lipschitz constant for the situation when one does not wish to, or cannot, compute the true Lipschitz constant. Numerical experiments show that our algorithms are computationally competitive when compared with other state-of-the-art methods, including Nesterov's accelerated gradient methods and optimal quadratic averaging methods.
A survey of Wolbachia, Spiroplasma and other bacteria in parthenogenetic and non-parthenogenetic phasmid (Phasmatodea) species
The ecological and genetic mechanisms that determine Phasmatodea reproductive biology are poorly understood. The order includes standard sexual species, but also many others that display distinct types of parthenogenesis (tychoparthenogenesis, automixis, apomixis, etc.), or both systems facultatively. In a preliminary survey, we analysed Wolbachia and Spiroplasma infection in 244 individuals from 28 species and 24 genera of stick insects by bacterial 16S rRNA gene amplification. Our main aim was to determine whether some of the bacterial endosymbionts involved in distinct reproductive alterations in other arthropods, including parthenogenesis and male killing, are present in phasmids. We found no Wolbachia infection in any of the phasmid species analysed, but confirmed the presence of Spiroplasma in some sexual, mixed and asexual species. Phylogenetic analysis identified these bacterial strains as belonging to the Ixodetis clade. Other bacteria genera were also detected. The possible role of these bacteria in Phasmatodea biology is discussed.
INTRODUCTION
Parthenogenesis is a very common phenomenon in most animal groups, a reproductive mode that limits genetic recombination. Parthenogenetic females can, in principle, transfer all their genes to their offspring, while a bisexual female transmits only half of them because her chromosome number is reduced during meiosis. This can be interpreted as meaning that the representation of parthenogenetic female genes will double in the next generation, although this is controversial (Suomalainen et al., 1987). However, this could explain why the bacterial endosymbiont Wolbachia manipulates host reproduction in some cases (Werren et al., 2008). Given its almost complete maternal transmission, inducing parthenogenesis clearly helps its own transmission, accompanied by the "correct" host genes, i.e. those ensuring parthenogenesis.
The order Phasmatodea has interesting biological and ecological characteristics. It comprises more than 2,500 species, some of which, the stick insects, bear a striking resemblance to branches or leaves. Parthenogenesis occurs in a number of phasmids (Bedford, 1978) and is particularly well documented in the genera Bacillus and Timema (Trewick et al., 2008; Schwander & Crespi, 2009), as well as occurring in Sipyloidea, Carausius, Clitumnus and many other genera (Suomalainen et al., 1987; Lacadena, 1996) (Table 1). There have been no reports of parthenogenesis induced by bacteria in phasmids, but a survey of the literature suggests this may primarily be because this possibility has not been explored empirically.
On the other hand, Spiroplasma (phylum Firmicutes) is another bacterial endosymbiont that can be considered one of the most important taxa because of its wide host range. It appears mainly in insects, but is occasionally found in other invertebrates (Haselkorn, 2010). This bacterial genus has been detected among the gastric flora of many arthropod species. Its association with intestinal epithelial cells appears to produce no adverse effects, and the genus is therefore considered to be commensal. However, under other circumstances, members of the genus are described as pathogens. The transition to pathogenicity may be linked to the ability to cross the barrier of the insect gut (Haselkorn, 2010) to reach the haemolymph, ovaries, salivary glands or hypodermis (Regassa & Gasparich, 2006). They have been characterised as pathogenic bacteria in shrimps, crabs and bees, in which they cause high levels of mortality (Haselkorn, 2010). In certain hosts infection with these bacteria may impair reproduction: the bacterium is transmitted maternally, inducing the selective elimination of male progeny. This phenotype is called male killing (Regassa & Gasparich, 2006). The most widely studied example is S. poulsonii, which was isolated from neotropical species of Drosophila willistoni (Sturtevant) (Williamson et al., 1999), in which, in the most extreme case, all the male offspring of infected females are eliminated. Other instances of male killing have been detected in strains of Spiroplasma that infect D. melanogaster Meigen (Montenegro et al., 2005), Danaus chrysippus (L.) (Lepidoptera: Danaidae) (Jiggins et al., 2000) and Adalia bipunctata (L.) (Coleoptera: Coccinelidae) (Hurst et al., 1999a), among others. In cases without male killing, males and females can both be infected with no obvious change in the phenotype (Haselkorn, 2010). In natural populations of Drosophila, an 85% infection rate of non-male-killing Spiroplasma has been noted (Watts et al., 2009).
In phasmids, Spiroplasma has been described in two Argentinian populations of Agathemera spp., as well as in their parasitic mites of the genus Leptus (Leptidae) (DiBlasi et al., 2011), although their phenotypic effects have not been associated with male killing. In addition, this endosymbiont has been identified in the strictly parthenogenetic Ramulus artemis (Westwood) and in the sexual Pharnacia ponderosa Stål (Shelomi et al., 2013), with unknown phenotypic effects.
Wolbachia and Spiroplasma are inherited endosymbionts that can have various influences on their hosts, ranging from mutualistic to parasitic effects, potentially affecting their reproduction and evolution. Both bacteria are transmitted maternally from infected females to their offspring and are not incompatible with each other (Duron et al., 2008; Martinez-Rodriguez et al., 2013). Driving host reproduction leads to an increase in the number of infected females, even at the expense of males, improving the fitness of the bacterium and its transmission between individuals within the population (Haselkorn, 2010).
Stick insect species exhibit a wide range of reproductive mechanisms, some of which are characterised by the absence of males and are therefore compatible with the involvement of these bacteria or of others with similar effects. We explored this possibility in a broad survey of species of phasmids using a polymerase chain reaction (PCR) / DNA sequencing approach.
Obtaining DNA and PCR characterisation
We obtained data from 244 individuals representing 28 species and 24 genera of the order Phasmatodea. Insects were collected in 2012 and 2013 from distinct captive populations of different geographic origin, all of them naturalised in Spain (Table 1). These individuals were kindly donated for this study, as recognised in Table 1, and preserved in absolute ethanol at -20°C until analysed.
Genomic DNA was obtained in different ways, depending on the size of the organism: (1) large individuals, from an abdominal fragment containing the gonads; (2) medium-sized adults, from the abdomen; and (3) small adults, nymphs and juveniles of reduced size, from the whole body (except for the head, in order to exclude eye pigments, which reduce the quality of the DNA), as detailed in Zabal-Aguirre et al. (2010) and Martinez-Rodriguez et al. (2013). DNA samples were standardised at a final concentration of 50 ng/µl using a NanoDrop 1000 Spectrophotometer (Thermo Scientific, Wilmington, USA).
Preliminary analyses were performed to check the quality of the DNA samples. This enabled us to confirm that these were not fragmented, allowing further microbial detection (see below): (i) for each sample, a 2% agarose electrophoretic gel with 2 µl of sample was run at 70 V, and (ii) a PCR of the cytochrome oxidase I (COI) mitochondrial gene was performed, with 0.6 mM of each primer (numbered as 1 in Table 2) used in PCR reactions in a final volume of 50 µl (1× buffer, 2.0 mM MgCl2, 0.2 mM dNTPs, 1.25 U Taq polymerase and 2.0 µl (100 ng) of DNA). Reagents were supplied by BIOTAQ (Bioline Reagents Ltd, London, UK). A Techne TC-512 thermocycler was programmed to give an initial denaturation step at 94°C for 10 min, followed by 36 cycles of denaturation at 94°C for 30 s, an annealing step at 54°C for 45 s and an elongation step at 72°C for 90 s, and a single final elongation cycle at 70°C for 10 min.
Wolbachia infection in these phasmids was checked by PCR detection of the 16S rRNA sequence from this bacterium (Table 2, primers nº 2). When amplification was not detected by electrophoresis, or the negative controls produced a band, the resulting products were re-amplified with the same primers to test for possible false negatives due to low-level infection, or contamination, respectively. Reactions were performed in a final volume of 50 µl, containing 1× buffer, 2.0 mM MgCl2, 0.2 mM dNTPs and 0.6 mM of each primer.

Table 2. PCR primers used to amplify the cytochrome oxidase I (COI) mitochondrial gene (1) in order to check the quality of the DNAs examined, or to detect possible bacterial endosymbionts in the phasmid species studied: 16S rDNA (2) and wsp (primers 3) from Wolbachia, 16S rDNA from Spiroplasma (4) and from eubacteria (5). (Table columns: Primers; Sequence (5'-3').)

To verify our results, a further Wolbachia detection system was developed: PCR of the wsp gene of Wolbachia (Table 2, primers nº 3) was performed in a final volume of 40 µl containing 1× buffer, 1.5 mM MgCl2, 0.2 mM dNTPs, 1 µM of each primer, 0.5 U of BIOTAQ polymerase (Bioline) and 2 µl of standardised DNA template solution from each individual (100 ng). Techne TC-512 thermocycler conditions were here 94°C for 2 min, followed by 37 cycles of 94°C for 30 s, 58°C for 1 min and 72°C for 90 s, followed by a final elongation cycle at 72°C for 10 min.
Spiroplasma infection was tested for the presence of the 16S rRNA gene by PCR using specific primers, as shown in Table 2 (primers nº 4). The possible presence of other bacteria in the individual insects studied here was also checked by PCR of their 16S rRNA sequences using universal primers for eubacteria (Table 2, primers nº 5). For these amplifications, reactions were conducted in a final volume of 50 µl containing the appropriate 1× buffer, 2.0 mM MgCl2, 0.2 mM dNTPs, 0.6 mM of each primer, 1.25 U of BIOTAQ polymerase (Bioline) and 2.0 µl of the standardised DNA template solution (100 ng). Techne TC-512 thermocycler conditions were initially 95°C for 2 min, followed by 35 cycles of 94°C for 30 s, 54°C for 1 min and 72°C for 90 s, and a final elongation cycle of 72°C for 10 min (see Martinez-Rodriguez et al., 2013 for details).
The amplification was checked electrophoretically in all cases: 10 µl of each PCR product were run at 70 V in a 2% agarose gel containing 0.5 mg/ml of ethidium bromide with a track reserved for a 1-kb DNA size marker (Biotools, Madrid, Spain), before visualising using a UV transilluminator (Uvitec UVIdoc HD2, Cambridge, UK).
All PCR reactions included the appropriate controls. As positive controls for Wolbachia, Spiroplasma and eubacteria, DNA from previously characterised infected individuals of Chorthippus parallelus (Zetterstedt) (Orthoptera: Acrididae) was used (Martinez-Rodriguez et al., 2013). For the negative controls, no DNA was included in the PCR reaction mix. All amplifications were made at least twice.
PCR product purification, sequencing and characterisation
PCR-amplified sequences from the 16S rRNA gene of Spiroplasma and eubacteria were purified with the ExoSAP-IT kit supplied by GE Healthcare Bio-Sciences Corp. (Piscataway, NJ, USA). Resulting products were automatically sequenced by STABVIDA (http://stabvida.com/, Caparica, Portugal). The genus and taxon were assigned (when possible) with BLAST (Basic Local Alignment Search Tool) (http://blast.ncbi.nlm.nih.gov/) using the consensus sequences in the databases of the National Center for Biotechnology Information (NCBI). The new sequences obtained here have been registered in GenBank under accession numbers KJ685895 to KJ685899.
Sequence analyses, alignment and an evolutionary model
Phylogenetic analyses were based on the available Spiroplasma sp. 16S rRNA nucleotide sequences. A preliminary manual analysis of the chromatograms was performed with DNAstar Lasergene Core Suite (http://www.dnastar.com) software. ClustalW software (Larkin et al., 2007) was used to align the sequences obtained and those registered from other arthropods. In all cases we found sufficient homology to enable further phylogenetic inference.
The on-line ALTER tool (Glez-Pena et al., 2010) was used to convert the data formats when they differed. Text files were manually edited with notepad++ software (http://notepad-plus-plus.org/). jModeltest software (Posada, 2008) was used to select the appropriate nucleotide substitution model with the Akaike information criterion (AIC) (Akaike, 1973, 1974). The model selected was the GTR + G + I variant of the General Time Reversible (GTR) model described by Tavaré (1986), which considers distinct probabilities for each base substitution on the assumption that nucleotide base frequencies may differ.
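For readers unfamiliar with AIC-based model choice, the criterion is AIC = 2k − 2 ln L̂, where k is the number of free parameters and L̂ the maximized likelihood; a minimal sketch with purely hypothetical fit values follows (the actual likelihoods come from jModeltest runs, not from this snippet).

```python
def aic(log_likelihood, n_params):
    # Akaike information criterion: AIC = 2k - 2 ln(L)
    return 2 * n_params - 2 * log_likelihood

# Hypothetical (model, maximized log-likelihood, free parameters) triples
fits = [("JC69", -5210.4, 0), ("HKY85", -5102.7, 4), ("GTR+G+I", -5055.1, 10)]
best_model = min(fits, key=lambda f: aic(f[1], f[2]))[0]
```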
Escherichia coli was used as the outgroup to root the tree. FigTree software (http://tree.bio.ed.ac.uk/software/figtree/) was employed to visualise and edit the phylogenetic trees.
None of the 244 phasmid individuals analysed showed Wolbachia infection (see Table 1). To be certain of this negative result, positive controls (as described in the Material and Methods section) and primers for two Wolbachia loci were used (Table 2). This enabled us to rule out false negatives, and the possibility of sequence variation not recognised by a single pair of primers.
Nineteen individuals belonging to the following species: Neohirasea maerens (Brunner von Wattenwyl), Ramulus artemis, Leptynia montana Scali, Entoria nuda Brunner von Wattenwyl, Sungaya inexpectata (Zompro), Diapherodes gigantea (Gmelin) and Sipyloidea sipylus (Westwood), showed PCR amplification using the primers for the Spiroplasma sp. 16S rRNA gene (Fig. 1, Table 1). PCR products were automatically sequenced and sequences BLAST-aligned up to the genus level. Sequences showing at least 97% identity were considered operational taxonomic units (OTUs). Eleven of the 19 analysed individuals of the strictly parthenogenetic Ramulus artemis and three of the 12 individuals of the occasionally parthenogenetic Entoria nuda proved to be infected by Spiroplasma. The other species each comprised only one infected individual (Table 1).
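The 97% identity threshold amounts to computing pairwise identity over aligned positions; a minimal sketch (treating gap columns as non-comparable, which is one of several reasonable conventions) is:

```python
def percent_identity(seq1, seq2):
    # identity over aligned, non-gap positions of two equal-length sequences
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != '-' and b != '-']
    return 100.0 * sum(a == b for a, b in pairs) / len(pairs)

same_otu = percent_identity("ACGT-ACGTT", "ACGTTACGTA") >= 97.0
```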
Phylogenetic reconstruction with these sequences based on ML and BI linked the strains detected with those previously described in phasmids (Gasparich et al., 2004; DiBlasi et al., 2011; Shelomi et al., 2013) (see * Spiroplasma sp. in Figs 2 and 3). It is of interest that our strains assign to a new and different Spiroplasma clade (** Spiroplasma sp. in Figs 2 and 3). This new clade (** Spiroplasma sp.) is further divided into two subclades (Fig. 3). One of these includes four of the phasmid species studied here: R. artemis, N. maerens, E. nuda and S. sipylus; the other subclade comprises various arthropods, including our L. montana and Agathemera.
The survey with universal 16S rDNA PCR primers to identify other possible eubacterial endosymbionts infecting our phasmid species yielded 19 positive results. These PCR products were sequenced and BLAST-aligned. Again, using the minimum of 97% identity as the criterion for being considered an OTU, we were able to assign these sequences to different bacterial taxa (Fig. 4; Table 1).
DISCUSSION
The reproductive alterations induced by Wolbachia have been found in many organisms (Werren, 1997; Werren et al., 2008; Brucker & Bordenstein, 2012). However, to our knowledge, the possibility that this bacterial endosymbiont infects phasmids has not previously been explored, even though these arthropods are a well-known example of occasional parthenogenesis (thelytoky) (More, 1996), a phenomenon potentially induced by this bacterium (Simon et al., 2003).
In an attempt to evaluate the role played by Wolbachia in the reproduction of these organisms, we studied the incidence of this bacterial endosymbiont in phasmid species displaying different kinds of reproductive mode, from standard bisexual reproduction to automictic or apomictic parthenogenesis and tychoparthenogenesis. However, in none of the species and individuals analysed was the presence of Wolbachia detected by the approaches used here. This makes it very unlikely, in our opinion, that this bacterium is generally involved in the reproductive systems of phasmids, although we cannot discount the possibility of it being involved in particular cases. The absence of Wolbachia infection from all these organisms is striking, given the high proportion of insect and arthropod species infected (Zug & Hammerstein, 2012). This by itself may be of evolutionary significance in this group of organisms.
On the other hand, we found Spiroplasma sp. in 7.7% of the individuals and 25% of the species analysed (Table 1). This maternally transmitted bacterial endosymbiont also induces reproductive alterations in several organisms. The preferential killing of male descendants is its most common effect, with a variable incidence (from 5 to 90% of infected females) depending on the taxon under consideration and other ecological and probably genetic aspects (Hurst & Jiggins, 2000; Hutchence et al., 2012; Ventura et al., 2012; Martin et al., 2013; Sanada-Morimura et al., 2013; Harumoto et al., 2014; Xie et al., 2014).
However, we have detected this bacterium in phasmid species apparently characterised by obligate sexual reproduction, like Leptynia montana and Diapherodes gigantea; in species with obligatory parthenogenesis, such as Ramulus artemis and Sipyloidea sipylus; and in Neohirasea maerens and Entoria nuda, which show occasional parthenogenesis. These preliminary results are promising and suggest the value of further research involving more individuals and populations, progeny analyses, experimental crosses between infected and uninfected individuals, and perhaps studies with previously infected individuals from parthenogenetic lineages treated with antibiotics.
A previous morphological study found Spiroplasma in the gut and certain muscle tissues of another stick insect, Agathemera spp. (Phasmatodea), but not in its eggs. This seems to rule out the possibility that this bacterium can induce the male-killing phenotype in these phasmids (DiBlasi et al., 2011). In our study, Spiroplasma was isolated from the abdomen, where the gonads (and the eggs in females) are located. This leads us to assume that the bacteria follow their standard maternal mode of transmission, the eggs presumably also being infected. We found the infection in both males and females, which may rule out the possibility of male killing in these cases. Even so, we are reminded of certain cases in which this phenotype only affects a limited proportion of the descendants, as observed in natural Japanese populations of Gastrolina depressa Baly (Coleoptera: Chrysomelidae): male killing is absent from northern and southern populations, but is present in 50 to 80% of the females from the centre of the islands (Chang et al., 1991; Hurst & Jiggins, 2000).
Taxonomically, Spiroplasma is classified within the order Entomoplasmatales (Regassa & Gasparich, 2006), in the Mollicutes lineage (Gasparich et al., 2004). Recent phylogenetic analyses based on the 16S rRNA gene classified this genus as non-monophyletic (Regassa & Gasparich, 2006). The phylogenetic characterisation of the 16S rRNA sequences of the Spiroplasma detected here ascribes the strains found in R. artemis, N. maerens, E. nuda and S. sipylus to a new divergent clade, with both the ML and BI approaches (Figs 2 and 3, respectively). They appear to be associated with a 16S rRNA sequence previously described in a mite (GenBank: M24477), and classified in serogroup VI of Spiroplasma (Weisburg et al., 1989; Tully et al., 1995). This serogroup belongs to the Ixodetis clade, which includes the single lineage S. ixodetis and is at a considerable evolutionary distance from the other characterised Spiroplasma spp. (Regassa & Gasparich, 2006). Similar divergence is also displayed by the other known cases of this microorganism infecting a phasmid (DiBlasi et al., 2011; Shelomi et al., 2013). This prevents a simple interpretation of the possible biological effects of Spiroplasma in these hosts. More data from other organisms infected by these strains will shed light on this specific clade and the phenotype induced in its hosts.
Spiroplasma strains similar to S. ixodetis have been associated with abnormal sex ratios in the butterfly D. chrysippus and the ladybird beetle A. bipunctata (Regassa & Gasparich, 2006). However, DiBlasi et al. (2011) did not find male killing induced by Spiroplasma in Agathemera spp. (Phasmatodea). In our case, we have no data that would justify the inference of a possible phenotypic effect of this bacterium in its hosts. As indicated above, further complex experiments (F1 and F2 crosses with infected and uninfected individuals, the use of antibiotics, etc.) are needed to clarify this matter.
Given the absence of correlation between our results with Wolbachia and Spiroplasma and the reproductive mode of the stick insects analysed, we complemented our study of the microbiota of these phasmids with a broad PCR-based survey of other eubacteria, in an attempt to detect other endosymbionts that might influence their reproductive biology. Insects are usually associated with microorganisms that contribute to their physiology (Mohr & Tebbe, 2006; Belda et al., 2011). However, our sequencing and BLAST comparison results indicate a relatively scarce microbial presence, with ~8.0% (19 out of 244) representativeness (Fig. 4), Proteobacteria and Firmicutes being the most commonly associated phyla (Fig. 5). This low representativeness may have several non-mutually exclusive explanations. Of these, we acknowledge that the captivity of the individuals studied here may have affected the bacterial diversity. In fact, this may have a significant influence in this kind of study, as reported by Lo et al. (2006). Their non-natural diet was probably a major contributor to this, although our organisms did come from four distinct sources. Neither can we rule out the possibility that certain bacteria are insufficiently represented, which would make them difficult to detect by these methods. In any case, the bacterial taxa detected seem to be related to the nutritional function of their hosts, being microorganisms commonly associated with insects.
In summary, our results fail to reveal any definite association between bacterial infections and the reproductive modes of phasmids, and in particular any clear link with the most common microorganisms involved here, Wolbachia and Spiroplasma. For the latter genus, however, further studies should ascertain its possible phenotypic or physiological effects on infected individuals and species.
Fig. 2. Spiroplasma spp. phylogeny based on the 16S rDNA gene using the ML approach, indicating the infected host species (•). Roman numerals refer to the serological classification system for Spiroplasma. * Spiroplasma sp. belonging to previously described clades. ** Spiroplasma sp.: new clade described in this study. Phasmid species from this study infected by Spiroplasma sp. are shaded.
Fig. 3. Spiroplasma spp. phylogeny based on the 16S rDNA gene using the BI approach, indicating the infected host species (•). Roman numerals refer to the serological classification system for Spiroplasma. * Spiroplasma sp. belonging to previously described clades. ** Spiroplasma sp.: new clade described in this study. Phasmid species from this study infected by Spiroplasma sp. are shaded.
Modeling magnetic neutron stars: a short overview
Neutron stars are the endpoint of the life of intermediate mass stars and possess in their cores matter in the most extreme conditions in the universe. Besides their extremes of temperature (found in proto-neutron stars) and density, typical neutron star magnetic fields can easily reach trillions of times that of the Sun. Among these stars, about 10% are denominated magnetars, which possess even stronger surface magnetic fields of up to 10^15-10^16 G. In this conference proceeding, we present a short review of the history and current literature regarding the modeling of magnetic neutron stars. Our goal is to present the results regarding the introduction of magnetic fields in the equation of state of matter using Relativistic Mean Field models (RMF models) and in the solution of Einstein's equations coupled to Maxwell's equations, in order to generate a consistent calculation of the structure of magnetic stars. We discuss how equation of state modeling affects mass, radius, deformation, composition and magnetic field distribution in stars, and also what the open questions are in this field of research.
I. INTRODUCTION
In 1979, when gamma-ray burst detection was already ongoing, a peculiar explosion occurred. It was characterized by a short burst of high energies (hard γ-rays) followed by a soft γ-ray emission that decayed with a sinusoidal variation. The explosion was so extreme that for a fraction of a second it saturated all γ-ray detectors in orbit. In the following decade, similar γ-ray explosions were detected and the sources identified as small remnants of supernovae [1], making it possible to conclude that a new class of objects had been found, named soft gamma repeaters (SGRs), though the mechanism responsible for generating such explosions remained unknown.
With the advances in X-ray detection, new objects with peculiar radiation emission were also observed and, in particular, another class of objects denominated Anomalous X-ray Pulsars (AXPs) was identified. AXPs present low rotation rates (P ∼ 6-12 s) and high surface magnetic fields (B ∼ 10^13-10^15 G). They are called anomalous because their X-ray radiation could not be explained by their rotation rates, as it is for typical pulsars.
The further monitoring of SGRs and AXPs made it possible to establish that both these objects present low rotation rates and high magnetic fields [2], though their radiation emission behaviors are distinct. SGRs present gamma-ray explosions with repetition periods that can vary from seconds to years. During the inactive phase, a steady X-ray emission is also identified. Furthermore, SGRs also produce uncommon and extremely energetic explosions denominated giant flares, which can be up to ten times more luminous than a supernova event [3]. AXPs are characterized by their steady X-ray emission, but on the scale of hard X-rays. Besides the X-ray emission, they also present glitch phenomena, in which an increase in the spinning rate, followed by a return to the original spin rate, is observed [4]. AXPs differ from SGRs due to their lack of activities such as periodic emissions or giant flare explosions.
The current interpretation is that these objects are two different life stages of objects denominated magnetars [5,6], which are magnetically powered neutron stars. The mechanism responsible for their radiation emission comes from the strong magnetic field decay over time [7,8], with SGRs associated with young magnetars and AXPs with older ones. SGRs present high energy explosions due to instabilities from the magnetic tension between core and crust, or even the large scale rearrangement of magnetic fields (giant flares), while AXPs present steady X-ray emission due to the collision of high energy particles with their surfaces and magnetic field lines (as also occurs in SGRs).
The conservation of magnetic flux during the supernova collapse is not able to explain the intensities of magnetic fields found in magnetars, as the radius of a 1.4 M_⊙ star would have to be smaller than its Schwarzschild radius in order to generate a magnetic field of 10^15 G [9]. In 1992, Robert Duncan & Christopher Thompson proposed a mechanism, denominated the magnetohydrodynamic dynamo mechanism (MDM), based on the amplification of magnetic fields through the combination of rotation and convection in hot proto-neutron stars [1,10], which is currently the most accepted explanation for magnetars. According to this theory, after the proto-neutron star phase, convection stops and stars are left with a magnified magnetic field that can reach the values observed in magnetars, marking the end of the dynamo process.
Even though most high energy phenomena involving magnetars have to do with the modeling of their crust dynamics, understanding the effects of strong magnetic fields in the inner core of these objects is of crucial importance for the big picture of matter's behavior in neutron stars. To fulfill this goal, a self-consistent formalism that describes both microscopic and macroscopic features of these objects is needed. Modeling neutron stars in the presence of strong magnetic fields involves both modeling the equation of state (EoS) and solving the coupled system of Einstein-Maxwell equations. In what follows, we present a brief summary of the current status of this field, focusing on these two topics (EoS and structure). For a more detailed discussion of magnetic field effects on the cooling of neutron stars, see Ref. [15] and references therein.
A. Landau Quantization
Effects of strong magnetic fields in a low-density Fermi gas were first studied by Canuto [16][17][18][19], followed by calculations using RMF models with magnetic field effects [20,21]. The introduction of magnetic effects in RMF models is achieved by adding the electromagnetic interaction term to the Lagrangian density of the models. This is done under the consideration of a constant dipolar (usually in the z direction) and external magnetic field that generates a quantization of the energy levels of charged particles: E_ν = √(m² + k_z² + 2|q|Bν), where m and k_z are the mass and the Fermi momentum in the z direction of the charged particles, respectively, and ν ≡ l + 1/2 − s/2. The quantum number ν is associated with the so-called Landau quantization of the energy levels of charged fermions and, according to the expression above, depends on the charge q, orbital quantum number l and spin s of the particles. One can easily check that, except for the ground state (l = 0, s = +1), all other Landau energy levels are doubly degenerate. Furthermore, under conditions of zero temperature (relevant for neutron stars), energy levels obey Fermi-Dirac statistics, having a Fermi energy E_F and a maximum Landau level ν_max < (E_F² − m²)/(2|q|B), beyond which k_z² would become negative. Landau quantization introduces a sum over the levels ν in EoS calculations, running only through the lower Landau levels for strong magnetic fields. In the weak field regime, the sum reaches the continuous limit, making the integrals equivalent to the B = 0 case [15,22].
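A small numerical sketch of these relations (natural units ħ = c = 1; the spin convention ν = l + 1/2 − s/2 and the standard zero-temperature density sum are assumed, and actual RMF codes use effective masses and chemical potentials in place of the bare quantities used here):

```python
import numpy as np

def nu_max(E_F, m, q, B):
    # highest Landau level for which k_z stays real
    return int((E_F**2 - m**2) / (2.0 * abs(q) * B))

def k_z(E_F, m, q, B, nu):
    # Fermi momentum along the field for Landau level nu
    return np.sqrt(E_F**2 - m**2 - 2.0 * abs(q) * B * nu)

def number_density(E_F, m, q, B):
    # zero-T density: n = (|q| B / 2 pi^2) * sum_nu g_nu k_z(nu),
    # with degeneracy g_nu = 1 for the ground level and 2 otherwise
    total = sum((1.0 if nu == 0 else 2.0) * k_z(E_F, m, q, B, nu)
                for nu in range(nu_max(E_F, m, q, B) + 1))
    return abs(q) * B * total / (2.0 * np.pi**2)
```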
When magnetic fields are introduced in RMF models, the effect is a softening of the EoS due to the increase of the charged particle populations and, hence, a decrease of the isospin asymmetry [12,20,21]. In addition, magnetic field effects give rise to an anisotropy in the components of the matter energy-momentum tensor, generating two pressure components [17,18,[22][23][24][25]: P_∥ = −Ω and P_⊥ = −Ω − BM, where P_∥ and P_⊥ are the pressure components in the directions parallel and perpendicular to the external magnetic field, Ω is the grand potential and M is the magnetization of matter.
Effects of the anomalous magnetic moment were also investigated through the corresponding coupling of particles (charged and uncharged) to the electromagnetic field tensor [19,[21][22][23]. For the case of hadronic matter, it has been concluded that strong magnetic fields make the EoS stiffer due to polarization effects. However, a significant impact is only identified for fields of the order of B ∼ 10^18 G [26], whereas for quark matter much smaller effects have been estimated so far [27]. The modeling of hybrid stars under strong magnetic fields is not an easy task, as the matching of the two phases has to take the magnetization into account.
In the past, magnetic effects were introduced in many RMF models for describing neutron stars with different population content [20,21,[23][24][25][26][27][28][29][30][31][32][33][34][35][36][37], but these works still solved the Tolman-Oppenheimer-Volkoff (TOV) equations for spherically symmetric stars. In such cases, either the isotropic component of the pressure or the perpendicular component was used as the only pressure contribution for calculating the structure of stars, under the assumption that anisotropy effects are small. However, as we discuss in the next section, deformation effects are extremely important for the description of neutron stars that present strong magnetic fields.
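For reference, the spherically symmetric TOV problem mentioned here reduces to two coupled ODEs; a minimal sketch follows (G = c = 1, simple Euler integration, and a relativistic polytrope standing in for the magnetic-field-dependent EoS, so all numbers are illustrative only):

```python
import numpy as np

def solve_tov(P_c, eos_eps, dr=1e-3):
    # dP/dr = -(eps + P)(m + 4 pi r^3 P) / (r (r - 2m)),  dm/dr = 4 pi r^2 eps
    r, P, m = dr, P_c, 0.0
    while P > 1e-12 * P_c:
        eps = eos_eps(P)
        dP = -(eps + P) * (m + 4 * np.pi * r**3 * P) / (r * (r - 2 * m))
        m += 4 * np.pi * r**2 * eps * dr
        P += dP * dr
        r += dr
    return r, m   # stellar radius and gravitational mass (dimensionless units)

# Example EoS: polytrope P = K rho^Gamma, with eps = rho + P/(Gamma - 1)
K, Gamma = 100.0, 2.0
eps_of_P = lambda P: (P / K) ** (1.0 / Gamma) + P / (Gamma - 1.0)
R, M = solve_tov(P_c=1.0e-3, eos_eps=eps_of_P)
```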
B. Particle Population
As densities increase in the interior of neutron stars, it is expected that new degrees of freedom, such as hyperons, delta resonances, or even quark matter, appear in the core of these objects; they are, therefore, also affected by strong magnetic fields. As already discussed in the previous section, magnetic fields affect the energy levels of charged particles and, when the AMM is considered, even of uncharged ones. The effect of magnetic fields on the particle population of neutron stars is to push the density threshold for the appearance of exotic particles to higher values [36,38]. However, such effects are shown to be significant only for very strong magnetic fields, of the order of B ∼ 10^18 G.
A more relevant source of exotic-particle suppression comes from the substantial reduction of the central density of strongly magnetized stars. As magnetic fields oppose gravity, similarly to rotation effects, the central density of the star drops and, depending on the stiffness of the EoS, exotic particles such as hyperons might completely vanish [39]. Similar effects can be identified in the case of hybrid stars [40,41].
Because of the substantial change in the particle population due to magnetic fields, a re-population (the appearance of exotic degrees of freedom) is predicted as the magnetic field of a neutron star decays over time. Such a transition could become detectable via gravitational wave emission in the future, with new generations of gravitational wave detectors.
III. MAGNETIC FIELDS IN THE STELLAR STRUCTURE A. Einstein-Maxwell Equations
Introducing strong magnetic field effects in the stellar structure of compact stars involves solving the coupled Einstein-Maxwell equations with a metric that allows stars to be deformed. To solve such a complicated system of equations numerically, a formalism based on the 3+1 decomposition of space-time was developed by Bonazzola et al. [42] and implemented in the so-called LORENE (Langage Objet pour la RElativite NumeriquE) library.
Due to the non-negligible pressure anisotropy in the presence of strong magnetic fields, an axisymmetric formalism is necessary to describe the system, meaning that the metric potentials depend both on the radius and on the angle with respect to the symmetry axis (r, θ). In the formalism described above, a poloidal magnetic field is assumed, generated by a current function, which is one of the library's inputs. It is important to stress that, although the current function (which ultimately generates the magnetic field distribution) depends on the EoS in each layer of the star, this formalism does not consider hydrodynamical effects inside stars. This means that the current function is a free parameter that, together with the EoS, determines the dipole moment of the star, as well as its macroscopic properties.
Moreover, the energy-momentum tensor is decomposed into two parts: a perfect fluid (PF) and a purely magnetic field (PM) contribution. When the PM contribution exceeds the PF contribution along the symmetry axis of the star, the code stops converging, imposing a limit of B_c ∼ 10^18 G on the central magnetic field of stars, which does not depend strongly on the EoS model [42,43].
This formalism was initially applied to describe magnetic neutron stars without taking magnetic effects into account in the EoS [13,43]. Only recently have self-consistent calculations, including magnetic fields both in the EoS and in the structure of stars, been implemented to describe quark stars, hybrid stars, and hadronic stars [39,40,44,45].
B. Macroscopic Properties
When a self-consistent approach was used for the first time to describe magnetic neutron stars, it was found that introducing magnetic fields in the EoS does not significantly impact the macroscopic properties of neutron stars, such as stellar masses or radii [39,40,44]. However, a small increase in the mass of stars is identified when anomalous magnetic moment effects are introduced in the EoS [40]. This comes from the fact that the field strength at which magnetic fields become relevant for the EoS of the core of neutron stars is of the same order as the convergence limit of the numerical calculations.
Still, strong magnetic fields directly impact the masses (baryonic and gravitational) and the deformation of neutron stars. These effects come mainly from the pure magnetic field contribution to the energy-momentum tensor, making the stars more massive due to the extra electromagnetic energy available to prevent the star's collapse. For a fixed stellar baryon mass, the mass increase in magnetized stars is of the order of a few percent, and depends on the EoS and the current function [40].
Another important aspect to be taken into account is the deformation of stars into an oblate shape, which is directly associated with the assumed poloidal magnetic field distribution [40,42,43]. The amount of deformation a star acquires depends on the choice of current function, but also on the EoS. Softer EoS's allow for higher central densities and, consequently, higher central magnetic fields. However, although stars modeled with a softer EoS present higher internal magnetic fields, they are also more compact; the larger radii produced by stiffer EoS's yield stars that are more easily deformed [39].
It was estimated that neglecting the deformation of magnetic stars, by solving the TOV equations when determining their structure, leads to a 12% overestimation of the maximum mass and a 20% underestimation of the equatorial radius of a 1.4 M⊙ star [39]. These results were obtained at the maximum magnetic field limit of the LORENE code. More detailed calculations of the limit up to which a spherically symmetric geometry remains valid for modeling magnetic stars are still needed.
C. Magnetic Field Distribution
As already discussed, when a poloidal magnetic field configuration is assumed, the magnetic field distribution as a function of density (or chemical potential) depends on the EoS and on the choice of current function. In the past, when such calculations were not available, ad hoc formulas for the magnetic field profile inside neutron stars were used to introduce magnetic fields into the equations of state of RMF models. Such formulas introduced an exponential dependence of the field on the baryon density, allowing for stars with surface and central magnetic fields of 10^15 G and 10^19 G, respectively.
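A widely used parametrization of this type has the form below; the coefficients vary between works, so the expression should be read as representative rather than as the formula of a specific cited reference:

$B(n_b) = B_{\rm surf} + B_c\,[1 - \exp(-\beta\,(n_b/n_0)^{\gamma})]$,

where $n_0$ is the nuclear saturation density and β and γ are free parameters controlling how fast the field grows from the surface value $B_{\rm surf}$ towards the central value $B_c$.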
Although those magnetic profiles were, at the time, the only available way to go beyond constant magnetic fields in EoS calculations, they do not fulfill Maxwell's equations [46] and, hence, should not be used in such calculations. In addition, self-consistent calculations show that the magnetic field increases much more slowly than exponentially, differing by no more than about one order of magnitude from the surface field. In particular, when calculated for different models and matter compositions, the magnetic field profile inside neutron stars grows quadratically with the baryon chemical potential in the polar direction [45]. However, the determination of the magnetic field profile inside neutron stars still needs refinement, as it also depends on the assumed poloidal distribution. Efforts towards the inclusion of combined toroidal [47][48][49] and poloidal contributions in neutron stars can certainly improve these estimates, even though this is a very challenging task from both the numerical and the analytical points of view.
As a final note, we mention that the range of research topics related to magnetic fields in neutron stars is vast. Although we have used this short proceeding to review only the modeling of the equation of state and the structure of stars, many other topics, such as the impact of magnetic fields on the crust, thermal evolution, and mergers, also present interesting open questions [15].
Etomidate versus Propofol for Electroconvulsive Therapy in Patients with Major Depressive Disorders in Terms of Clinical Responses to Treatment: A Retrospective Analysis
General anesthetic agents may influence the clinical efficacy of electroconvulsive therapy (ECT), as they may affect seizure quality and duration. Hence, a retrospective study was conducted to compare the clinical effects and seizure variables of etomidate and propofol during ECT. Patients treated with ECT under anesthesia with etomidate (n = 43) or propofol (n = 12) were retrospectively analyzed. Seizure variables (seizure duration, intensity, and threshold) and hemodynamic changes during ECT were assessed and recorded. Clinical responses to treatment were evaluated using the Clinical Global Impression scale and mood at discharge after the course of ECT. Adverse effects were also recorded. The demographic characteristics were similar between the two groups. There were no significant differences in Clinical Global Impression scale scores, mood at discharge, or adverse effects between the two groups (p > 0.05); however, etomidate was associated with significantly longer motor (42.0 vs. 23.65 s, p < 0.001) and electroencephalogram (51.8 vs. 33.5 s, p < 0.001) seizure durations than propofol. In conclusion, etomidate showed more favorable seizure profiles than propofol during ECT, while both agents were associated with similar clinical efficacy at discharge.
Introduction
Electroconvulsive therapy (ECT) artificially induces seizures by delivering electrical stimulation through electrodes attached to the scalp. This procedure is commonly performed to alleviate the symptoms of patients with severe or refractory depressive disorders that are uncontrolled by medications. It is performed under general anesthesia to ensure patient safety and tolerability. Various anesthetic agents, such as etomidate, propofol, ketamine, and thiopental, can be used for ECT anesthesia.
Anesthetic agents may affect seizure quality [1], and in some studies, seizure duration has also been associated with the clinical efficacy of ECT [2,3]. Most anesthetic agents, including propofol, thiopental, and midazolam, have anticonvulsant properties [4,5]; etomidate, by contrast, has been found to reduce the seizure threshold [6] and to provide significantly longer seizure durations than other anesthetic agents in ECT [7].
Several studies comparing the effects of anesthetic agents during ECT in patients with depressive disorders have focused on seizure-related variables rather than on clinical improvements. We hypothesized that patients administered etomidate during ECT would achieve better clinical outcomes than those administered propofol. Therefore, this retrospective analysis aimed to compare the clinical effects of etomidate and propofol based on seizure variables and hemodynamic responses in patients with depressive disorder undergoing ECT.
Study Design and Study Participants
In this retrospective study, patients who were diagnosed with major depressive disorder or depressive disorder and underwent ECT at Seoul National University Bundang Hospital between June 2003 and June 2019 were enrolled. The exclusion criteria were as follows: (1) patients who did not complete the course of ECT or had undergone ECT in the past 2 months; (2) patients who received both etomidate and propofol; and (3) patients who received other anesthetic agents during the course of ECT.
Anesthesia for ECT
All patients were monitored using pulse oximetry, electrocardiography, and noninvasive arterial pressure measurements throughout the ECT procedures. After premedication with 0.2 mg of glycopyrrolate, anesthesia was induced with 0.2-0.3 mg/kg of etomidate or 1-2 mg/kg of propofol. Succinylcholine (1 mg/kg) was used for neuromuscular blockade. Subsequently, ECT was performed using the Thymatron IV system (Somatics, LLC, Lake Bluff, IL, USA). The stimulus dosage was selected by the attending psychiatrist based on the guidelines of the institution. The stimulus charge started from 32 mC for female patients and from 48 mC for male patients [8]. If there was no seizure, or if its duration was shorter than 20 s, an additional stimulation with approximately 50% higher energy was delivered, as set out in the protocol of the guideline. In the subsequent session, the energy of the last stimulation in the previous session was chosen as the dose of the first stimulation.
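As an illustration only (a hypothetical sketch in Python of the dose-titration logic described above, not software used at the institution):

from typing import Optional

def initial_charge_mc(sex: str) -> float:
    """Starting stimulus charge per the institutional guideline: 32 mC (female), 48 mC (male)."""
    return 32.0 if sex == "female" else 48.0

def next_charge_mc(charge: float, seizure_duration_s: Optional[float]) -> float:
    """No seizure, or one shorter than 20 s, triggers restimulation with ~50% more energy;
    otherwise the charge is kept and also seeds the first stimulus of the next session."""
    if seizure_duration_s is None or seizure_duration_s < 20.0:
        return charge * 1.5
    return charge

# Example: a female patient whose first stimulation yields a 15 s seizure
charge = initial_charge_mc("female")   # 32 mC
charge = next_charge_mc(charge, 15.0)  # escalated to 48 mC within the session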
Outcomes
This study compared the effects of etomidate and propofol on clinical outcomes using the Clinical Global Impression (CGI) scale. For all patients undergoing ECT, the CGI scale was used by experienced psychiatrists to assess the severity of the illness. The CGI consists of a 7-point Likert scale with scores as follows: 1, normal, not at all ill; 2, borderline mentally ill; 3, mildly ill; 4, moderately ill; 5, markedly ill; 6, severely ill; and 7, extremely ill. CGI analysis was conducted at the time of admission to the hospital (before ECT) and discharge from the hospital (after ECT). The CGI score was determined based on the overall clinical appearance, including symptoms, behaviors, and functional impairments.
Secondary outcomes, including mood at discharge, full or partial remission, length of hospital stay, and adverse events related to ECT, were also analyzed. Adverse events, including amnesia, headache, anxiety, and insomnia, were recorded if any of these occurred.
ECT-related variables (doses and types of anesthetics, number of ECT sessions, and stimulus charges) and seizure-related variables (motor and electroencephalogram (EEG) seizure durations) were recorded. For stimulus charges and seizure durations, the total values were divided by the number of ECT sessions, because differing numbers of ECT sessions could otherwise bias the results.
Hemodynamic variables during the ECT (heart rate and blood pressure) were recorded. Hemodynamic responses were defined as maximal changes during ECT from baseline hemodynamic parameters, which were measured prior to the induction of anesthesia. The mean values of hemodynamic responses were calculated and compared.
Statistical Analyses
All statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS) software version 25.0 (SPSS Inc., Chicago, IL, USA). Continuous variables are presented as mean with standard deviation or median with interquartile range, depending on the normality of distribution. Student's t-test or Mann-Whitney U test was used for comparison. Categorical variables are presented as numbers with percentages, and the chi-squared test or Fisher's exact test was used for comparison. For missing values, mean values were substituted for missing values by using unconditional mean imputation.
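A minimal sketch of this analysis pipeline (our illustration; the DataFrame layout and column names are assumptions, not the authors' code):

import pandas as pd
from scipy import stats

# df is assumed to hold one row per patient, with a 'group' column
# ('etomidate' or 'propofol') and one column per outcome variable.
def compare_continuous(df: pd.DataFrame, col: str) -> float:
    """Student's t-test if both groups pass Shapiro-Wilk normality, else Mann-Whitney U."""
    a = df.loc[df["group"] == "etomidate", col].dropna()
    b = df.loc[df["group"] == "propofol", col].dropna()
    normal = stats.shapiro(a).pvalue > 0.05 and stats.shapiro(b).pvalue > 0.05
    result = stats.ttest_ind(a, b) if normal else stats.mannwhitneyu(a, b)
    return result.pvalue

def impute_unconditional_mean(df: pd.DataFrame, col: str) -> pd.DataFrame:
    """Unconditional mean imputation: missing values replaced by the column mean."""
    return df.assign(**{col: df[col].fillna(df[col].mean())})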
Results
In total, 90 patients were scheduled to undergo ECT between June 2003 and June 2019. Of these, four patients were excluded from the analysis because they canceled ECT after admission. Two patients who received thiopental during anesthesia induction and 27 patients who received more than two anesthetics during the ECT course were excluded. Two patients who had undergone ECT within the last 2 months were also excluded. Hence, 55 patients were included in the final analysis, of whom 43 were administered etomidate, whereas 12 received propofol during anesthetic induction ( Figure 1).
The demographic characteristics and baseline hemodynamic parameters are shown in Table 1. Based on the anesthetic administered, the patients were divided into etomidate and propofol groups. The data are presented as mean ± standard deviation or number (percentage). Abbreviations: BMI = body mass index; SSRI = selective serotonin reuptake inhibitor; SNRI = serotonin-norepinephrine reuptake inhibitor; TCA = tricyclic antidepressant; NDRI = norepinephrine-dopamine reuptake inhibitor; BDZ = benzodiazepine. * p < 0.05.

The CGI scores at admission and discharge are shown in Figure 2. CGI scores at admission were not available for six patients (three in each group); these patients were excluded from the CGI analysis.

The other clinical outcomes at discharge are summarized in Table 2. No significant differences in mood at discharge, remission, or length of hospital stay were found between the two groups. No patient in the etomidate group experienced anxiety, whereas two patients in the propofol group experienced anxiety after ECT (p = 0.044), a statistically significant difference.

The ECT parameters, including the number of ECT sessions, seizure duration, and stimulus charges, are described in Table 3. Some data were missing for seizure duration (seven patients in the etomidate group and two in the propofol group) and stimulus charge (one patient in the etomidate group) because of a lack of records; these missing values were imputed using unconditional mean imputation. Significant differences were observed in ECT-related parameters between the two groups. Patients in the etomidate group received fewer ECT sessions than those in the propofol group (7.6 vs. 10, p = 0.004). The motor (42.0 vs. 23.65 s, p < 0.001) and EEG (51.8 vs. 33.5 s, p < 0.001) seizure durations per session were significantly longer in the etomidate group than in the propofol group, and the mean stimulus charge per session was significantly lower in the etomidate group (211.2 vs. 394.75 mC, p < 0.001).
Discussion

This retrospective analysis showed that etomidate demonstrated clinical outcomes similar to those of propofol, despite more favorable seizure profiles (longer seizure duration and lower stimulus charge), in patients with depressive disorder undergoing ECT. Hemodynamic responses to the ECT stimulus were significantly higher in the etomidate group than in the propofol group.
To the best of our knowledge, this is the first study using the CGI scale to validate the efficacy of ECT in patients with depression. The CGI is a quick, simple, and easy tool with which experienced physicians can evaluate a patient's current condition based on an overall assessment of mental status, symptoms, behaviors, and functional impairment [9]. It is a reliable tool for the evaluation of psychiatric disorders [10]. There are two types of CGI: CGI-severity (CGI-S) and CGI-improvement (CGI-I) [11]. CGI-S assesses the current severity of illness, while CGI-I evaluates the degree of change before and after treatment [12]. Both use a seven-point Likert scale. In our institution, CGI-S, but not CGI-I, was routinely used and recorded when patients were discharged. This can be attributed to patient characteristics and the medical system: patients in this study were hospitalized for approximately 30 days, but the resident physicians in charge of the medical records rotated every month, which made it difficult to trace and compare the severity of illness over a hospital stay.
No significant difference in the CGI scores at discharge was seen between the two groups. These results are consistent with those of previous studies [13,14]. Eranti et al. investigated the effects of anesthetic agents on the therapeutic response to ECT [13]. Responses were classified as complete recovery, major improvement, minor improvement, no change, or worsening; the choice of anesthetic agent, including etomidate and propofol, had no influence on the response rate after ECT. Graveland et al. [14] analyzed patients with depression undergoing ECT using the Hamilton Rating Scale for Depression (HAM-D) and the Montgomery-Asberg Depression Rating Scale (MADRS). No significant differences in remission and response rates, or in the reduction of HAM-D or MADRS scores, were found regardless of the anesthetic agent used. To assess the severity of depression, those authors employed specific tools, the HAM-D and MADRS, which focus on depressive symptoms. In contrast, the CGI used in our study assesses the overall and general condition [15]. Nevertheless, the CGI shows performance similar to that of the HAM-D and MADRS [16], although it is a more conservative tool [15].
Seizure durations were significantly longer in the etomidate group than in the propofol group. This is consistent with previous studies, which reported that seizure duration was prolonged with etomidate compared to propofol [17,18]. Among various anesthetic agents, etomidate induces the longest seizure duration, whereas propofol induces the shortest [19]. This agrees well with the ranking of anesthetic agents by their effect on the seizure threshold and confirms our findings. Etomidate may act as a proconvulsant [20], whereas propofol may act as an anticonvulsant [21]; this may be why the stimulus charges were significantly lower in the etomidate group than in the propofol group. The findings of this study imply that seizure duration is not correlated with clinical improvement in patients with depression undergoing ECT.
It is worth noting that patients in the etomidate group required significantly fewer ECT sessions than those in the propofol group. This may imply that patients administered propofol need more treatment sessions to achieve clinical improvement comparable to that of patients administered etomidate. However, these findings need to be interpreted with caution because of the relatively small number of patients in the propofol group; the difference might be a statistical bias resulting from the smaller sample size rather than a true difference.
The increase in hemodynamic parameters, such as heart rate and blood pressure, was more prominent after etomidate administration. This agrees with several studies, which concluded that propofol provided better hemodynamic stability than etomidate during ECT procedures [22][23][24]. ECT stimulates the sympathetic nervous system, inducing unstable hemodynamic responses such as tachycardia and hypertension [25]. Etomidate has little depressant effect on cardiovascular function, so the hemodynamic response associated with ECT remains largely unopposed [26]. In contrast, given the cardiovascular depression seen with propofol, the hemodynamic response to ECT can be attenuated after propofol administration [26].
Limitations
The present study has some limitations. First, since this was a retrospective study, we could not control for all confounding factors that might have affected the clinical outcomes. Second, the selection of anesthetics was not unified during the course of ECT in several patients. In our institution, anesthetic induction agents are selected at the discretion of the anesthesiologists or psychiatrists; consequently, a quarter of the patients were excluded from the analysis. Furthermore, etomidate was preferred because of its advantage with respect to seizure duration, which resulted in a difference in sample size between the groups. The uneven sample sizes could potentially lead to over- or under-interpretation of the data, reducing the reliability of our results. Third, the CGI was rated by several psychiatrists; hence, there is a possibility of inter-rater variability in the CGI assessments. In addition, the CGI was the only scale used to evaluate the patients in this retrospective study. Future studies including additional evaluation scales, such as subjective scales, and incorporating metrics such as time to recovery are needed to enable a more detailed comparison of the effects of etomidate and propofol. A large-scale prospective randomized clinical trial should be conducted to evaluate the clinical effects of etomidate use during ECT in patients with major psychiatric disorders.
Conclusions
This retrospective study analyzed and compared the clinical outcomes of etomidate and propofol during ECT in patients with major depressive disorders. Etomidate showed clinical outcomes similar to those of propofol, despite having more favorable seizure profiles.
Influence of Islamic Service Behavior on Patient Loyalty
Introduction: Patient loyalty is an attitude that reflects a patient's commitment to repeatedly using a health service to meet their medical needs. This study aims to determine the effect of Islamic service behavior on the loyalty of uninsured patients in the obstetrics and gynecology department at Sayang General Hospital. Methods: This is a descriptive and verificative study. The study population consisted of uninsured obstetrics and gynecology patients at Sayang General Hospital, with a total sample of 289 people selected using an accidental sampling technique. The data were collected with a questionnaire. The analytical methods used were descriptive analysis with data tabulation and verificative analysis with regression analysis to test the hypothesis. Result: The T test showed a significance of <0.001. Conclusion: Islamic service behavior had a significant influence on the loyalty of uninsured patients at Sayang General Hospital.
I. INTRODUCTION
In the competition among health services, providers are required to always give good service in order to improve their quality. One effort that can be made to win this kind of competition is to provide good and optimal services, which can dramatically increase efficiency.
The paradigm of hospital health services has undergone a fundamental change. A hospital is a business entity with many strategic business units and therefore requires handling with a proper management concept.
The concept of Islam teaches that in providing business services, whether in the form of goods or services, entrepreneurs must have good behavior, this is evident in the Koran: "And say: Work; so Allah will see your work and (so will) His Apostle and the believers; and you shall be brought back to the Knower of the Unseen and the seen, then He will inform you of what you did" (QS At-Taubah: 105).
In the hadith of the Prophet Muhammad: "Allah really likes His servants who, when doing work, do it in all seriousness (Itqan)" (HR: Baihaqi no.5080).
In later work, Berry, Parasuraman, and Zeithaml found that their original ten dimensions of service quality can be simplified into five measurement criteria: responsiveness, tangibles, reliability, assurance, and empathy. In Islam, an example is given by Rasulullah SAW as a businessman in establishing trade relations and providing services to his customers. His good service behavior made Rasulullah SAW a successful businessman known as Al-Amin. This is due to the four traits implemented in his business services, namely Shiddiq, Amanah, Fathonah, and Tabligh, as well as the physical evidence in trade that made customers more confident in collaborating with Rasulullah SAW.
Sayang General Hospital is one of the public hospitals in Cianjur Regency. The increasing needs and demands of the community for quality midwifery services become a challenge as well as an opportunity for Sayang General Hospital to continue to develop and improve the behavior of its services, especially in terms of human resources.
One phenomenon related to service behavior is the vision of Sayang General Hospital, "a more advanced and religious hospital in health services," with the hospital philosophy "comprehensive service and akhlakul karimah morality" and one of its missions being "increasing the availability of professional human resources with akhlakul karimah morality." Regarding loyalty in the obstetrics and gynecology unit at Sayang General Hospital, the results of primary interviews (patients visiting the obstetric clinic for control) and secondary interviews (patients who had been treated at Sayang General Hospital) show that the number of uninsured patient visits is almost the same every month.
Therefore, the researchers are interested in measuring how much the quality of Islamic service behavior affects loyalty among uninsured patients in the midwifery service at Sayang General Hospital, and whether its level is in line with the expectations of patients as service buyers, especially in the field of midwifery services.
II. METHODS
The type of research used in this study is descriptive and verificative. The independent variable is Islamic service behavior (X) (with the dimensions of shiddiq, amanah, fathonah, tabligh, and tangible), and the dependent variable is patient loyalty (Y).
The sampling technique used was accidental sampling. Based on the Slovin formula, 286 people were sampled (the formula is sketched after the criteria below), with the following inclusion and exclusion criteria:

A. Inclusion Criteria

Outpatients in the midwifery clinic of Sayang General Hospital.
Patients without insurance.
Patients who are not in an emergency and can communicate properly.

B. Exclusion Criteria

Outpatients from non-midwifery polyclinics at Sayang General Hospital.
Patients with insurance, for example BPJS, jampersal, and so on.
Patients in a critical condition who cannot communicate properly.
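For reference, the Slovin formula mentioned above determines the sample size n from a population size N and a margin of error e; in its standard form (the population size and margin used by the authors are not stated here, so the numbers below are purely illustrative):

$n = \dfrac{N}{1 + N e^{2}}$

For example, with e = 0.05, a population of about 1000 patients would give n = 1000 / (1 + 1000 × 0.0025) ≈ 286, matching the sample size reported above.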
Data were analyzed using simple linear regression analysis (T test), preceded by a normality test and a heteroscedasticity test, together with a coefficient of determination test.
III. RESULTS
Characteristics of respondents in this study were divided into three categories: age, education, and occupation. In Figure 2, it can be seen that 37 respondents (12.8%) graduated from elementary school, 40 (13.8%) from junior high school, 200 (69.2%) from high school, and 12 (4.2%) from a university or academy.
Figure 3 shows that 1 respondent (0.3%) works as a civil servant, 15 (5.2%) are traders, 2 (0.7%) are farmers, 26 (9.0%) are entrepreneurs, 39 (13.5%) are factory laborers, 196 (67.8%) are housewives, 5 (1.7%) are unemployed, and 5 (1.7%) have other occupations.
A. Evaluation of Respondents Based on Islamic Service Behavioral Dimensions
The researchers distributed questionnaires to 289 participants to capture respondents' opinions on Islamic service behavior and patient loyalty. The questionnaire consists of statement items with four alternative answer choices and corresponding scores. Respondents' responses to the Shiddiq dimension are represented by statements 1 (one) to 7 (seven). The results of processing the statement items contained in this variable can be seen in Table 2.
Calculations:

The total number of respondents is 289.

Average percentage = (total score of the dimension ÷ (number of items per dimension × highest score value × number of respondents)) × 100%.

The Shiddiq dimension has 7 statement items with a total score of 6725, so the average percentage is (6725 ÷ (4 × 7 × 289)) × 100% = (6725 ÷ 8092) × 100% = 83.11%.
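A small Python sketch of this scoring rule (our illustration; only the Shiddiq total of 6725 is given explicitly in the text):

def average_percentage(total_score: int, n_items: int,
                       max_score: int = 4, n_respondents: int = 289) -> float:
    """Average percentage: achieved score over the maximum attainable score."""
    return 100.0 * total_score / (n_items * max_score * n_respondents)

# Shiddiq: 7 items, summed score 6725, 289 respondents -> 83.11%
print(round(average_percentage(6725, 7), 2))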
The average percentage of the Shiddiq dimension is 83.11%; placed on the continuum line of average values, it is classified as Excellent, meaning that the indicators of this sub-variable are, overall, implemented very well. The same holds for the other dimensions: Amanah scores 85.12%, Fathonah 83.75%, Tabligh 84.64%, and Tangible 82.85%, each classified as Excellent on the continuum line, so the indicators of these sub-variables are also implemented very well. The Loyalty dimension has an average percentage of 84.73% and is likewise classified as Excellent, meaning that its indicators are implemented very well.
B. Simple Linear Regression Analysis
There are two variables, measured using instruments developed on the basis of various supporting theories, with interval-scale scores. These variables are assigned the following symbols:

X = Islamic service behavior
Y = loyalty
1) Classical assumption tests: To validate the regression model in this study, classical assumption tests were performed, namely a normality test and a heteroscedasticity test.
a) Normality test. According to Sarwono and Nursalim [1], a normality test is required to determine whether data are normally distributed. In this study, normality of the descriptive data was assessed using a scatter plot chart; the analysis was performed with the SPSS 23.0 program, with the following results:
Based on the above chart, the data points spread along the diagonal line; it can therefore be concluded that the data are normally distributed and this research model meets the normality test.
b) Heteroscedasticity test: This test examines whether, in the regression model, the residual variance is unequal from one observation to another. No heteroscedasticity was found in the regression model of this study; thus, the homoscedasticity assumption is met and the regression model is feasible for testing.
C. Testing the Hypothesis of Islamic Service Behavior (X) on Loyalty (Y)
Analysis with the T test showed the following results. The resulting simple linear regression equation is Y' = -3.896 + 0.201X. From the table, the correlation is 0.796; based on the guidelines for interpreting the correlation coefficient (Sugiyono, 2013), the correlation between the variables falls into the strong category.
D. Coefficient of Determination
To see the effect of independent variables outside of X, we examine the SPSS model summary output below, where the value of R Square is 0.634. This number is used to calculate the coefficient of determination (KD): KD = R Square × 100% = 63.4%, meaning that 63.4% of the variation in loyalty is explained by Islamic service behavior and the remaining 36.6% by other variables.
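A minimal sketch of the reported model (our illustration using the published coefficients; the raw questionnaire data are not available here, and the example input score is hypothetical):

# Reported model (Tables 9 and 11): Y' = -3.896 + 0.201 X, with R^2 = 0.634
a, b, r_squared = -3.896, 0.201, 0.634

def predicted_loyalty(x_score: float) -> float:
    """Predicted loyalty score Y' for an Islamic service behavior score X."""
    return a + b * x_score

print(predicted_loyalty(100.0))        # ~16.2 for a hypothetical X score of 100
print(f"KD = {r_squared * 100:.1f}%")  # 63.4% of loyalty variance explained by X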
IV. DISCUSSION
In Figure 1, it can be seen that the largest number of respondents are from the reproductive age group, which is consistent with maternity patients making up the highest percentage, while the rest are gynecological patients with an average age above 35 years. This is supported by the statement of Umar that physiological abilities can decline between the ages of 30 and 45 years [2]; in general, the human body can experience a decrease in ability of about 1% each year with age [3]. If patients in this age range often receive good health services, their loyalty will increase. Loyalty is a form of customer commitment that can be demonstrated through a positive attitude during repeated transactions.
Most respondents have a senior high school education, as shown in Figure 2. According to Hartutik and Ratri [4], the level of education can affect respondents' insights, as patients hold a certain level of expectation for the services they receive relative to the costs incurred. In this case, someone with a lower level of expectation will feel more satisfied with the service received.
We can see in Figure 3 that the largest group of respondents is housewives. One factor that influences patient loyalty is experience: receiving maximal service from a hospital unit shapes the patient's attitude. Figure 4 concerns the Shiddiq dimension of service behavior, meaning the ability to deliver, honestly and correctly, the type of service that has been promised to the customer or patient. Every employee is expected to act honestly and correctly within their knowledge, skills, independence, mastery, and professionalism, so that their work produces satisfying service, without complaints or exaggerated impressions of the services received by the public. The results show that the majority of respondents are in the good category, which supports Shiddiq behavior: health workers such as nurses and hospital staff serve patients with governance in accordance with sharia principles, and the health workers convey correct information honestly. Meanwhile, some respondents chose the "inadequate" or "very inadequate" categories, considering that promises of service made to patients had not been kept; this needs attention. In this context, Allah also wants each of His people to keep the promises they have made, as stated in QS An-Nahl/16:91. The verse commands: fulfill the covenant of Allah when you have taken it, and do not break your oaths after their confirmation while you have made Allah a witness; indeed, Allah knows what you do, whether intention, speech, or action, and whether open or concealed. It can be concluded that Shiddiq behavior is a very important factor in patients' willingness to be treated or hospitalized when sick. The hospital must therefore improve its services, including the times of patient registration, treatment, and the end of treatment, so that patients' hopes can be fulfilled. This is understandable because the characteristics of people seeking treatment differ from those of healthy people: sick people need services that are more focused in all aspects. Thus, Shiddiq behavior will determine patient loyalty in the long run.
Lovelock and Wright suggest that there must be a match between the medical services provided and what is needed from time to time [5]. If the services provided have not been able to satisfy the patient, this will result in a low level of patient loyalty. Figure 5 concerns the Amanah dimension: service requires trustworthy behavior and certainty in the service provided. The certainty of a service is largely determined by the assurance given by the employee providing it, so that the recipient feels satisfied and confident that all service matters are handled completely, in accordance with the speed, accuracy, convenience, smoothness, and quality of the services provided.
Employees' reliability and their ability to generate trust reflect a core commitment to providing satisfaction to customers, offering guarantees that reduce the risk of loss for customers through assured quality of work. The results show that most patients chose the good and excellent categories; supporting this, patients feel that doctors and nurses are transparent and fair in handling patients, so they feel safe while undergoing treatment at Sayang General Hospital. Patients also feel that the behavior exhibited by doctors and midwives conforms to standards and can be trusted.
The results of this study support the opinion of Parasuraman, which states that every form of service requires certainty in the service provided [6]. The certainty of a service is largely determined by the assurance given by the employee providing it, so that the recipient feels satisfied and trusts that all services are carried out completely, in accordance with the speed, accuracy, convenience, smoothness, and quality provided. This also supports the concepts and theories put forward by Oemi, who states that the basis of a service in establishing a partnership is the trust fostered in consumers, so that the loyalty given greatly affects the level of customer satisfaction [7]. Consumers will be confident in the services provided if aspects of service quality are met, namely a convincing attitude, demonstrated motivation, and suitability in the various services provided. Figure 6 concerns the Fathonah dimension: providing services requires intelligence from employees to serve the community in accordance with their level of absorption and understanding. This requires wise, detailed explanation, fostering, directing, and persuading to address all procedures and work mechanisms that apply in a health facility, so that the service receives a positive response. Parasuraman, Zeithaml, and Berry, as cited in Tjiptono, state that the skills and readiness of employees to help customers and deliver services are better when the service is clear, fast, and understandable, easily accessible, with low waiting times and a willingness to listen to complaints [8].
The results of this study indicate that the majority of respondents chose "good". Factors that support Fathonah are the smartness, wisdom, and intelligence shown by doctors and nurses, so that patients feel comfortable and feel that doctors and nurses are able to take responsibility in carrying out their services, including ease and accuracy of administration, attention to quality of service, and willingness to provide timely services. The willingness of health workers to provide assistance to patients is explained in one of the verses of the Qur'an, QS At-Taubah/9:71.
Meaning: "The believing men and believing women are allies of one another. They enjoin what is right and forbid what is wrong, establish prayer, give zakah, and obey Allah and His Messenger. Those, Allah will have mercy upon them. Indeed, Allah is Exalted in Might and Wise." (QS At-Taubah/9:71)
In the Qur'anic interpretation compiled by the Indonesian Ministry of Religion, Surah At-Taubah verse 71 is explained as follows: believers, both men and women, defend one another. As believers, they defend other believers because of their religious bond, and all the more so if the believer is their brother, because of blood relations. Women, as believers, also helped defend their brothers from among the male believers because of this religious bond, in accordance with their feminine nature; the wives of the Prophet and the wives of the companions joined the battlefield together with the Islamic army to provide drinking water and prepare food, because the believers are of their own kind, bound by a rope of faith that evokes a sense of brotherhood, unity, mutual love, and mutual help. All of this is driven by the loyal spirit of the companions, which makes them like one body, or one wall structure whose parts mutually reinforce one another, in upholding justice and raising the word of Allah. In providing services, health workers must likewise be fair, responsible, conscientious, and timely toward patients, always provide assistance, and foster good relationships with patients and their families, so that the patients' confidence in the health workers and the hospital grows.
Meanwhile, some patients chose "inadequate" because of the lack of timeliness of the services provided by doctors and administrative officers, for example delays in the doctor visiting the patients in their care. This accords with the opinion of Parasuraman, which states that, in providing service, every employee must prioritize the service aspects that strongly affect the people receiving it; this requires employees' responsiveness in serving the community according to their level of absorption and understanding, including discrepancies in service that recipients may not be aware of [7]. It requires wise, detailed explanation, fostering, directing, and persuading to address all procedures and work mechanisms that apply in an organization, so that the service receives a positive response. Figure 7 concerns the Tabligh dimension, good delivery: service runs smoothly and properly when every party concerned communicates decently and ethically, especially when assisting childbirth with the intention of bringing forth the next generation of the nation and religion, with the patients' lives always at stake. Allah Almighty says in QS Al-Maidah/5:32.
Meaning: "And whoever saves one, it is as if he had saved mankind entirely." (QS Al-Maidah/5:32). This verse explains that providing good services, or assisting in childbirth with good services, has a truly extraordinary value of kindness, both for hospitals and for health workers in the future.
Rasulullah SAW said, in the hadith narrated by the companion Anas bin Malik RA: "A person's faith is not perfect until he loves his brother as he loves himself" (HR Bukhari).
The essence of this hadith is: treat your brother as you treat yourself. We certainly want to be treated well, to be served well, and to be served quickly; so apply those desires when you serve others. Tjiptono also defines customer value as the emotional bond that arises between the customer and the producer after the customer uses the company's products and services and finds that they provide added value [8].
Figure 8 concerns the Tangible dimension of service, that is, the concrete, physically visible form of actualization that can be seen or used in accordance with its function, helping those who seek service to feel satisfied with the service they receive and thereby demonstrating the work performance behind the services provided [7].
According to Zeithaml, as cited in Fandy Tjiptono, tangibles are direct evidence that includes physical facilities, equipment, employees, and means of communication [9]. Service is a process consisting of many activities that involve interactions between customers and service providers, the purpose of which is to satisfy customers' needs. Physical facilities are the physical environment in which services are created and in which they directly interact with patients. Because services cannot be touched, patients often look at visible physical cues, or evidence, to evaluate the service before and after consuming it [9].
The results of this study are consistent with the theory put forward by Parasuraman et al., which states that the tangible dimensions offered by a company to customers, such as physical facilities, equipment, and employee friendliness, affect the level of customer loyalty [7].
The results are also in line with the opinion of Zeithaml, who states that tangibles significantly influence the patient's decision to buy and use the services offered [6]. The majority of respondents chose the good category; supporting tangible (physical evidence) factors include the availability of patient hijabs, the neat and polite appearance of doctors, nurses, and hospital staff, the availability of religious facilities, and toilets that do not face the Qibla. In addition, some respondents were in the "not good" category: they considered the cleanliness and neatness of the treatment rooms still lacking and felt the hospital had not kept up with the times or moved slowly, for example because the building is too old; this certainly requires more attention.
This study is not in line with the research conducted by Kasih, entitled "Analysis of the Relationship between Service Quality and Loyalty of Non-Recipient BPJS Patients in Inpatients at Mardi Rahayu Kudus Hospital" [10], which states that there is no relationship between the tangible dimension (direct evidence) and patient loyalty.
This study is in line with the research conducted by Sulni et al., entitled "The Relationship between Quality of Health Services and Patient Loyalty at the Baranti Health Center in Sidrap Regency" [11], which states that there is a significant relationship between tangibility (direct evidence) and patient loyalty. In that study, good physical evidence of service at the Baranti Health Center consisted of clean rooms supported by the availability of supporting facilities, the comfort of the rooms, and the neat, clean appearance of the health workers, whereas poor physical evidence of service was due to the stuffy waiting room, which made patients feel uncomfortable.
Table 7 shows that all dimensions fall into the Excellent category, meaning that Islamic service behavior at Sayang General Hospital is implemented very well; when the services provided are good, patients feel satisfied and comfortable, and loyalty results.
The results of this study are in line with Sukowati [12], who says that Islamic services are urgently needed by the outpatient and inpatient care staff of the Dr. Asmir Salatiga Army Hospital, because they can accelerate the recovery of patients and improve the quality of hospital services by upholding worship as an obligation and a responsibility when providing nursing services to patients. Meanwhile, according to the Director of Ibnu Sina Islamic Hospital Makassar, in his research Hafid said that services such as da'wah and spiritual guidance must be given to patients and staff of Ibnu Sina Hospital in order to improve the welfare of officers and their patients [13].
According to Sunawi, the rabbaniyah character, that is, belief in and surrender of all things only to Allah SWT, is one of the characteristics that distinguish Islamic hospital services from those of non-Islamic hospitals [12]. As for service orientation, non-Islamic hospitals also continue to use elements such as the characters of morality, waqi'iyah, and insaniyah, but there are still differences in how these are applied and in the scope of their development.
In Figure 9, the loyalty variable also appears Excellent. According to Sharon and Santoso, if the services provided are good, the level of patient loyalty is affected, and service quality has a positive effect on patient loyalty at Tugurejo Hospital [14]. Health providers can build patient loyalty by optimizing convincing health services: doctors and midwives who can explain the type of disease and the appropriate care and treatment, convince patients, and provide clear information will make patients treated at the hospital feel assured during their stay.
Patients' interpersonal experience with doctors, midwives, and hospital service behavior can develop their loyalty to the service providers. Service behavior itself has an impact on service quality in increasing patient loyalty; it can be said that service behavior is positively related to overall service quality, and there is a significant correlation between the two [15].
If patients already have a sense of belonging and a good emotional bond with the hospital, they usually do not want to move to another hospital, despite price changes at the hospital they use. They are comfortable with, trust, and are sympathetic to the hospital, and will readily promote it to their families and others, which indirectly benefits the hospital. Table 9 shows a constant of -3.896, meaning that if Islamic service behavior (X) is 0, loyalty (Y') is negative, namely -3.896. The regression coefficient of variable X is 0.201; since the coefficient is positive, there is a positive relationship between Islamic service behavior and loyalty: the better the Islamic service behavior, the more patient loyalty increases.
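Read as a fitted line, the Table 9 coefficients give the following (the predictor value X = 50 is a hypothetical questionnaire score used only for illustration, not a figure from this study):

$$ Y' = -3.896 + 0.201X, \qquad Y'\big|_{X=50} = -3.896 + 0.201 \times 50 = 6.154. $$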
From the explanation above, the hypothesis is decided based on the significance value with the following test criteria: if the resulting significance value is greater than the predetermined significance level, H0 is accepted; if the resulting significance value is smaller than the predetermined significance level, H0 is rejected.
In Table 9 the significance value is smaller than the specified significance level (0.001 < 0.05), and therefore H0 is rejected. This means that the Islamic service behavior variable (X) has a significant influence on the loyalty (Y) of uninsured patients in the midwifery unit at Sayang General Hospital. This may happen to patients in Cianjur because Cianjur has a philosophy of ngaos, mamaos, and maenpo that reminds us all of the three aspects of the perfection of life. Ngaos is an Islamic tradition that colors the atmosphere and nuances of Cianjur, whose people cling to religiosity. Cianjur's image as a religious area is said to have been pioneered since the region was founded around 1677, when it was built by scholars and students of the past who passionately developed Islamic culture; that is why Cianjur was also nicknamed the santri city. This Islamic spirit is always bequeathed to the next generation in Cianjur, and it is in line with the vision, mission, and philosophy of the Sayang General Hospital. However, this cannot be generalized to the condition of other societies.
According to Fattah, a service will be accepted by patients if the work is good, so that patients feel satisfied [16]. Patients who are satisfied with a service will return to use it when they need it, and thus their loyalty to the hospital can grow. If hospital staff perform midwifery services for patients seriously, the result of their work will be good. Islamic service behavior is part of the services available at Sayang Hospital; when the services provided are good, the level of patient loyalty is affected. Because the service implemented at Sayang Hospital is Islamic, hospitalized patients feel comfortable: they are cared for not only physically but also spiritually.
Based on Table 11, the Islamic service behavior variable simultaneously explains 0.634 (63.4%) of the variation in loyalty, while the remaining 0.366 (36.6%) is determined by other variables that could not be examined in this study, for example the internal attitude of each patient while being treated, such as the attitude of istiqomah. The Khulafaur Rasyidin, the caliphs who succeeded the Prophet Muhammad SAW in leading the Muslims, each had their own definition of istiqomah: Abu Bakar Ash-Shiddiq R.A. said that istiqomah is the behavior of someone who does not associate Allah with others, that is, does not commit shirk; Umar bin Khattab R.A. interpreted istiqomah as holding on to one command and not doing anything that is forbidden; Uthman bin Affan R.A. said that istiqomah means being sincere; and Ali ibn Abi Talib R.A. said that istiqomah means carrying out the obligations ordered by Allah SWT. This attitude of istiqomah can be an internal factor of patients under treatment at the hospital, and so it can also affect their loyalty to the hospital.
As a case study, this research has limitations. These include possible response bias, because respondents may have been uncomfortable or unsure of the answers and therefore did not provide accurate information when filling out the questionnaire; the short implementation time, which made the study seem hurried and may have left respondents with suboptimal information; and the researchers' inability to directly observe a number of aspects of patient loyalty. In addition, interviewer bias may have occurred when questions about this research were not conveyed properly, and poor communication during the interviews may have left respondents not understanding the questions asked.
V. CONCLUSION
Based on the results of the research conducted at Sayang General Hospital, and although this cannot be generalized to the condition of other societies, it can be concluded that Islamic service behavior has a significant effect on the loyalty of uninsured patients in midwifery services at the Sayang General Hospital. This loyalty can be seen in patients who go to the hospital when they have health complaints, visit for check-ups, feel relieved to be treated at the hospital, want to return to the hospital, trust the quality of hospital services, are not interested in other hospitals, and recommend Sayang General Hospital to others for treatment. This is very beneficial because it can reduce marketing costs, transaction costs, customer turnover costs, and failure costs. It is achieved by applying the values of Shiddiq, Amanah, Fathonah, and Tabligh, and by maintaining tangible quality in every aspect of the obstetrics and gynecology department of the Sayang General Hospital, which reflects the implementation of the character of Rasulullah SAW in rendering services in the hospital.
Almost everywhere convergence of Bochner-Riesz means for the Hermite operators
Let $H = -\Delta + |x|^2$ be the Hermite operator in ${\mathbb R}^n$. In this paper we study almost everywhere convergence of Bochner-Riesz means for the Hermite operator $H$. We prove that $$ \lim\limits_{R\to \infty} S_R^{\lambda}(H) f(x)=f(x) \ \ \text{a.e.} $$ for $f\in L^p(\mathbb R^n)$ provided that $p\geq 2$ and $\lambda>2^{-1}\max\big\{ n\big({1/2}-{1/p}\big)-{1/2}, \, 0\big\}.$ Surprisingly, for dimensions $n\geq 2$ our result reduces the borderline summability index $\lambda$ for a.e. convergence to only \emph{half} of the critical index required for a.e. convergence of the classical Bochner-Riesz means for the Laplacian. For the dimension $n = 1$, we obtain that $ \lim\limits_{R\to \infty} S_R^{\lambda}(H) f(x)=f(x)$ a.e. for $f\in L^p({\mathbb R})$ with $ p\geq 2$ whenever $\lambda>0$.
Introduction
Convergence of Bochner-Riesz means of the Fourier transform in the $L^p$ spaces is one of the most fundamental problems in classical harmonic analysis. For $\lambda \ge 0$ and $R > 0$, the Bochner-Riesz means of the Fourier transform on $\mathbb{R}^n$ are defined by
$$ S_R^\lambda f(x) = \frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} \Big( 1 - \frac{|\xi|^2}{R^2} \Big)_+^\lambda \widehat{f}(\xi)\, e^{i x \cdot \xi} \, d\xi. \tag{1.1} $$
Here $t_+ = \max\{0, t\}$ for $t \in \mathbb{R}$ and $\widehat{f}$ denotes the Fourier transform of $f$. The convergence of $S_R^\lambda f \to f$ in $L^p$-norm as $R \to \infty$ is equivalent to the boundedness of $S^\lambda := S_1^\lambda$ in $L^p(\mathbb{R}^n)$, and the longstanding open problem known as the Bochner-Riesz conjecture is that for $1 \le p \le \infty$ and $p \neq 2$, $S^\lambda$ is bounded on $L^p(\mathbb{R}^n)$ if and only if
$$ \lambda > \lambda(p) := \max\Big\{ n \Big| \frac{1}{2} - \frac{1}{p} \Big| - \frac{1}{2}, \, 0 \Big\}. \tag{1.2} $$
It was shown by Herz that the condition (1.2) on $\lambda$ is necessary for $L^p$ boundedness of $S^\lambda$; see [24]. Carleson and Sjölin [10] proved the conjecture when $n = 2$. Afterward, substantial progress has been made in higher dimensions; see [5,23,31,40,41] and references therein. However, the conjecture still remains open for $n \ge 3$. Concerning pointwise convergence, Carbery, Rubio de Francia and Vega [8] showed that for any function $f \in L^p(\mathbb{R}^n)$, $\lim_{R\to\infty} S_R^\lambda f(x) = f(x)$ a.e. provided $p \ge 2$ and $\lambda > \lambda(p)$. When $n = 2$ the results were previously obtained by Carbery [7], who proved the sharp $L^p$ boundedness of the maximal Bochner-Riesz means. Also, see [14] for an earlier partial result based on the maximal Bochner-Riesz estimate. Remarkably, the result by Carbery et al. [8] settled the a.e. problem up to the sharp index $\lambda(p)$ for $2 \le p \le \infty$. There are also results at the critical exponent, i.e., $\lambda = \lambda(p)$ (for example, see [1,33]). Almost everywhere convergence of $S_R^\lambda f$ with $f \in L^p$, $1 < p < 2$, exhibits a different nature, and few results are known in this direction except for $n = 2$ ([34,38,39]).
Bochner-Riesz means for the Hermite operator. In this paper we consider almost everywhere convergence of Bochner-Riesz means for the Hermite operator $H$ on $\mathbb{R}^n$, which is defined by
$$ H = -\Delta + |x|^2. \tag{1.3} $$
The operator $H$ is non-negative and selfadjoint with respect to the Lebesgue measure on $\mathbb{R}^n$. For each non-negative integer $k$, the Hermite polynomials $H_k(t)$ on $\mathbb{R}$ are defined by $H_k(t) = (-1)^k e^{t^2} \frac{d^k}{dt^k} e^{-t^2}$, and the Hermite functions $h_k(t) := (2^k k! \sqrt{\pi})^{-1/2} H_k(t) e^{-t^2/2}$, $k = 0, 1, 2, \dots$, form an orthonormal basis of $L^2(\mathbb{R})$. For any multiindex $\mu \in \mathbb{N}_0^n$, the $n$-dimensional Hermite functions are given by tensor products of the one-dimensional Hermite functions:
$$ \Phi_\mu(x) = \prod_{i=1}^n h_{\mu_i}(x_i), \qquad \mu = (\mu_1, \cdots, \mu_n). \tag{1.4} $$
Then the functions $\Phi_\mu$ are eigenfunctions of the Hermite operator with eigenvalue $2|\mu| + n$, and $\{\Phi_\mu\}_{\mu \in \mathbb{N}_0^n}$ form a complete orthonormal system in $L^2(\mathbb{R}^n)$. Thus every $f \in L^2(\mathbb{R}^n)$ has the Hermite expansion
$$ f = \sum_{k=0}^\infty P_k f, \tag{1.5} $$
where $P_k$ denotes the Hermite projection operator given by
$$ P_k f = \sum_{|\mu| = k} \langle f, \Phi_\mu \rangle \Phi_\mu. \tag{1.6} $$
For $R > 0$ the Bochner-Riesz means for $H$ of order $\lambda \ge 0$ are defined by
$$ S_R^\lambda(H) f = \sum_{k=0}^\infty \Big( 1 - \frac{2k+n}{R^2} \Big)_+^\lambda P_k f. \tag{1.7} $$
In one dimension, it was known from [2,42] that if $\lambda > 1/6$, $S_R^\lambda(H)$ is uniformly bounded on $L^p$, $1 \le p \le \infty$, and that for $1/6 > \lambda \ge 0$ and $1 \le p \le \infty$, $S_R^\lambda(H)$ is uniformly bounded on $L^p(\mathbb{R})$ if and only if $\lambda > (2/3)|1/p - 1/2| - 1/6$. In higher dimensions ($n \ge 2$) the $L^p$ boundedness of $S_R^\lambda(H)$ is different, and for $\lambda > (n-1)/2$ Thangavelu [43] showed the uniform boundedness of $S_R^\lambda(H)$ on $L^p$, $1 \le p \le \infty$. In particular, $S_R^\lambda(H) f$ converges to $f$ in $L^1(\mathbb{R}^n)$ if and only if $\lambda > (n-1)/2$. For $0 \le \lambda \le (n-1)/2$ and $1 \le p \le \infty$, $p \neq 2$, it still seems natural to conjecture that the $S_R^\lambda(H)$ are uniformly bounded on $L^p(\mathbb{R}^n)$ if and only if $\lambda > \lambda(p)$ (see [46, p.259]). Thangavelu also showed $\|S_R^\lambda(H) f\|_p \le C \|f\|_p$ if and only if $\lambda > \lambda(p)$ under the assumption that $f$ is radial; thus the condition $\lambda > \lambda(p)$ is necessary for $L^p$ boundedness of $S_R^\lambda(H)$. The necessity of the condition $\lambda > \lambda(p)$ can also be shown by the transplantation result in [29], which deduces the $L^p$ boundedness of $S_R^\lambda(H)$ from that of $S_R^\lambda$. Karadzhov [27] verified the conjecture in the range $1 \le p \le 2n/(n+2)$; the boundedness for $p \in [2n/(n-2), \infty]$ follows by duality. However, it remains open whether the conjecture is true in the range $2n/(n+2) < p \le 2n/(n+1)$.
Almost everywhere convergence. Concerning a.e. convergence of $S_R^\lambda(H) f$, it is known from [42,43] (see also [45, Chapter 3]) that $\lim_{R\to\infty} S_R^\lambda(H) f(x) = f(x)$ a.e. for every $f \in L^p(\mathbb{R}^n)$ whenever $\lambda > (3n-2)/6$. Recently, Chen, Lee, Sikora and Yan [12] studied $L^p$ boundedness of the maximal Bochner-Riesz means for the Hermite operator $H$ on $\mathbb{R}^n$ for $n \ge 2$, that is to say, of $S_*^\lambda(H) f := \sup_{R>0} |S_R^\lambda(H) f|$, and it was shown that the operator $S_*^\lambda(H)$ is bounded on $L^p(\mathbb{R}^n)$ for $p$ and $\lambda$ satisfying (1.8). As a consequence, we have $\lim_{R\to\infty} S_R^\lambda(H) f(x) = f(x)$ a.e. for $f \in L^p(\mathbb{R}^n)$ and $p, \lambda$ satisfying (1.8). For more regarding the Hermite expansion (1.5) and the Bochner-Riesz means for the Hermite operator, we refer the reader to [13,19,28,30,43,44,46] and references therein.
The following is the main result of this paper, which gives a new range of $p$ and $\lambda$ for the a.e. convergence of $S_R^\lambda(H) f$.
As already mentioned, $S_R^\lambda(H)$ converges in $L^p$ only if $\lambda > \lambda(p)$. Surprisingly, our result tells us that only half of the critical summability index is needed in order to guarantee a.e. convergence of $S_R^\lambda(H) f$. Unlike for the classical Bochner-Riesz means, the critical indices for $L^p$ convergence and a.e. convergence for the Hermite operator do not match. Let us now recall from [9, pp.320-321] (also [33]) how the sharpness of the result in [8] can be justified. In order to study a.e. convergence of $S_R^\lambda f$ with $f \in L^p$, $S_R^\lambda f$ should be defined at least as a tempered distribution for $f \in L^p$. If so, by duality $S^\lambda$ is defined from the Schwartz class $\mathcal{S}$ to $L^p$. This implies that the convolution kernel $K^\lambda$ of $S^\lambda$ is in $L^{p'}$, so it follows that $\lambda > \lambda(p)$, because $K^\lambda \in L^{p'}$ if and only if $\lambda > \lambda(p)$. However, such an argument does not work for the Bochner-Riesz means for the Hermite operator, since $S_R^\lambda(H) f$ is well defined for any $f \in L^p$ and any $\lambda \ge 0$. For the present we do not have any evidence which supports the sharpness of the condition $\lambda > \lambda(p)/2$.
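For orientation, evaluating (1.2) at $p = \infty$ (a routine check of the gain just described):
$$ \lambda(\infty) = \frac{n}{2} - \frac{1}{2} = \frac{n-1}{2}, \qquad \frac{\lambda(\infty)}{2} = \frac{n-1}{4}, $$
so for $f \in L^\infty(\mathbb{R}^n)$ Theorem 1.1 requires only $\lambda > (n-1)/4$, half of the index $(n-1)/2$ required for norm convergence.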
Our result also relies on a maximal estimate, which is a typical device in the study of almost everywhere convergence. In order to prove Theorem 1.1 we consider the corresponding maximal operator $S_*^\lambda(H)$ and prove a weighted bound for it, from which we deduce almost everywhere convergence of $S_R^\lambda(H) f$. Once we have Theorem 1.2, it is easy to prove Theorem 1.1. Indeed, via a standard approximation argument (see, for example, [36] and [44, Theorem 2]) Theorem 1.2 establishes a.e. convergence of $S_R^\lambda(H) f$ for all $f \in L^2(\mathbb{R}^n, (1+|x|)^{-\alpha})$ provided that $\lambda > \max\{(\alpha-1)/4, 0\}$. Now, for given $p \ge 2$ and $\lambda > \lambda(p)/2$ we choose an $\alpha$ such that $\alpha > n(1 - 2/p)$ and $\lambda > \max\{(\alpha-1)/4, 0\}$. Our choice of $\alpha$ ensures that $f \in L^2(\mathbb{R}^n, (1+|x|)^{-\alpha})$ if $f \in L^p$, as follows by Hölder's inequality. Therefore, this yields a.e. convergence of $S_R^\lambda(H) f$. The use of weighted $L^2$ estimates in the study of pointwise convergence for the Bochner-Riesz means goes back to Carbery et al. [8]. It turned out that the same strategy is also efficient for similar problems in different settings: for example, for Bochner-Riesz means at the critical index $\lambda(p)$ for $p > 2n/(n-1)$ (see [1,33]), and for Bochner-Riesz means associated with the sub-Laplacian on the Heisenberg group, see [21,26].
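The Hölder step can be spelled out as follows (a routine verification for $p > 2$; the case $p = 2$ is trivial):
$$ \int_{\mathbb{R}^n} |f(x)|^2 (1+|x|)^{-\alpha}\, dx \le \|f\|_p^2 \Big( \int_{\mathbb{R}^n} (1+|x|)^{-\frac{\alpha p}{p-2}}\, dx \Big)^{\frac{p-2}{p}}, $$
and the last integral is finite precisely when $\alpha p/(p-2) > n$, that is, when $\alpha > n(1 - 2/p)$.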
Square function estimate on weighted $L^2$-spaces. The proof of the sufficiency part of Theorem 1.2 relies on a weighted $L^2$-estimate for the square function $S^\delta$, which is defined in terms of spectral multipliers of $H$; here, for any bounded function $M$, we write $M(H) = \sum_{k \ge 0} M(2k+n) P_k$. The following is our main estimate, on which our results are based. Proposition 1.3. Let $0 < \delta \le 1/2$, $0 < \epsilon \le 1/2$, and let $0 \le \alpha < n$. Then there exists a constant $C > 0$, independent of $\delta$ and $f$, such that the weighted bound (1.11) holds with the constant $A_{\alpha,n}^\epsilon(\delta)$ defined in (1.12). A similar weighted $L^2$ estimate with the homogeneous weight $|x|^{-\alpha}$ was obtained in [8] for the square function associated with the Laplacian $\Delta$. Though we make use of the weighted $L^2$ estimate as in [8], there are notable differences which are due to special properties of the Hermite operator, and they eventually lead to an improvement of the summability indices. Let $\widetilde{P}_k$ be the Littlewood-Paley projection operator. Because of the scaling property of the Laplacian, the estimate for the square function applied to $\widetilde{P}_k f$ can be reduced to the equivalent estimate for $\widetilde{P}_0 f$. This tells us that the contributions from different dyadic frequency pieces are essentially identical. However, this is not the case for $S^\delta f$: as for the Hermite case estimate (1.11), the high and low frequency parts exhibit considerably different natures. Unlike for the classical Bochner-Riesz operator, we need to handle them separately.
Indeed, as will be seen in Section 4 below, the proof of Proposition 1.3 depends heavily on the following two facts. The first is the weighted estimate (1.13) for powers of $(1+H)$, which holds for all $\alpha \ge 0$ and $f \in L^2(\mathbb{R}^n)$ (see Proposition 2.1 below). Clearly, this cannot be true if $H$ is replaced by the Laplacian. It should be noted that the estimate (1.13) is more efficient when we deal with the low frequency part of the function. The second is a type of trace lemma (Lemma 3.1) for the Hermite operator: the estimate (1.14) for the projections $P_k$, valid for every $k \in \mathbb{N}$ and all $\alpha > 1$. In our proof of (1.11) the inequality (1.14) takes the place of the classical trace lemma (see (3.1)), which played an important role in establishing the weighted $L^2$ estimate of [8]. In contrast with the case of the Laplacian, where the corresponding trace inequality should take a scaling-invariant form, that is to say, the weight should be homogeneous (cf. (3.1)), we have the inhomogeneous weight $(1+|x|)^{-\alpha}$ in both of the estimates (1.13) and (1.14). As will be seen later, this is due to the fact that the spectrum of the Hermite operator $H$ is bounded away from the origin.
We prove Proposition 1.3 by making use of both of the estimates (1.13) and (1.14). The proof divides into two parts depending on the size of the frequency in the spectral decomposition (1.6). For the high frequency part ($k \gtrsim \delta^{-1}$ in (1.6)) the key tool is the estimate (1.14), which we combine with a spatial localization argument based on the finite speed of propagation of the wave operator $\cos(t\sqrt{H})$. The estimate (1.14) can be compared with the restriction-type estimate (1.15) due to Karadzhov [27]: the bound in (1.14) is much smaller than that in (1.15) when $k$ is large, so (1.14) becomes more efficient in the high frequency regime. In fact, the estimate (1.15) was used to show the sharp $L^p$-bounds on $S^\delta$ for $2n/(n-2) \le p \le \infty$, $n \ge 2$ [12, Proposition 5.6]. For the low frequency part ($k \lesssim \delta^{-1}$ in (1.6)), inspired by [12, Lemma 5.7] (see also [19]), we directly obtain the estimate using (1.13). The estimate (1.13) does not seem so efficient, since the bound gets worse as the frequency increases, but it is remarkable that this bound is good enough to yield the sharp result in Theorem 1.2 via balancing the estimates for low and high frequencies (see Remark 4.2).
The remainder of the paper is organized as follows. In Section 2 we obtain some weighted $L^2$-estimates for the operators $(1+H)^\alpha$, $\alpha \ge 0$, and the Littlewood-Paley inequality for the Hermite operator. In Section 3 we prove a trace lemma for the Hermite operator $H$ and its generalization, which play a crucial role in the proof of Theorem 1.2. The proof of Theorem 1.2 is given in Section 4 (sufficiency) and Section 5 (necessity).
$L^2$-estimates for the Hermite operator
In this section, we prove the estimate (1.13) and a Littlewood-Paley inequality for the Hermite operator in $\mathbb{R}^n$, which are to be used in the proof of Theorem 1.2 in Section 4. In what follows, $\mathcal{S}(\mathbb{R}^n)$ stands for the class of Schwartz functions in $\mathbb{R}^n$. Proof. This follows from the fact that the first eigenvalue of $H$ is greater than or equal to 1 (indeed, the smallest eigenvalue of $H$ is $n \ge 1$).
Lemma 2.3. Let H be the Hermite operator in R.
Then the stated estimate holds for all $\varphi \in \mathcal{S}(\mathbb{R})$. Proof. Since $\|H\varphi\|_2^2 = \langle (-\Delta + x^2)\varphi, (-\Delta + x^2)\varphi \rangle$, a simple calculation gives the claimed identity; this, together with the above, gives the desired bound. For the last inequality we use Lemma 2.2.
Lemma 2.4. Let $n = 1$. Then the estimates (2.1) hold for $\varphi \in \mathcal{S}(\mathbb{R})$ and $k \in \mathbb{N}$. Proof. We begin by noting that, if $k = 1$, the first estimate in (2.1) holds with $C_1 = \sqrt{3}$ by Lemma 2.3, and the second with $D_1 = 1$. We now prove (2.1) for $k \ge 2$ by induction. Assume that (2.1) holds for $k-1$ with some constants $C_{k-1}$ and $D_{k-1}$. A computation gives (2.2).
By (2.2), Lemma 2.3, and the induction assumption, we obtain the first estimate; on the other hand, an analogous computation gives the second, so we readily get the estimates in (2.1). This completes the proof.
Now we are ready to prove Proposition 2.1.
Since all $H_i$ are non-negative selfadjoint operators and commute strongly (that is, their spectral resolutions commute), the operators $\prod_{i=1}^n H_i^{\ell_i}$ are non-negative and selfadjoint for all $\ell_i \in \mathbb{Z}_+$. Hence the corresponding bound holds for all $k \in \mathbb{N}$. Combining this with the above inequality proves the estimate (1.13) for all $\alpha \in \mathbb{N}$. Now, by virtue of the Löwner-Heinz inequality (see, e.g., [15, Section I.5]) we can extend this estimate to all $\alpha \in [0, \infty)$. This completes the proof of Proposition 2.1.
We now recall a few standard results from the theory of spectral multipliers of non-negative selfadjoint operators (see, for example, [18,19]). The important fact is that the Feynman-Kac formula implies the Gaussian upper bound on the semigroup kernels $p_t(x,y)$ associated with $e^{-tH}$:
$$ 0 \le p_t(x,y) \le (4\pi t)^{-n/2} \exp\Big( -\frac{|x-y|^2}{4t} \Big) \tag{2.3} $$
for all $t > 0$ and $x, y \in \mathbb{R}^n$.
This can be proved by following a standard argument; see, for example, [35, Chapter IV]. We include a brief proof for the convenience of the reader.
A trace lemma for the Hermite operator
In the work of Carbery, Rubio de Francia and Vega [8], the main tool was the trace lemma, which states that a function in the Sobolev space $\dot{W}^{\alpha,2}(\mathbb{R}^n)$ can be restricted to $S^{n-1}$ as an $L^2$ function on $S^{n-1}$.
An alternative formulation is the weighted inequality (3.1), valid for all $0 < \epsilon < 1/2$, which in turn, by taking the Fourier transform and applying Plancherel's theorem, is equivalent to an estimate of the same form on the frequency side. In the following we establish the estimate (1.14), which is the counterpart of (3.1) in the setting of the Hermite operator. Proof. The proof of (1.14) is inspired by the result of Bongioanni-Rogers [4, Theorem 3.3]. To show (1.14), it is sufficient to prove the localized estimate (3.2) for every $M \ge 1$. Indeed, the estimate (1.14) then follows by decomposing $\mathbb{R}^n$ into dyadic shells and applying the condition $\alpha > 1$ and (3.2) to each of them.
Let us prove (3.2). For every $f \in \mathcal{S}(\mathbb{R}^n)$, we may write its Hermite expansion as in (1.5). Considering this spectral decomposition, we write $f = \sum_{i=1}^n f_i$, where $\mu = (\mu_1, \cdots, \mu_n)$, the functions $f_1, \dots, f_n$ are orthogonal, and $\mu_i \ge |\mu|/n$ whenever $\langle f_i, \Phi_\mu \rangle \neq 0$ (see, for example, [4]). Recalling that the Hermite functions $\Phi_\mu$ are eigenfunctions of the Hermite operator $H$, it is clear that the corresponding bound holds for each $i = 1, \dots, n$. By symmetry we only need to show (3.5) for $i = 1$. For this purpose we do not need the particular structure of $f_1$, so we set $g := f_1$ for simpler notation.
Thus, by Fubini's theorem, the desired bound reduces, for $M > 0$, to an estimate for $g$, and to complete the proof we may assume $\mu_1 > M^2$, the remaining case being immediate. By the properties of the Hermite functions (see [45]), we then get the desired estimate, which completes the proof of Lemma 3.1.
Our next aim is to find a suitable trace lemma in our setting of the Hermite operator $H$ on $\mathbb{R}^n$. For any function $F$ with support in $[0,1]$ and $2 \le q < \infty$, we define the norm $\|F\|_{N^2,q}$ as in [13,17,19]; for $q = \infty$, we put $\|F\|_{N^2,\infty} = \|F\|_\infty$. Then we have the following result, which is a consequence of Lemma 3.1.
By orthogonality the spectral pieces decouple, and thus the estimate (3.8) follows from (3.7).
Thus $T$ can be extended continuously to a bilinear mapping between the corresponding interpolation spaces, with a bound involving some constant $C_\varepsilon > 0$ independent of $f$ and $F$.
Proof. Fixing $N \in \mathbb{N}$, we consider a set $\mathcal{A}$ of functions defined on $\mathbb{R}$ with supports contained in $[N/4, N]$. We then define a normalized counting measure $\nu$ on $\mathbb{R}$ by prescribing its value on any Borel set $Q$, and define an $L^q$ norm on $\mathcal{A}$ accordingly. Hence $\|G\|_{L^q(d\nu)} \sim \delta_N \|G\|_{N^2,q}$, and the space $\mathcal{A}$ equipped with this norm becomes a Banach space, denoted $\mathcal{A}_q$. It also follows (see, for example, [3]) that these spaces interpolate. We consider the bilinear mapping $T$ given above. From Lemma 3.2 and duality, for any $\varepsilon > 0$ we have the corresponding bounds. It then follows that $\delta_N \|F\|_{N^2,q} = \delta_N \|G\|_{N^2,q}$ and $|F(x)| \le |G(x)|$ for $x \in \mathbb{R}$. By duality we get the desired estimate, which completes the proof of Lemma 3.4.
To complete the proof of the sufficiency part of Theorem 1.2, it remains to prove Proposition 1.3.
4.2. Weighted inequality for the square function. In this subsection we establish Proposition 1.3. For this purpose we decompose $S^\delta$ into low and high frequency parts $S^l_\delta$ and $S^h_\delta$. Since the first eigenvalue of the Hermite operator is greater than or equal to 1, in order to prove Proposition 1.3 it is sufficient to show the following.
Lemma 4.1. Let $A^\epsilon_{\alpha,n}(\delta)$ be given by (1.12). Then, for all $0 < \delta \le 1/2$ and $0 < \epsilon \le 1/2$, the estimates (4.4) and (4.5) hold. Both proofs rely heavily on the generalized trace lemmata, Lemma 3.2 and Lemma 3.4, though there are distinct differences between them. For (4.4) we additionally use the estimate (1.13), which is efficient for the low frequency part. For the estimate (4.5) we use a spatial localization argument based on the finite speed of propagation of the Hermite wave operator $\cos(t\sqrt{H})$; a similar strategy has been used for related problems, see for example [12]. In this regard, our proof of the estimate (4.5) is similar to that in [8]. In the high frequency regime the localization strategy becomes more advantageous, since the associated kernels enjoy tighter localization; this allows us to handle the weight $(1+|x|)^{-\alpha}$ more easily. The choice of $\delta^{-1/2}$ in the definitions of $S^l_\delta$, $S^h_\delta$ is made by optimizing the estimates resulting from the two different approaches; see Remark 4.2.
We observe that for $t \in I_i$ the spectral supports are contained in the stated range; substituting this into (4.7), we obtain (4.10). Now we claim that (4.11) holds for $1 \le t \le \delta^{-1/2}$. Before proving the claim, we show that it concludes the proof of the estimate (4.4). Combining (4.11) with (4.10), and since the length of the interval $I_i$ is comparable to $2^{k-1}\delta$, integrating in $t$ and using the disjointness of the spectral supports, we get the desired bound. This, combined with Proposition 2.1, yields the estimate (4.4).
4.4. Proof of (4.5): high frequency part. We now make use of the finite speed of propagation of the wave operator $\cos(t\sqrt{H})$. From (2.3), it is known (see, for example, [16]) that the kernel of the operator $\cos(t\sqrt{H})$ is supported in the set $\{(x,y) : |x-y| \le t\}$.
For any even function whose Fourier transform is supported in $[-r, r]$, the Fourier inversion formula expresses the corresponding operator in terms of $\cos(t\sqrt{H})$, and thus from the above we have (4.15): the associated kernel is supported in $\{(x,y) : |x-y| \le r\}$. This will be used in the sequel. Recalling that $\phi_\delta(s) = \phi(\delta^{-1}(1-s^2))$, for $j \ge j_0$ we set the decomposition $\phi_{\delta,j}$; by a routine computation it can be verified that (4.17) holds for any $N$ and all $j \ge j_0$ (see also [14, page 18]). By the Fourier inversion formula and the finite speed of propagation property (4.15), we obtain the kernel localization (4.19). Now from (4.6) it follows that, for $k \ge 1 - \log_2\sqrt{\delta}$ and $j \ge j_0$, we may define $E_{k,j}$ as above. Using the inequality (4.20), (4.18), and Minkowski's inequality, we obtain the corresponding bound. In order to exploit the localization property (4.19) of the kernel, we decompose $\mathbb{R}^n$ into disjoint cubes of side length $2^{j-k+2}$: for given $k \in \mathbb{Z}$, $j \ge j_0$, and $m = (m_1, \cdots, m_n) \in \mathbb{Z}^n$, let $Q_m$ denote the disjoint dyadic cubes centered at $2^{j-k+2}m$ with side length $2^{j-k+2}$, so that $\mathbb{R}^n = \cup_{m \in \mathbb{Z}^n} Q_m$, and for each $m$ define the enlarged cube $\widetilde{Q}_m$ accordingly. To exploit the orthogonality generated by the disjointness of the spectral supports, we further decompose $\phi_{\delta,j}$, which is not compactly supported. We choose an even function $\theta \in C_c^\infty(-4,4)$ such that $\theta(s) = 1$ for $s \in (-2,2)$, and set $\psi_{\ell,\delta}$ for all $\ell \ge 1$ such that $1 = \sum_{\ell=0}^\infty \psi_{\ell,\delta}(s)$ and $\phi_{\delta,j}(s) = \sum_{\ell=0}^\infty \psi_{\ell,\delta}\phi_{\delta,j}(s)$ for all $s > 0$. Substituting this into (4.22), and recalling (4.8) and (4.9), we observe that for every $t \in I_i$ we can have $\psi_{\ell,\delta}(s/t)\,\eta_{i'}(s) \neq 0$ only when $i - 2^{\ell+6} \le i' \le i + 2^{\ell+6}$. Hence, by the Cauchy-Schwarz inequality combined with (4.24), we get (4.25). To continue, we distinguish two cases: $j > k$ and $j \le k$. In the latter case the associated cubes have side length $\le 4$, so the weight $(1+|x|)^{\alpha}$ behaves like a constant on each cube $\widetilde{Q}_m$, and the desired estimate is easier to obtain. The first case is more involved, and we need to distinguish several subcases which are dealt with separately.
4.4.1. Case $j > k$. From the inequality (4.25) we obtain the decomposition into three terms, $I_1(j,k) + I_2(j,k) + I_3(j,k)$ (4.26). We first consider the estimate for $I_1(j,k)$, which is the major one; the estimates for $I_2(j,k)$ and $I_3(j,k)$ are obtained similarly but are easier. In fact, for $I_3(j,k)$ the weight $(1+|x|)^{-\alpha}$ behaves as if it were a constant, and the bound on $I_2(j,k)$ is much smaller because of the rapid decay of the associated multipliers.
Estimate of the term $I_1(j,k)$. We claim that, for any $N > 0$, the bound (4.27) holds, where $A^\epsilon_{\alpha,n}(\delta)$ is defined in (1.12).
Estimate of the term $I_2(j,k)$. As is clear from the decomposition of $E_{k,j}$, the term $I_2(j,k)$ is a tail part, and we can obtain an estimate stronger than what we need: we show (4.36) for any $N > 0$. Indeed, from the definition of $\psi_{\ell,\delta}$ we have the pointwise bound, and after putting this into (4.28) we sum over $m \in M_0$. As before we may use (4.33), since $j > k$. Since $j_0 = [-\log_2 \delta] - 1$ and $k \ge [-\frac{1}{2}\log_2\delta]$, summing over $\ell$ we obtain (4.36).
Estimate of the term $I_3(j,k)$. We now prove the estimate (4.37). We begin with the observation that the stated support property holds provided that $m \notin M_0$; thanks to this observation the estimates for $E^{\ell}_{k,j,m,i'}(t)$ are much simpler. By (4.38) and since $\|(\psi_{\ell,\delta}\phi_{\delta,j})(t^{-1}\sqrt{H})\|_{2\to 2} \le \|\psi_{\ell,\delta}\phi_{\delta,j}\|_\infty$, the claim follows from (4.35) for $\tau \gg 1$. By examining the proofs above, one can obtain bounds on $S_\tau$ and $\widetilde{S}_\tau$ in the space $L^2(\mathbb{R}^n, (1+|x|)^{-\alpha})$; in fact, it is not difficult to see that the corresponding bounds hold for $n \ge 2$ and $\alpha > 1$. Optimization between these two estimates gives the choice $\tau = \delta^{-1/2}$.
Proof of Theorem 1.2: Necessity
In this section we discuss the necessary condition for the boundedness of the Bochner-Riesz means on $L^2(\mathbb{R}^n, (1+|x|)^{-\alpha})$ and show that Theorem 1.2 is sharp up to the endpoint.
This clearly implies the necessity part of Theorem 1.2. The proof of Theorem 5.1 is based on the following lemma on weighted estimates of the normalized Hermite functions.
Proof. We begin by showing that there exists a constant $C > 0$, independent of $N$, such that (5.6) holds for any large $N$; this is equivalent to (5.5), as is easily seen by a change of variables. In order to prove (5.6), we make the change of variables $y = 2\theta - \sin\theta$; the condition on $t$ then transforms accordingly. So, (5.6) follows if we show that there exists a constant $C > 0$, independent of $N$, such that the resulting bound holds, but this is clear from an elementary computation.
Numerical Simulation of Storm Surges and Waves Caused by Typhoon Jebi in Osaka Bay with Consideration of Sudden Change of Wind Field
Typhoon Jebi (T1821) was a high-speed typhoon with a moving speed exceeding 60 km/h at landfall, and with a further acceleration of the moving speed just before 14:00 on September 4, 2018, formed an extremely strong wind field, causing a storm surge and high wave disasters along the northern coast of Osaka Bay. This study conducted a numerical simulation of storm surges and waves using the wave-surge combined numerical model taking the effects of moving speed acceleration into consideration by enhancing the JMA GPV wind data. The computation successfully reproduced the disaster external forces in Kobe and Osaka Ports, Yodo River mouth, and Kansai International Airport. The major findings of the study are summarized as follows: (1) Numerical analysis considering the rapid increase of surface wind due to acceleration of the typhoon moving speed reproduced the storm surge anomaly exceeding 2.77 m observed at the northern end of Osaka Bay. (2) The storm surge water level in Yodo River exceeding O.P.+5.2 m was also reproduced. (3) The computed total water level was CDL+4.16 m (storm surge + wave run-up height + margin height) on the southeast side of Kansai International Airport Island that exceeded the current crown height of CDL+3.9 m.
INTRODUCTION
In 2018 Typhoon Jebi (T1821) passed over the west side of the Kii Channel and Osaka Bay at a high speed of 60 to 65 km/h while maintaining a strong central pressure of 950 hPa, and after further acceleration from the northern end of Osaka Bay it entered the Sea of Japan at an average moving speed of more than 93 km/h. The wind speed increased rapidly due to the acceleration of the typhoon's movement, and strong winds occurred when the wind direction changed from east to south between 13:00 and 14:00 on September 4, 2018. On Kansai Airport Island a ferocious wind was recorded: the maximum instantaneous wind speed was 58.1 m/s at 13:38 and the 10-minute average wind speed was 46.5 m/s at 13:47 (Japan Meteorological Agency: JMA (2018)). Since the central pressure of the typhoon at landfall was as low as 950 hPa, in addition to a suction effect of more than 0.5 m, the rapid increase in moving speed just before 14:00 formed an extremely strong wind field that drove the storm surge and high waves. To prevent coastal disasters, it is necessary to clarify the relationship between the design crown height of coastal embankments and the maximum significant wave height and storm surge anomaly caused by the typhoon. The design crown height is defined as the design storm surge height + wave run-up height + margin height. The design storm surge height is defined as the average syzygy high tide level (T.P.+0.9 m in Osaka Bay) + the estimated storm surge anomaly. Assuming that the wave run-up height is half of the wave height, the required height (design storm surge height plus run-up) in Osaka Bay is estimated as T.P.+4.9 m in a region where the wave height is 2.0 m and the estimated storm surge anomaly is 3.0 m. Storm surge and high wave disasters due to overtopping and overflow are therefore expected in the many reclamation areas where the design crown height was set at less than T.P.+4.9 m. In reality, immense damage occurred mainly in the reclamation areas along the coast of Osaka Bay, such as the inundation of runways at Kansai International Airport (KIX) and inundation and container outflows on many artificial islands.
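A quick check of the quoted figure using only the numbers given above (the margin height would be added on top):
$$ \underbrace{0.9}_{\text{syzygy high tide}} + \underbrace{3.0}_{\text{surge anomaly}} + \underbrace{2.0/2}_{\text{run-up} \,\approx\, H/2} = \mathrm{T.P.}+4.9~\mathrm{m}. $$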
After the typhoon passed, a numerical simulation of storm surges and high waves along the coast of Osaka Bay was conducted using the meteorological reanalysis data (surface winds and atmospheric pressure) of the Japan Meteorological Agency (JMA) Grid Point Value (GPV) hourly dataset, and it was revealed that the computational results underestimated the observations.
To account for the rapid increase in wind speed associated with the acceleration of the typhoon, we modified the wind speed field between 13:00 and 14:00 on September 4. We then reproduced the storm surges and waves in Osaka Bay using the modified wind field. Using the simulation output, we discuss the temporal and spatial distribution of the external forces of the storm surge disasters that occurred in Hanshin Port, the Yodo River channel, and Kansai International Airport Island.
Numerical model framework
Storm surges occur through the formation of static sea level gradients due to atmospheric pressure gradients, and through the water level rise caused by the dynamic flow driven by wind and waves being stopped at the shore (drifting effect). The former is called the suction effect; the latter flow formation mechanism is classified into offshore wind-driven flow and nearshore current with wave setup in the coastal zone due to surf zone wave breaking. As for the suction effect, a water level rise of about 1 cm occurs with a pressure drop of 1 hPa, regardless of the water depth. The drifting effect produces only flow in the deep sea, but in shallow water the increased flow velocity is blocked by the shore and a sudden rise in water level occurs. Furthermore, a mean sea level rise (wave setup) is also generated by breaking waves in the surf zone, where the energy carried by the waves is transferred to nearshore currents and setup. Most atmospheric energy contributes to the generation and development of wind waves through the wind shear stress acting on the ocean surface. The offshore whitecap breaking phenomenon (whitecapping) generates mean ocean currents and turbulence, and the energy propagated as waves generates large-scale turbulence and nearshore currents with wave setup in the surf zone.
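The 1 cm per 1 hPa rule is the inverse-barometer response; a quick check with a seawater density of about 1025 kg/m³:
$$ \Delta\eta = \frac{\Delta p}{\rho g} = \frac{100~\mathrm{Pa}}{1025~\mathrm{kg/m^3} \times 9.81~\mathrm{m/s^2}} \approx 0.99~\mathrm{cm\ per\ hPa}. $$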
Based on this understanding, we established a numerical model combining the third-generation shallow water wave model SWAN Cycle III Version 41.01 (2014) with the Princeton Ocean Model (POM) of ECOMSED (version 1.3), the three-dimensional hydrodynamic module ECOM developed by Mellor (1980, 1987). For the nesting computation, we adopted the method of communicating the surface water level from the mother domain to the nested domain. In this calculation, we used the GPV_MSM_S data (surface pressure and wind speed) of the JMA as the meteorological field. As the driving force of the flow, we adopted the concept of breaker stress and defined a wave breaking stress in which the wave energy lost by whitecap breaking and surf zone breaking acts as a surface shear stress in the direction of energy propagation at the group velocity.
Wave energy was transferred to both the mean current and turbulence. For the energy transfer ratio between mean current and turbulence, that is, the mean-current generation efficiency, we assumed that whitecap breaking has an efficiency of 0.9 and surf zone breaking 0.66. With the same analysis method it is also possible to estimate the driving force via the radiation stress gradient, but when a coarse grid system is used for the wave simulation, the spatial resolution of the gradient is limited and this approach is not efficient. The model structure of the wind-wave-current relation and the surface stresses used are shown in Figure 2.
Driving forces of storm surge generation
When the coastal ocean current computation is combined with the wave analysis, the surface stress driving the ocean current can be estimated either from the breaking wave stress or from the radiation stress gradient. The driving forces that generate the storm surge (the external forces of ocean current generation) are outlined below.
Wave induced surface stress
Wave-generated energy shifts to currents (mean flow and turbulence) through wave breaking. In order to describe this process, we propose the wave breaking stress defined by Eq. (1), in which the driving force of ocean current generation is described via the transfer of the wave energy dissipation rates to the ocean currents through both whitecap breaking and surf zone breaking:

  τ_wb = (ε_w S_wcap + ε_s S_surf) / c_g,   (1)

acting in the direction of wave energy transport.
where S_surf is the energy dissipation rate due to surf zone breaking (in W/m²), S_wcap is the energy dissipation rate due to whitecapping (in W/m²), and c_g is the group velocity. The direction of energy transport, TDIR in the SWAN output, is used to resolve the stress into x (east positive) and y (north positive) components. For the formulation of surf zone breaking (the energy dissipation in random waves due to depth-induced breaking), the bore-based model of Battjes and Janssen (1978) is used in the SWAN wave model. In this computation, the third-generation physics GEN3 of Westhuysen was employed, so the formulation of whitecap dissipation is the nonlinear saturation-based whitecapping combined with wind input (Yan, 1987).
The efficiencies ε_w and ε_s in Eq. (1) can be considered as the ratios of the energy transferred to the mean current and to turbulence through whitecapping and surf zone breaking, respectively. In this computation we set ε_w = 0.9 and ε_s = 0.66.
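As a minimal sketch of how Eq. (1) would be evaluated on gridded SWAN output (function and variable names are ours, not a SWAN or POM API; the directional split assumes TDIR in the Cartesian convention, 0° = east, counterclockwise):

```python
import numpy as np

def wave_breaking_stress(s_surf, s_wcap, c_g, tdir_deg,
                         eps_s=0.66, eps_w=0.9):
    """Breaking-wave surface stress (N/m^2), Eq. (1).

    s_surf, s_wcap : dissipation rates from SWAN (W/m^2)
    c_g            : group velocity (m/s)
    tdir_deg       : direction of energy transport (deg, Cartesian
                     convention assumed: 0 = east, counterclockwise)
    """
    mag = eps_w * np.asarray(s_wcap) + eps_s * np.asarray(s_surf)
    mag = mag / np.maximum(np.asarray(c_g), 1e-6)  # W/m^2 over m/s -> N/m^2
    theta = np.deg2rad(tdir_deg)
    return mag * np.cos(theta), mag * np.sin(theta)  # (tau_x, tau_y)
```

The magnitude has the right units by construction: a dissipation rate in W/m² divided by the group velocity in m/s yields N/m², the momentum flux handed to the mean current.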
(2) Radiation stress gradient: The wave-induced forces per unit surface area, F_x and F_y (N/m²), are formulated as the gradient of the radiation stress tensor:

  F_x = -(∂S_xx/∂x + ∂S_xy/∂y),   (2)
  F_y = -(∂S_xy/∂x + ∂S_yy/∂y),   (3)

where, using the energy density E, the water density ρ, the gravitational acceleration g, the wave propagation angle θ, and the angular frequency σ, and writing n = c_g/c for the ratio of group to phase velocity, the radiation stress tensors are given as:

  S_xx = ρg ∬ (n cos²θ + n - 1/2) E(σ,θ) dσ dθ,   (4)
  S_xy = ρg ∬ n sinθ cosθ E(σ,θ) dσ dθ,  S_yy = ρg ∬ (n sin²θ + n - 1/2) E(σ,θ) dσ dθ.   (5)

(3) Wind stress (quadratic law of wind speed): When estimating the sea surface shear stress from wind speed information only, the quadratic law is frequently employed. Equation (6) is the wind stress formulation used in the POM of ECOMSED:
  τ_s = ρ_a C_D |W| W,   (6)

where ρ_a is the air density, C_D the drag coefficient, and W the surface wind vector. Figure 3 shows a snapshot of the spatial distribution of (a) the wave breaking stress, (b) the wind stress of 1.83 times the quadratic law, and (c) the radiation stress gradient. Strong stresses appear in the coastal area of the Kii Channel for the wave breaking stress and the radiation stress gradient; this is due to wave breaking in the surf zone. The offshore radiation stress gradient is clearly smaller than in the cases of the wind stress and the wave breaking stress, which indicates that the radiation stress gradient cannot be used as the driving force for the ocean current generated in the offshore region. From our experience of past storm surge analyses, the wind stress of Eq. (6) must be enhanced by a factor of 1.83 to reproduce the observed surge heights. The resulting plane distribution corresponds almost exactly to that of the wave breaking stress evaluated by Eq. (1), which indicates that the sea surface stress obtained by Eq. (6) alone is underestimated. Indeed, the storm surge computed with the surface stress of Eq. (6) alone is quite small, while the enhanced wind stress can reproduce the observed storm surge height.
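For comparison, a sketch of the enhanced quadratic law of Eq. (6); the constant drag coefficient is a typical placeholder value, not ECOMSED's own Cd parameterization:

```python
import numpy as np

RHO_AIR = 1.225  # air density (kg/m^3)

def wind_stress(u10, v10, enhancement=1.83, c_d=1.3e-3):
    """Quadratic-law wind stress, Eq. (6), scaled by the empirical
    enhancement factor 1.83 found necessary in past surge analyses.
    u10, v10: 10-m wind components (m/s); c_d: assumed drag coefficient."""
    speed = np.hypot(u10, v10)
    tau_x = enhancement * RHO_AIR * c_d * speed * np.asarray(u10)
    tau_y = enhancement * RHO_AIR * c_d * speed * np.asarray(v10)
    return tau_x, tau_y
```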
Computational domain and procedure
Since the three areas of Osaka Bay, Harimanada Sea, and Kii Channel are connected by straits, the resonance characteristics of ocean oscillation in each area need to be analyzed simultaneously as a cooperative system. Since the Kii Strait is connected to the Pacific Ocean, the boundary conditions of ocean waves and tides are given at the southern end of this area. In this analysis, the computational domain was a combination of three sea areas as shown in Figure 4. The larger computational domain is Osaka Bay Domain, which is an orthogonal lattice network with a spatial resolution of 0.002778 degree (hereinafter called "250 m mesh"). Two nesting domains were set for the area of Hanshin Port, Yodo River estuary and the Kansai International Airport Island. The former is a nesting domain of a 0.000556 degree (hereinafter called "50 m mesh") mesh for the Port of Kobe, Port of Osaka, and Yodo River estuary and lower channel. The latter is a nesting domain with a 50 m mesh area surrounding the sea of Kansai International Airport and a "10 m mesh" (0.000111 degree) around the airport island to reproduce storm surges in the river channel and possible storm surge occurrence around the airport island.
Ocean and coastal topography was reproduced from the datasets M7014 and M7010 of the M7000 series nautical charts provided by the Japan Hydrographic Association. The coastlines were set using the 10 m mesh DEM of the national land base map. The reference plane of the nautical chart is the chart datum level (Chart D.L.), which coincides with the reference plane of Osaka Peil (O.P.). The detailed topography in the Hanshin Port area was not taken into account because of a lack of data in the M7000 series nautical chart; a uniform water depth of 8.85 m below MSL was assumed there. In the lower part of the Yodo River, the average depth of O.P.+5.0 m from the standard river profile data was used, where O.P. is 1.3 m lower than Tokyo Peil (T.P.), i.e., an elevation of O.P.+1.3 m corresponds to T.P.+0 m. Figure 5 shows the position of Typhoon Jebi, the isobars near the center, the moving speed, and the central pressure every hour. The blue lines in the figure indicate the domains for the storm surge and wave computations. The figure shows that Typhoon Jebi made landfall in Tokushima Prefecture at 11:00, then moved north at a speed of 60-64 km/h, and made landfall again in Hyogo Prefecture around 14:00. After arriving in Hyogo Prefecture, the moving speed increased rapidly, and the typhoon moved out into the Sea of Japan at an average speed of 93 km/h. The moving speed of the typhoon thus increased sharply by a factor of about 1.5, and the moving-speed component of the wind is thought to have increased the wind speed on the right side of the typhoon path while reducing it on the left side.
Modification of wind field
This rapid acceleration of the typhoon's moving speed around 13:40 is thought to have been caused by the upper-air westerly jet, but the phenomenon is not reflected in the JMA GPV surface wind, which is updated only every hour. Figure 6 shows the 10-minute average wind speed observed at KIX (red line) and the 20-minute linear interpolations (blue line) of the hourly JMA GPV wind speed (orange circles). The black line in the figure shows the time series of wind speed at the KIX observation point in the wind field modified by the method described below. In order to introduce this short-term meteorological change into the storm surge and wave computations, the GPV wind field must be modified; we considered two simple methods. One is to amplify the wind speed field by a uniform factor so as to match the observed value at the KIX observation point; the other is to use a simple geostrophic model such as the Myers model to estimate the increased typhoon moving speed and add it to the GPV data. The former correction was performed by multiplying the GPV surface wind field at 14:00 uniformly by 1.35 (the orange circle at 14:00). The latter assumes that the typhoon travel speed is 115 km/h at 14:00 (65 km/h at 13:00); the increased moving wind speed field is computed with the geostrophic wind model and added to the GPV data. In the figure, small black circles indicate wind speeds obtained by 20-minute interpolation of the hourly GPV data (orange circles). In the computation, SWAN updated the wave field every 20 minutes, and the surface shear stress field was calculated and passed to POM for the storm surge computation. In the subsequent storm surge and wave computations, the results with the modified GPV wind data and with the original JMA GPV wind data are compared. Figure 7 shows that the strong wind region at the northern end of Osaka Bay is reproduced by the modification of the GPV wind. In the following computations, the uniform enhancement of the GPV surface wind speed at 14:00 was used, because there was no significant difference between methods (a) and (c).
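A sketch of the uniform-enhancement option (array handling and function names are ours; consecutive hourly GPV snapshots are assumed): the 14:00 field is scaled by 1.35 and the hourly fields are interpolated to the 20-minute coupling step used by SWAN.

```python
import numpy as np

def enhance_and_interpolate(fields, scale_at=14.0, factor=1.35,
                            step_min=20):
    """fields: dict mapping hour (float) -> 2-D array of GPV surface
    wind (one component). Assumes hourly snapshots; returns a dict of
    fields at step_min intervals."""
    fields = {h: np.asarray(a, dtype=float) for h, a in fields.items()}
    fields[scale_at] = fields[scale_at] * factor   # uniform 1.35 boost
    hours = sorted(fields)
    out = {}
    for h0, h1 in zip(hours[:-1], hours[1:]):
        for k in range(int(round(60 / step_min))):
            w = k * step_min / 60.0                # linear weight in the hour
            out[h0 + w] = (1.0 - w) * fields[h0] + w * fields[h1]
    out[hours[-1]] = fields[hours[-1]]
    return out
```

Scaling only the 14:00 snapshot follows the paper's first correction method; the alternative, adding a moving-speed wind field from a geostrophic model, is not sketched here.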
Significant wave height
Wave analysis was conducted with SWAN using the third-generation model physics GEN3, Westhuysen, which uses nonlinear saturation-based whitecapping combined with wind input. On the southern boundary of the Osaka Bay Domain, the offshore wave boundary condition (wave height, period, and direction) was imposed using the observed values, at 20-minute intervals, of the NOWPHAS GPS wave gauge off Tokushima Kaiyo (the wave information network in Japan, 2019). For the wave direction and period, some corrections were made with reference to the 3-hour interval reanalysis dataset of NOAA WW3 (Wave Watch III data access site of the National Oceanic and Atmospheric Administration, USA; ftp://polar.ncep.noaa.gov/pub/history/waves). Figure 8 shows the distribution of the maximum significant wave height computed by SWAN. Figure 12 shows the storm surge results obtained with the original JMA GPV wind. The local high anomalies along the coast of the Kii Channel are the wave setup inside the surf zone; this is called the "wave setup storm surge" (Washida et al., 2019) and cannot be reproduced without taking into account the surf zone dynamics analyzed via the radiation stress gradient or the breaking wave stress. The vertical axis of the bird's-eye view is standardized at a height of 3.0 m. The difference between the maximum storm surge anomalies calculated with the two wind speed fields (original and modified) is particularly remarkable at the northern end of Osaka Bay: the maximum storm surge anomaly calculated with the modified wind field is more than 2.8 m, while that with the original JMA GPV wind field is about 2.4 m at the head of the bay.
Fig. 11. The spatial distribution of the maximum computed storm surge anomalies in Osaka Bay Domain, where the modified GPV surface wind was used.
Figure 13 shows the observed (blue circles) and computed (lines) storm surge anomalies at the Port of Osaka, the Port of Kobe, and Sumoto in the Osaka Bay Domain. The maximum storm surge anomaly of 2.77 m observed at the Port of Osaka is indicated by a red circle. The maximum anomaly computed with the modified wind field (black line) is about 2.6 m. This difference may be explained by the local amplification effect of the topography, which is not resolved by the coarse horizontal resolution of the 250 m mesh in the Osaka Bay Domain. Comparing the observed and computed time series of storm surge anomalies at the three stations, it is clear that the computed result tends to rise more quickly. This is the same tendency as in the significant wave height simulation, in which the computed wave height rises earlier than the observed one. The difference between the computation and the observations may depend on the boundary conditions and on differences in the wind field used in the wave analysis.
SURGE AND WAVES IN HANSHIN PORTS AND YODO RIVER MOUTH
In the northern part of Osaka Bay, inundation disasters due to overtopping and overflow occurred in many places on the artificial islands in the Port of Kobe area, and a storm surge water level exceeding O.P.+5.2 m (T.P.+3.9 m) was recorded downstream of the Yodo River. In order to elucidate the mechanism of these abnormal water levels and inundation disasters, a detailed analysis was conducted in Hanshin Port and the Yodo River mouth using a 50 m mesh resolution nested within the Osaka Bay Domain (250 m mesh). The nesting of the wave analysis was performed with the SWAN code, and the nesting of the current analysis was done by passing the water level boundary condition in the POM computation. The offshore breakwater that protects the Port of Kobe from high waves appears to have functioned, but the penetration of waves through the opening cannot be ignored in the eastern part of Rokko Island. In Osaka Port, the coastal area is protected from high waves by offshore artificial islands.
Off Rokko Island
Because high waves strike the offshore artificial islands directly, a higher design crown height is necessary in this area.
Storm surge anomaly
The spatial distribution of the maximum storm surge anomaly in the Hanshin Port area is shown in Figure 15. Since there are no data for the port area in the digital chart M7000 series, the water depth behind the offshore breakwater was assumed to be a uniform 8.85 m (below MSL). For this reason, the possibility of a slight error in the reproduction accuracy of the storm surge anomaly in the port area must be considered. Similarly, the water depth in the Yodo River was also set uniformly, at 2.88 m below MSL, so the computed storm surge anomalies there are not highly reliable.
Although not confirmed by the analysis, the storm surge control effect of the offshore breakwater at the Port of Kobe appears to be weak. Furthermore, it should be noted that the duration of the storm surge may be lengthened because longer breakwaters prevent the seawater accumulated inside them from flowing out. The maximum storm surge anomaly in the Yodo River is reproduced as M.S.L.+3.8 m (O.P.+5.2 m), which is consistent with the observations. Figure 16 shows snapshots of the current velocities and storm surge anomalies at 14:20 and 14:40, when the tide level is supposed to have reached its maximum. In this phase the water level rises but the flow velocity is weak; a strong westward flow is seen between Rokko Island and the offshore breakwater. Figure 17 shows the output points of the storm surge anomaly (upper panel) and the time series of the storm surge anomaly (lower panels); the lower left panel shows the result using the modified wind field, and the lower right the result with the original JMA GPV_MSM surface winds. The maximum anomalies above MSL at each output point are given in meters.
Storm surge in Yodo River mouth
In order to establish countermeasures against storm surges in the Yodo River estuary and to determine the design crown height of coastal revetments and river dikes, a detailed numerical analysis of estuary storm surges using accurate topographic data of the river channel is necessary. In the river estuary and channel, storm surges and tsunamis increase rapidly in height. If this increase cannot be dealt with by raising the river dikes alone, dredging of the river channel can be considered as an alternative. Since there is no detailed information on the Yodo River channel topography, the river mouth storm surge was analyzed assuming a depth of M.S.L.-2.88 m. We analyzed the effect that can be expected from a uniform 2.0 m dredging of the river channel. Figure 18 shows the difference (in cm) between the maximum storm surge anomalies before and after the river channel dredging. The figure shows that a 2.0 m dredging can suppress the rise of the water level in the river channel by up to 12 cm, while the water level increases by 14 cm in the estuary. If the dredged material can be utilized for reclamation of the offshore artificial islands, a detailed analysis of the storm surge reduction effect of dredging, including its cost performance, is worth conducting.
From a comparison of the river water surface profiles (shown in Figure 19) at the time of the maximum storm surge anomaly and 10 minutes before it, it is clear that the water level rise at the estuary is due to the relaxation of the river water surface gradient by the dredging. This is the mechanism of storm surge reduction by dredging, and the water level rise at the estuary is much smaller than the water level reduction in the river channel.
Significant wave height
During Typhoon Jebi, Kansai International Airport also suffered a flood disaster due to wave overtopping. The third-party committee established by the operating company, Kansai Airport (2019), concluded that 90% of the flooding was caused by the overtopping of high waves striking the revetment. By numerical simulation, the committee estimated the maximum wave height off the airport island to be 5.2 m at the observation station on the west side of the island, and pointed out that it was a huge wave that had never been observed before. However, the major overtopping disasters were prominent on the southeast side of the airport island, on the leeward side of the waves, where the significant wave height computed in this study was about 2.5 m. The reason why the committee did not discuss significant wave heights on the southeast side is unknown. In order to examine the mechanism of this unexpected disaster event, a SWAN wave simulation was conducted in this study to confirm the wave field around the island; the discussion in the committee was summarized by Ito et al. (2019). Figure 20 shows the maximum significant wave height distributions from the SWAN simulations conducted in this study using the modified GPV wind and the original JMA GPV wind data. The figure shows that the maximum significant wave height in the southeastern region of KIX Island is about 2.5 m even in the modified wind field results.
Storm surge anomaly around airport island
In addition to examining the possibility of wave setup and storm surges around the airport island, a storm surge analysis was conducted to confirm how much the rise in mean sea level caused by the storm surge promoted the overtopping disaster. Figure 21 shows the distribution of the maximum storm surge anomaly for the two cases using the modified wind speed field and the original JMA GPV data. The results with the modified wind speed field show that the maximum storm surge anomaly in the southeastern part of the airport island is about 1.2 m.
The design crown height of the revetment is defined by the design high tide level (storm surge anomaly + syzygy average high tide level) + half wave height + margin height.
From the results of the significant wave height and storm surge anomaly shown here, the following can be estimated. Half of the significant wave height of 2.5 m is about 1.25 m. If the sum of the maximum water level and the margin height is higher than the actual height of the revetment, overtopping and overflow disasters may occur. According to data from Kansai Airport (2019), the current height of the southeast bank of the airport island is C.D.L.+3.9 m. If the margin height is 0.54 m (assuming 20% of the estimated maximum water level of 2.67 m), the water level as an external force is C.D.L.+4.16 m (= 3.62 + 0.54), consistent with the total water level quoted in the abstract. An exact setting of the margin height and an analysis of the wave overtopping discharge are necessary to confirm the mechanism of the inundation disaster at Kansai Airport. Figure 22 shows the maximum storm surge anomaly in the second nesting KIX computational domain (10 m mesh). The wave setup storm surge around the airport island was not reproduced in the simulation, and the storm surge anomaly was high in the northwestern part of the island. Figure 23 is a snapshot of the storm surge anomaly and the spatial distribution of the mean current at the time when the maximum storm surge occurred. The flow from the southwest is prominent, and a strong current is formed between the airport island and the mainland. A tanker anchored off the southwestern part of the airport island was swept away by the strong wind and collided with the connecting bridge; the southwest current described here may have contributed to the tanker drifting from its anchoring point to the connecting bridge. Fig. 23. Snapshots of the storm surge anomaly and the ocean current (modified GPV wind).
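Putting the quoted numbers together (the astronomical tide term is inferred by subtraction, i.e., it is our reading of the figures rather than a value stated explicitly in the text):
$$ \underbrace{1.25}_{H_s/2} + \underbrace{1.2}_{\text{surge anomaly}} + \underbrace{\approx 1.17}_{\text{tide (implied)}} = \mathrm{C.D.L.}+3.62~\mathrm{m}, \qquad 3.62 + \underbrace{0.54}_{\text{margin}} = \mathrm{C.D.L.}+4.16~\mathrm{m} > \mathrm{C.D.L.}+3.9~\mathrm{m}. $$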
Wave-surge combined height distribution in the north end of Osaka Bay
Assuming that large-scale inundation due to extreme storm surges would occur mainly in the zero-meter zone along the coast of Osaka Bay, measures for protecting human lives, avoiding impacts on the central functions of cities and on social and economic activity, and enabling early recovery should be considered in advance by the related organizations. For this purpose, the Osaka Bay Storm Surge Countermeasures Council was established by the Kinki Regional Development Bureau of the Ministry of Land, Infrastructure, Transport and Tourism (MLIT). This council held a conference on "Current status and challenges for storm surge in Osaka Bay", at which Reference document 1 of the first meeting (held on July 11, 2007) was distributed. Its sources are Kobe City materials, Hyogo Prefecture materials, Osaka City materials, Osaka Prefecture materials, and the "Osaka Bay Alongshore Coastal Conservation Basic Plan" (Osaka Prefecture, Hyogo Prefecture). For document 1, refer to the URL of the Kinki Regional Development Bureau (2007), Ministry of Land, Infrastructure, Transport and Tourism (MLIT).
The design crown height along the north end of Osaka Bay shown in Figure 24(a) was cited from Reference document 1. Figure 24(b) summarizes the computed maximum water level (surge anomaly + half of the significant wave height + astronomical tide) in this study. (1) The area of the Port of Osaka is protected by offshore artificial islands. A seawall with a design crown height of T.P.+6 m or more is essential on an offshore artificial island exposed to high waves. (2) Storm surges and tsunamis are amplified in rivers. Storm surge protection at the Yodo River estuary and in the river channel between the two ports of Kobe and Osaka is weak because, as a geographical characteristic, the management system is split between different prefectures. A disaster prevention function that only strengthens the river banks is insufficient, and new prevention measures are necessary.
CONCLUSIONS
The fact that a large disaster occurred due to Typhoon Jebi, which was much weaker than the Second Muroto Typhoon, confirms that recent coastal development has not achieved the required level of disaster prevention. Typhoon Jebi was a fast-moving typhoon whose moving speed accelerated near the north end of Osaka Bay, enhancing the storm surge through a sudden increase in surface wind speed. To predict possible storm surges, not only the central pressure but also the moving speed is an important factor.
The major findings of the study are summarized as follows: (1) The wave-surge combined numerical model reproduced the phenomenon in which Typhoon Jebi caused a storm surge anomaly exceeding 2.77 m at the north end of Osaka Bay, mainly because of the sudden increase of surface winds due to the acceleration of the typhoon's moving speed.
(2) The storm surge in the Yodo River channel exceeding O.P.+5.2 m was reproduced by the model. It was revealed that storm surge protection along the Yodo River between the two ports is fragile. (3) The computed storm surge and wave height on the southeast side of Kansai International Airport Island exceeded the current crown height of C.D.L.+3.9 m, which was not adequate to prevent the inundation disaster caused by the storm surge and high waves of Typhoon Jebi.
VP-SLAM: A Monocular Real-time Visual SLAM with Points, Lines and Vanishing Points
Traditional monocular Visual Simultaneous Localization and Mapping (vSLAM) systems can be divided into three categories: those that use features, those that rely on the image itself, and hybrid models. For feature-based methods, recent research has evolved to incorporate more information from the environment using geometric primitives beyond points, such as lines and planes. This is because many man-made environments, characterized as Manhattan worlds, are dominated by such geometric primitives. Exploiting these structures enables algorithms that optimize the trajectory of a Visual SLAM system and help construct a richer map. Thus, we present a real-time monocular Visual SLAM system that incorporates real-time methods for line and VP extraction, as well as two strategies that exploit vanishing points to estimate the robot's translation and improve its rotation. In particular, we build on ORB-SLAM2, which is considered the current state-of-the-art solution in terms of both accuracy and efficiency, and extend its formulation to handle lines and VPs in two strategies: the first optimizes the rotation, and the second refines the translation given the known rotation. First, we extract VPs using a real-time method and use them in a global rotation optimization strategy. Second, we present a translation estimation method that takes advantage of the last-stage rotation optimization to model a linear system. Finally, we evaluate our system on the TUM RGB-D benchmark and demonstrate that the proposed system achieves state-of-the-art results and runs in real time, with performance remaining close to the original ORB-SLAM2 system.
I. INTRODUCTION
Visual SLAM (vSLAM) systems try to estimate a robot's location based on the multi-view geometry of the scene, combined with computer vision algorithms, while generating a 3D map of the environment. It is a critical tool for 3D reconstruction, image refinement, 3D holographic applications, visual place recognition, AR/VR, and autonomous vehicles such as micro air vehicles (MAVs). Various vSLAM approaches have been created based on different sensors, such as a single camera, stereo camera, RGB-D camera, and event camera. Feature-based approaches have traditionally attracted the most attention, since they rely on well-suited computer vision algorithms to extract features and are more resistant to changes in lighting than direct methods. However, in low-textured or man-made environments where the extracted point features are not well distributed or sufficient, incorporating other geometric primitives from multi-view geometry, such as lines, planes, or VPs (Vanishing Points), can boost the robustness of these systems [1], [2], [3], [4].
On the other hand, most applications in practice operate in particular scenarios, such as man-made environments. In these environments, the hypothesis of the Manhattan World (MW) is used to boost the performance of the vSLAM system [5]. The MW is a man-made environment with significant structural regularity, in which most of the surroundings can be described as a box world with three mutually orthogonal dominant directions. Each MW is therefore associated with a frame, denoted the Manhattan Frame (MF), which can be inferred from the VPs, i.e., the intersections of the image projections of 3D parallel lines in the MW. As a result, employing the VPs can lead to a reduction in pose drift [6], [7].
Motivated by the insights above, we present a vSLAM system that integrates simple computer vision algorithms for extracting lines and VPs to reduce the drift in the pose and optimize it. More specifically, the main contributions of this paper are summarized below:
• Real-time computer vision algorithms for extracting lines and VPs.
• An optimization strategy for the absolute rotation based on VPs.
• A simple linear system for estimating the translation.
II. RELATED WORK
Next, we briefly review related work on vSLAM systems, focusing on features and on leveraging structural regularity to improve system performance. First, ORB-SLAM2 [8] is a popular feature-based monocular vSLAM system that extends the multi-threaded and keyframe-based architecture of PTAM [9]. It uses ORB features, builds a co-visibility graph, and performs loop closing and localization tasks. To improve the robustness of point-based methods, the authors in [9] extracted lines from the environment and proposed an algorithm to integrate them into a monocular Extended Kalman Filter SLAM system (EKF-SLAM). Finally, in PL-SLAM [1], points and lines are extracted concurrently and integrated into a point-based system.
On the other hand, there are vSLAM systems based on the MW assumption, such as [6], which proposes a two-stage MF tracking method to estimate the rotation of the pose based on VPs and parallel lines, together with a refinement strategy for pose optimization. [10] proposes a mean-shift algorithm to track the rotation of the MF across scenes, while using 1-D density alignments for translation estimation. OPVO [11] improves the translation estimation by using the KLT tracker. Both methods require two planes to be visible in the frame at all times. LPVO [12] eases this requirement by incorporating lines into the system. Structural lines are aligned with the axes of the dominant MF and can be integrated into the mean-shift algorithm, improving robustness. Hence, for LPVO, only a single plane is required to be visible in the scene, given the presence of lines. Drift can still occur in translation estimation, as it relies on frame-to-frame tracking.
III. SYSTEM OVERVIEW

The proposed vSLAM system addresses three major challenges: real-time complexity, global rotation optimization utilizing orthogonality and parallelism constraints, and a simple translation refinement using the information of the global rotation. In the following paragraphs, each module of the proposed VP-SLAM system is presented briefly.
The proposed VP-SLAM system is illustrated in Fig. 1 and is based on the ORB-SLAM2 [8] architecture. The system is composed of two modules: the front-end and the back-end. The front-end is responsible for real-time VO, providing ego-motion estimation, with local optimization for the current frame and the keyframe decision, while the back-end is responsible for the map representation, local map optimization, and the strategy for inserting and culling keyframes. Similar to [1], in the front-end, points and lines are extracted in parallel from each RGB frame. Then, similar to ORB-SLAM2 [8], we use the constant velocity motion model to obtain an initial pose estimate, and then use the points and lines to optimize it. To further optimize the rotation, we present an optimization approach that incorporates the extracted VPs and the information about the parallelism of the lines. Knowing the optimized rotation, we can then refine the translation by solving a linear system, as shown in [6]. Finally, note that throughout the text, matrices and vectors are in bold capital letters, while scalars are represented as plain letters. Also, to represent the camera pose of the $i$-th frame relative to the global coordinate system, we use the notation $(R_{iw}, t_{iw})$, where $R_{iw} \in SO(3)$ is the rotation matrix and $t_{iw} \in \mathbb{R}^3$ is the translation vector.
We also use $\delta_k^i$ to represent the set of directions of the VPs in the $C_i$ frame, and $d_k$ the set of three orthogonal dominant directions in the MW. Additionally, we define $[\cdot]_\times$ as the $3 \times 3$ skew-symmetric matrix operator, and we use "$\sim$" to represent equality up to a scale.
A. Real-time Complexity
Computational complexity is one of the most important concerns in vSLAM algorithms. To preserve the real-time properties of ORB-SLAM2 [8], we carefully selected and utilized fast methods for line extraction and VP estimation.
1) Lines: To detect line segments in the image, we use the robust LSD (Line Segment Detector) [13], which runs in O(n), where n is the number of pixels in the image. To describe the line segments, we use the LBD (Line Band Descriptor) [14], which relies on the local appearance of lines and geometric constraints while preserving real-time complexity and robustness against image artifacts.
2) VPs Estimation: To extract the VPs, we adopt the method of [15], which is based on two lines, guarantees O(n) execution time, and yields optimal and orthogonal VPs in the MW. The main idea of the method is to exploit the Gaussian sphere as the parameter space of the rotation, with the principal point $(x_0, y_0)^T$ being the center of the sphere. Two parallel lines in 3D are projected onto the Gaussian sphere as two great circles that intersect at a point. The direction of this point from the origin of the sphere is considered a candidate vanishing point direction ($v_1$). To achieve real-time performance, a polar grid is created that intersects the image plane and spans the latitude and longitude of half of the Gaussian sphere, with a size of 90×360 and a precision of 1°. A pair of lines intersecting at a point in the image plane contributes a weight to the corresponding cell of the polar grid, where $\|l_0\|$ and $\|l_1\|$ denote the lengths of the two line segments in pixels, $\theta$ is the angle between them, and the score is accumulated on each polar grid cell. After generating the first candidate VP directions ($v_1$) and considering the orthogonality constraint, the second VP ($v_2$) must lie on the great circle perpendicular to the great circle of $v_1$, generating 360 evenly distributed candidate VP directions $\{v_2^i\}_{i=1}^{360}$. Given the first ($v_1$) and second ($v_2$) VP directions, the third candidate VP direction can be obtained as the cross product of each pair of $v_1$ and $\{v_2^i\}_{i=1}^{360}$. Thus, given all candidate VP directions, the best estimate of the VPs is the hypothesis set with the highest score, where the score of a hypothesis set is the sum of the scores of the three polar grid cells that belong to the three associated VP directions.
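To make the geometry concrete, the sketch below derives a candidate VP direction from two image line segments via their great-circle normals on the Gaussian sphere; the intrinsic matrix K and the endpoints are made-up values, and the weighting rule of [15] (combining the segment lengths and the angle θ) is only indicated, not reproduced.

```python
# Minimal sketch: candidate VP direction from two line segments on the
# Gaussian sphere.  K and the endpoint values are illustrative assumptions.
import numpy as np

K = np.array([[520.0, 0.0, 320.0],   # hypothetical camera intrinsics
              [0.0, 520.0, 240.0],
              [0.0, 0.0, 1.0]])
K_inv = np.linalg.inv(K)

def great_circle_normal(p_start, p_end):
    """Normal of the interpretation plane of an image line segment.

    The segment endpoints (pixels) are back-projected to rays; the plane
    through both rays and the camera center cuts the Gaussian sphere
    along the segment's great circle.
    """
    r0 = K_inv @ np.array([*p_start, 1.0])
    r1 = K_inv @ np.array([*p_end, 1.0])
    n = np.cross(r0, r1)
    return n / np.linalg.norm(n)

def candidate_vp(seg_a, seg_b):
    """Candidate VP direction: intersection of the two great circles."""
    n_a = great_circle_normal(*seg_a)
    n_b = great_circle_normal(*seg_b)
    v = np.cross(n_a, n_b)
    return v / np.linalg.norm(v)

# Two roughly parallel image segments (pixel endpoints, made up for the demo).
seg_a = ((100.0, 100.0), (400.0, 120.0))
seg_b = ((120.0, 300.0), (420.0, 330.0))
v1 = candidate_vp(seg_a, seg_b)

# The weight of this candidate would combine the segment lengths and the
# angle between the segments before being accumulated on the polar grid.
print("candidate VP direction:", v1)
```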
B. Absolute Rotation Optimization
Inspired by [6], [7], we propose a strategy that further optimizes the rotation of the camera frame $C_i$ by exploiting the extracted VPs and the set of three mutually orthogonal dominant directions $d_k$, each of which is a column vector. Specifically, given an image with a cluster of 3D parallel lines in the scene, these lines must be aligned with a dominant direction $d_k^i$ in the MW. Thus, given at least two clusters of lines in the image, the normal vector $s_i$ of the great circle on the Gaussian sphere of each line in the associated cluster must be perpendicular to the dominant direction of the cluster. Therefore, for the $i$-th camera frame $C_i$, we can formulate a least-squares problem to calculate the set of dominant directions $d_k^i$ in the current frame:

$$d_k^i = \arg\min_{\|d\|=1} \|S^T d\|^2, \qquad (1)$$

where $S \in \mathbb{R}^{3 \times n}$, $n$ is the number of lines in the cluster, and the normals $s_i$ forming the columns of $S$ can be obtained as $s_i = (K^{-1}\tilde{p}_s) \times (K^{-1}\tilde{p}_e)$, where $\tilde{p}_s$ and $\tilde{p}_e$ are the homogeneous endpoints of the line on the image plane. Finally, solving this problem with the SVD, we obtain the set of mutually orthogonal dominant directions $d_k^i$ in the current frame $C_i$. Also, since the directions of the VPs reflect the orientation of the current frame $C_i$ w.r.t. the MW, and the initial set $d_k$ computed from Eq. (1) (instead of being computed from the VPs in the initial frame, as in [7]) represents the orientation of the initial frame $C_0$ w.r.t. the MW, we can construct the following equation that connects the estimated VPs with the set of initial mutually orthogonal directions $d_k$:

$$vp_k^i \sim K R_{iw} d_k. \qquad (2)$$

Thus, as a result, we conclude that $\delta_k^i = R_{iw} d_k$, where $\delta_k^i$ is the Vanishing Direction (VD) of the VP $vp_k^i$.
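As an illustration of solving Eq. (1), a minimal sketch on synthetic data is given below: the dominant direction is recovered as the right-singular vector of $S^T$ with the smallest singular value.

```python
# Sketch: dominant direction of a line cluster as the unit vector most
# orthogonal to all great-circle normals (smallest right-singular vector).
import numpy as np

rng = np.random.default_rng(1)
d_true = np.array([1.0, 0.0, 0.0])
# Normals of the cluster's great circles are orthogonal to d_true (+ noise).
normals = np.cross(d_true, rng.normal(size=(10, 3)))
normals += 0.01 * rng.normal(size=normals.shape)
S = (normals / np.linalg.norm(normals, axis=1, keepdims=True)).T  # 3 x n

# minimize ||S^T d||^2 subject to ||d|| = 1  ->  SVD of S^T.
_, _, Vt = np.linalg.svd(S.T)
d_est = Vt[-1]
print("estimated dominant direction:", d_est)  # ~ +/- [1, 0, 0]
```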
Therefore, to further optimize the absolute rotation $R_{iw}$ of the current frame $C_i$ with respect to the condition of Eq. (2), we define the following cost function to minimize:

$$E(\omega_i) = \sum_k \left\| \delta_k^i - R_{iw}(\omega_i)\, d_k \right\|^2. \qquad (3)$$

Note that if the initial frame $C_0$ does not have at least two clusters with enough lines, we continue with the next frames until we find a frame $C_i$ that meets the condition. Additionally, $\omega_i$ in the Lie algebra is the mapping of $R_{iw}$ from the Lie group. We employ the Levenberg-Marquardt (LM) optimization to minimize the cost function $E(\omega_i)$. The Jacobians of the residuals of $E(\omega_i)$ take the standard form $\partial(R_{iw} d_k)/\partial \omega_i = -[R_{iw} d_k]_\times$. The initial value of $R_{iw}$ is obtained by optimizing both the reprojection error of the lines and the reprojection error of the points.
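A compact illustration of this refinement step is sketched below; the residual $r_k = \delta_k^i - R(\omega) d_k$ follows the cost of Eq. (3) as reconstructed above, and the synthetic data stand in for the measured vanishing directions.

```python
# Sketch: LM refinement of the absolute rotation from VP directions,
# assuming residuals r_k = delta_k - R(omega) d_k as in Eq. (3).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

d_world = np.eye(3)  # three orthogonal dominant MW directions (columns d_k)

# Simulated "measured" vanishing directions for a ground-truth rotation.
R_true = Rotation.from_rotvec([0.05, -0.02, 0.1])
delta = R_true.apply(d_world.T)          # one vanishing direction per row

def residuals(omega):
    R = Rotation.from_rotvec(omega)
    return (delta - R.apply(d_world.T)).ravel()

omega0 = np.zeros(3)                     # e.g., rotation from point/line VO
sol = least_squares(residuals, omega0, method="lm")
print("refined rotation vector:", sol.x)  # close to [0.05, -0.02, 0.1]
```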
C. Translation Refinement
Having optimized the global rotation $R_{iw}$ in the previous step, we can use this information to construct a linear system to refine the translation. Moreover, the availability of the global rotation $R_{iw}$ simplifies the initially non-linear reprojection error into a linear one. Specifically, starting from the projection equation $\tilde{p}_i \sim K (R_{iw} P_w + t_{iw})$ with the absolute rotation known, we obtain

$$\frac{u_i - c_x}{f_x} = \frac{[R_{iw} P_w + t_{iw}]^{(1)}}{[R_{iw} P_w + t_{iw}]^{(3)}}, \qquad \frac{v_i - c_y}{f_y} = \frac{[R_{iw} P_w + t_{iw}]^{(2)}}{[R_{iw} P_w + t_{iw}]^{(3)}}, \qquad (4)$$

where $[\cdot]^{(j)}$ denotes the $j$-th row of a vector. Re-arranging (4), we arrive at minimizing the linear system

$$A\, t_{iw} = b, \qquad A = \begin{bmatrix} 1 & 0 & -\frac{u_i - c_x}{f_x} \\ 0 & 1 & -\frac{v_i - c_y}{f_y} \end{bmatrix}, \qquad (5)$$

where $b$ collects the corresponding combinations of the rows of $R_{iw} P_w$. The system in Eq. (5) is a least-squares problem that can be solved within a RANSAC framework, in order to find the maximum inlier set from all pairs of observations of the current frame $C_i$, by solving the normal equations $A^T A\, t_{iw} = A^T b$.
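The sketch below illustrates the linear translation solve of Eq. (5) on synthetic, noise-free data; the intrinsics are made up, and a full implementation would wrap this solve in the RANSAC loop described above.

```python
# Sketch: linear translation refinement with known rotation (Eq. (4)-(5)).
import numpy as np

fx = fy = 520.0
cx, cy = 320.0, 240.0

def stack_system(points_w, pixels, R):
    """Build A t = b from 3D points, their pixel observations and known R."""
    rows_A, rows_b = [], []
    for P, (u, v) in zip(points_w, pixels):
        q = R @ P                       # rotated point, translation unknown
        ubar = (u - cx) / fx
        vbar = (v - cy) / fy
        # (q1 + t1) / (q3 + t3) = ubar  ->  t1 - ubar * t3 = ubar * q3 - q1
        rows_A.append([1.0, 0.0, -ubar]); rows_b.append(ubar * q[2] - q[0])
        rows_A.append([0.0, 1.0, -vbar]); rows_b.append(vbar * q[2] - q[1])
    return np.array(rows_A), np.array(rows_b)

# Synthetic check: random points, known pose, noiseless projections.
rng = np.random.default_rng(0)
R = np.eye(3)
t_true = np.array([0.1, -0.05, 0.2])
points_w = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))
cam = (R @ points_w.T).T + t_true
pixels = np.stack([fx * cam[:, 0] / cam[:, 2] + cx,
                   fy * cam[:, 1] / cam[:, 2] + cy], axis=1)

A, b = stack_system(points_w, pixels, R)
t_est, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares solution
print("estimated translation:", t_est)           # ~ [0.1, -0.05, 0.2]
```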
IV. EXPERIMENT EVALUATION
To validate the performance of the proposed SLAM system in terms of computation time and accuracy, we compare it with the state-of-the-art ORB-SLAM2 [8] in real-world scenarios using the TUM RGB-D benchmark [16]. All experiments were carried out on an AMD Ryzen 5 2600 6-core CPU. In this paper, we aim to demonstrate the effectiveness of exploiting VPs and dominant directions in a MW for pose optimization. For a fair comparison, we disable the loop closing function in both systems. This is because, when the loop closing module is enabled, the two systems converge to the same trajectory and have the same absolute pose error, so the contribution of our method would not be visible.
A. Localization Accuracy in the TUM RGB-D Benchmark
We test our method on the TUM RGB-D dataset [16], which consists of several real-world camera sequences containing a variety of scenes, such as cluttered areas and scenes with varying degrees of structure and texture, with ground-truth trajectories and images at 640×480 resolution recorded at full frame rate (30 Hz). Table I shows the absolute translation RMSE (Root Mean Square Error) of the compared VO and SLAM methods and the estimated trajectories on different sequences of the TUM RGB-D benchmark [16]. All trajectories were aligned to the ground truth with 7DoF before computing the ATE (Absolute Translation Error) with the script provided by the benchmark [16]. As shown in Table I, even with the loop closing module disabled, the accuracy of the proposed VP-SLAM is near the state of the art. This reflects our concern of keeping VP-SLAM as close to real time as possible while utilizing additional information from the geometry of the environment. Moreover, we propose two simple programmable modules for optimization and refinement, in contrast to ORB-SLAM2, which uses non-linear pose graph optimization in the back-end. Additionally, with the loop closing module disabled, the accumulating rotation error can be alleviated only to some extent, but as shown, the trajectory optimized by our method remains close to that of ORB-SLAM2 [8]. In Fig. 2 (left), we show the trajectory produced by our VP-SLAM, in contrast with ORB-SLAM2 in Fig. 2 (right). Finally, in Fig. 3, we present the line clusters computed after extracting the VPs.
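For reference, once the estimated trajectory has been aligned to the ground truth (e.g., by the benchmark's 7DoF alignment script), the ATE RMSE reduces to a one-line computation, as the sketch below illustrates.

```python
# Sketch: ATE RMSE between an aligned estimated trajectory and ground truth.
import numpy as np

def ate_rmse(traj_est, traj_gt):
    """Root mean square of per-pose translation errors (both Nx3 arrays,
    assumed time-associated and already 7DoF-aligned)."""
    err = traj_est - traj_gt
    return np.sqrt(np.mean(np.sum(err**2, axis=1)))

# Toy example with a small constant offset.
gt = np.cumsum(np.full((100, 3), 0.01), axis=0)
est = gt + 0.005
print(f"ATE RMSE: {ate_rmse(est, gt):.4f} m")  # ~0.0087 m
```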
B. Time Complexity
We also evaluate the runtime complexity of VP-SLAM against the state-of-the-art ORB-SLAM2 [8]. Table II summarizes the time required for each subtask in "Tracking" and "Local Mapping" for VP-SLAM and ORB-SLAM2 [8]. Note that the most time-consuming tasks in both systems are the local BA and the map feature creation. In conclusion, although line detection and VP extraction combined with the optimization strategy increase the time cost, the whole system can still perform in real time.
V. CONCLUSION
In this paper, we propose a real-time monocular visual SLAM system that exploits the structure of man-made environments to further optimize the pose. It is especially suitable for environments with strong geometric structure, because it can detect VPs and lines from a single image. We propose two methods: the first leverages the information of the VPs and the parallelism constraint of lines to construct an optimization strategy for the rotation, while the second uses the output of the former optimization to build a distinct refinement strategy for the translation. Finally, the experiments on benchmark datasets with real-world scenes show that the accuracy of the proposed system is close to the state-of-the-art ORB-SLAM2 [8]. Additionally, the performance remains real-time, and the results indicate that drift could be further reduced.
In the future, we intend to strengthen our system with a stronger optimization technique and test it in more indoor environments.
Clonal Dissemination of KPC-2, VIM-1, OXA-48-Producing Klebsiella pneumoniae ST147 in Katowice, Poland
Abstract Carbapenem-resistant Klebsiella pneumoniae (CRKP) is an important bacterium in nosocomial infections. In this study, CRKP strains, mainly isolated from fecal samples of 14 patients in three wards of a hospital in the Silesian Voivodeship, increased rapidly from February to August 2018. We therefore conducted microbiological and molecular studies of the CRKP isolates. The colonized patients had critical underlying diseases and comorbidities; one developed a bloodstream infection, and five died (33.3%). Antibiotic susceptibilities were determined by the E-test method. A disc synergy test confirmed carbapenemase production. CTX-Mplex PCR evaluated the presence of the resistance genes blaCTX-M-type, blaCTX-M-1, and blaCTX-M-9, and the genes blaSHV, blaTEM, blaKPC-2, blaNDM-1, blaOXA-48, blaIMP, and blaVIM-1 were detected with the PCR method. Clonality was evaluated by Multilocus Sequence Typing (MLST) and Pulsed-Field Gel Electrophoresis (PFGE). Six (40%) strains were of the XDR (Extensively Drug-Resistant) phenotype, and nine (60%) of the isolates exhibited the MDR (Multidrug-Resistant) phenotype. The ranges of the carbapenem minimal inhibitory concentrations (MICs, μg/mL) were as follows: doripenem (16 to >32), ertapenem (>32), imipenem (4 to >32), and meropenem (>32). PCR and sequencing confirmed the blaCTX-M-15, blaKPC-2, blaOXA-48, and blaVIM-1 genes in all strains. The isolates formed one large PFGE cluster (clone A). MLST assigned them to the emerging high-risk clone of the ST147 (CC147) pandemic lineage harboring the blaOXA-48 gene. This study showed that the K. pneumoniae isolates detected in the multi-profile medical centre in Katowice represented a single strain of the microorganism spreading in the hospital environment.
Introduction
Klebsiella pneumoniae is a critical multidrug-resistant (MDR) bacterium in humans responsible for numerous hospital infections linked to high morbidity and mortality, since treatment options are limited (Navon-Venezia et al. 2017). K. pneumoniae, from the family Enterobacteriaceae, occurs in the human and animal gastrointestinal tract microbiome. It is a commonly found opportunistic pathogen associated with the hospital environment and, overall, accountable for approximately a third of all Gram-negative infections. It has a role in extraintestinal infections, such as urinary tract infections, pneumonia, surgical site infections, cystitis, and life-threatening infections, including endocarditis and septicemia. It is also a significant cause of severe community-onset infections, such as necrotizing pneumonia, endogenous endophthalmitis, and pyogenic liver abscesses (Podschun and Ullmann 1998).
With ever-growing antibiotic resistance, K. pneumoniae is a pathogen recognized for its antibiotic resistance; hence, it is categorized as an ESKAPE organism, alongside other essential MDR pathogens (Boucher et al. 2009). The accumulation of antibiotic resistance genes (ARGs) by K. pneumoniae, through de novo mutations under antibiotic selective pressure and through the acquisition of plasmids and transferable genetic elements, gives rise to extensively drug-resistant (XDR) strains harboring a 'super resistome'. In the past twenty years, many high-risk (HiR) MDR and XDR K. pneumoniae sequence types have appeared that exhibit a great capacity for causing multicontinental outbreaks and continued global dissemination (Navon-Venezia et al. 2017).
Currently, the spread of carbapenem-resistant K. pneumoniae (CRKP) has become a severe problem in the molecular epidemiology of hospital infections.
Frequently, carbapenems serve as the last resort in the effective treatment of serious infections caused by multidrug-resistant bacteria. Enzymes that hydrolyze carbapenems, called carbapenemases, are the major cause of carbapenem resistance (Matsumura et al. 2017). The molecular class A, B, and D carbapenemases are rapidly disseminating worldwide, challenging the treatment of Gram-negative infections (Nordmann and Poirel 2014). Recent reports have demonstrated that various carbapenem-hydrolyzing enzymes are disseminated worldwide in CRKP isolates. Carbapenem resistance evolved quickly in Enterobacteriaceae in the past decade and became a growing global threat. The majority of studies on antibiotic-resistant K. pneumoniae focus on characterizing carbapenemase producers (KPC, NDM, VIM, and OXA-48), various clonal groups or complexes (e.g., CG15, CG17, CG258, or CC147), and epidemic plasmids (IncA/C, IncFII, IncL/M, and IncN) that have been suggested to participate in their global expansion (Nordmann and Poirel 2014). Carbapenemase co-producers have been reported in distinct geographic locations: European countries (France, Germany, Greece, Italy, and Poland), Israel, the United States, China, and West Asia (Turkey) (Baraniak et al. 2011; Nordmann and Poirel 2014; Baraniak et al. 2015; Guo et al. 2016; Lee et al. 2016; Zautner et al. 2017; Bukavaz et al. 2018).
The majority of KPC-producing microorganisms also express β-lactamases and possess genes conferring resistance to other antimicrobials, i.e., aminoglycosides, fluoroquinolones, or co-trimoxazole (Nordmann and Poirel 2014). Resistance rates vary significantly across countries; MDR K. pneumoniae is endemic in Mediterranean countries and in Eastern and South-Western Europe. This stems from ESβL production in more than 50-60% of strains, together with non-susceptibility to third-generation cephalosporins, fluoroquinolones, and aminoglycosides (Navon-Venezia et al. 2017).
In 2011, the National Reference Center for Susceptibility Testing (NRCTS) and the KPC-PL Study Group published the first report from Poland presenting the molecular characteristics of K. pneumoniae producing KPC carbapenemases (Baraniak et al. 2011). Over the reported period, the bacteria caused 1,067 infection outbreaks; among them, 123 were caused by K. pneumoniae, and higher numbers of outbreaks were reported from the Masovian and Silesian voivodeships (Baraniak et al. 2011). Poland belongs to the countries with the highest rates of K. pneumoniae resistance to all monitored groups of drugs, and these rates are twice as high as elsewhere in the European Union (EU)/European Economic Area (EEA). In these countries, resistant K. pneumoniae isolates constitute on average 20.5% of all multidrug-resistant strains (Bukavaz et al. 2018).
Given the abovementioned data and the increased frequency of isolation of CRKP strains from the hospital environment, we conducted a microbiological and molecular characterization of carbapenem-resistant K. pneumoniae isolates with emphasis on the antibiotic resistance profile, identification of ESβL genes, detection of carbapenemase genes, and the isolates' genetic relationship.
Materials and Methods
Hospital settings and sample collection. The study was performed in the Upper-Silesian Medical Centre of the Silesian Medical University in Katowice (GCM), one of the largest multi-profile medical centres and one of the largest hospitals in Poland. The hospital consists of 24 departments and treats over 160 thousand patients per year. Between February and August 2018, 15 non-duplicate CRKP isolates were collected from fecal samples of 14 patients admitted to the three hospital wards characterized below. The Department of Neurology with the Stroke Sub-department (NR) receives 1,750 admissions per year and has 14 rooms with 44 beds; the Department of Internal Medicine and Rheumatology (REU) receives 1,935 admissions per year and has 12 rooms with 37 beds; and the Department of Anaesthesia and Intensive Care (OAIT) receives 1,750 admissions per year and has five rooms with ten beds (Table I). A total of 505 Enterobacteriaceae isolates were obtained from the patients in these wards over one year.
Bacterial identification, antimicrobial susceptibility testing and phenotypic screening. Bacterial identification and preliminary susceptibility testing were performed using the automated VITEK® 2 Compact System (bioMérieux, France). The MICs of 23 antimicrobial agents were evaluated with E-test strips (AB BIODISK, bioMérieux, France). MIC values were interpreted according to the EUCAST breakpoints (EUCAST 2019). All isolates were screened phenotypically for the presence of KPC, OXA-48, and metallo-β-lactamases (MBL). Double-disc synergy tests (DDST) were carried out to confirm ESβL production (CLSI 2018) and to detect MBL production, as published previously (Matsumura et al. 2017; Doi et al. 2008). According to the protocol, the boronic acid combined-disk test was used, with meropenem (10 µg) as the antibiotic substrate and 3-aminophenylboronic acid (3-APBA) (300 µg, Sigma-Aldrich) as the inhibitor of KPC production. The detection of OXA-48 was performed in line with the EUCAST guidelines (Shaker et al. 2018). MDR and XDR were defined according to a standardized international document (Magiorakos et al. 2012).
PFGE. The genetic relatedness of the isolates was investigated using the Pulsed-Field Gel Electrophoresis (PFGE) method, following genomic DNA extraction and digestion with XbaI endonuclease (Fermentas, Lithuania), as described previously (Han et al. 2013). Salmonella enterica subsp. enterica serovar Braenderup strain H9812 (ATCC® BAA664™) and K. pneumoniae ATCC® BAA-1705™ were used as reference markers. PFGE banding patterns were compared using BioNumerics v.6.5 (Applied Maths, Belgium) software. The relatedness was determined by the unweighted pair group method with arithmetic mean (UPGMA), and the similarity of bands was calculated using the Dice coefficient.
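As an illustration of this clustering step, the sketch below computes Dice similarities between binary band-presence patterns and clusters them with average linkage (the UPGMA criterion); the band matrix is made up, and a real analysis would start from the BioNumerics band calls.

```python
# Sketch: Dice similarity of PFGE band patterns + UPGMA clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def dice_similarity(a, b):
    """Dice coefficient of two binary band-presence vectors."""
    shared = np.sum((a == 1) & (b == 1))
    return 2.0 * shared / (a.sum() + b.sum())

# Made-up band-presence matrix: rows = isolates, columns = band positions.
bands = np.array([
    [1, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 0, 1],
    [1, 1, 1, 1, 0, 1],
    [0, 1, 1, 0, 1, 0],
])

n = len(bands)
# Condensed distance vector (1 - Dice) for scipy's linkage.
dist = [1.0 - dice_similarity(bands[i], bands[j])
        for i in range(n) for j in range(i + 1, n)]

tree = linkage(dist, method="average")          # UPGMA = average linkage
clusters = fcluster(tree, t=0.3, criterion="distance")
print("cluster assignment per isolate:", clusters)  # e.g., [1 1 1 2]
```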
MLST. Multilocus Sequence Typing (MLST) was conducted using seven conserved housekeeping genes (gapA, infB, mdh, pgi, phoE, rpoB, and tonB) (Guo et al. 2016). A complete protocol of the MLST procedure, including the allelic profile and sequence type (ST) assignment techniques, is available in the MLST databases.
Results
Patients' characteristics. The patients' characteristics are presented in Table I. Over seven months, 15 K. pneumoniae isolates were collected from fourteen patients aged 65.5 years on average. The average duration of hospitalization was 37 days. All patients were found to be colonized by similar CRKP isolates. One patient developed a bloodstream infection (only the blood isolate was available) and was successfully treated with linezolid and colistin. Two of the patients who improved were treated with a carbapenem. Five patients died; their deaths were not associated with any particular etiology, though. CRKP strains were isolated at various stages of the patients' hospital stays. The average time from patient admission to isolation of the microorganism was 16 days; in patient no. 3 the corresponding period was 64 days, while in patients nos. 1 and 5 the period was only five days. None of the patients had traveled abroad shortly before their hospital admissions, so travel could not explain the colonization.
Epidemiologic investigation. The epidemic curve of the studied K. pneumoniae isolates exhibited a bimodal distribution of cases, with two peaks separated by 148 days (Fig. 1). This could indicate two discrete periods (February-March 2018 and July-August 2018), with no observed cases in January, April, May, and June. The NR ward was involved in both distinct periods. Following the index case, four new cases were registered in the OAIT, six cases in the NR, and three in the REU (Table I).

Antimicrobial resistance. The antimicrobial resistance profiles of the isolates are listed in Table II. We classified nine (60%) strains as MDR and six (40%) as XDR. All isolates showed resistance to penicillins, cephalosporins, carbapenems (doripenem, ertapenem, meropenem), a quinolone (ciprofloxacin), aminoglycosides (amikacin, gentamicin, netilmicin, tobramycin), and other antibiotics (aztreonam, trimethoprim/sulfamethoxazole, fosfomycin). Eleven isolates (73.3%) were sensitive to amikacin, two isolates (13.3%) were in the intermediate range with MIC values equal to 12 µg/ml, and two isolates (13.3%) were resistant. Moreover, seven strains (42.8%) were resistant to tigecycline, and five (35.7%) were in the intermediate range with MIC values above 2 µg/ml. Interestingly, two isolates (13.3%) showed susceptibility to imipenem, and the resistance of one isolate (6.7%) to imipenem was within the intermediate range.
Phenotypic screening of carbapenemases and associated β-lactamases. In the K. pneumoniae isolates (n = 15, 100%), the ESβL mechanism was not found in the phenotypic tests. MBL production was also not phenotypically detected in these isolates. However, all strains were positive for K. pneumoniae carbapenemase (KPC) production by the modified Hodge test (n = 15, 100%). In addition, phenotypic detection of class D carbapenemases using the carbapenemase detection set with a temocillin disc (30 µg) revealed that all isolates were positive for OXA-48 (n = 15, 100%).
β-lactamase genes detection. PCR amplification and sequencing analysis confirmed the presence of blaKPC-2, blaOXA-48, and blaVIM-1 in all 15 of the K. pneumoniae isolates. In addition, the only identified blaCTX-M variant in all 15 strains was blaCTX-M-15. The results are presented in Fig. 2. No isolate was found to be positive for blaSHV, blaTEM, blaNDM-1, or blaIMP.
Molecular epidemiology. All fifteen of the isolates concerned belonged to a single PFGE cluster (clone A) (Fig. 2).
Discussion
The presented work stems from the problem of clonal dissemination of KPC-2, VIM-1, and OXA-48-producing K. pneumoniae ST147 noticed in one hospital in Katowice, the Silesian Voivodeship, Poland.
K. pneumoniae belongs to the Gram-negative bacteria of the family Enterobacteriaceae. These pathogens can easily cause hazardous epidemic outbreaks and spread globally in the form of clones with increased virulence and epidemicity. These features are associated with the widespread presence of Enterobacteriaceae in humans (gastrointestinal carriage) and their great importance as frequent etiologic agents of infection in facilities that provide inpatient and outpatient treatment (Navon-Venezia et al. 2017). Studies that focus on K. pneumoniae are therefore desirable, as they provide vital scientific grounds for relevant public health action in the field of molecular epidemiology. The subject of this study was a particular group of microorganisms classified as bacterial alarm agents (BCA) of particular virulence or resistance, i.e., carbapenemase-producing K. pneumoniae strains belonging to the order Enterobacterales. All K. pneumoniae isolates were derived from cases of intestinal colonization in the patients studied. This work elaborated epidemiological assessments of the occurrence of a BCA epidemic threat in the multi-profile medical center in Katowice. It was also based on suspicion of an epidemic outbreak, since we noticed clonal dissemination of several CRKP isolates and one case of symptomatic hospital infection with a blood CRKP strain. This strain was isolated in the given department 48 hours after the patient's admission. The colonization of fourteen consecutively hospitalized patients with the same CRKP strain was also reported. The CRKP isolates had unique properties, such as the accumulation of numerous antibiotic resistance mechanisms and the capability to persist in a hospital environment. As a result, special attention has been paid to the health care risk posed by CRKP strains (Navon-Venezia et al. 2017). The genes encoding KPC, MBL, and OXA-48 are located on so-called mobile genetic elements (plasmids, transposons) that allow effective spreading within bacterial populations. The genes encoding KPC carbapenemases (blaKPC) are located on the Tn4401 transposon carried on plasmids with different types of replicons (IncF, IncL/M, ColE1, IncR, and IncX3). These plasmids show the ability to conjugate and propagate the blaKPC genes to new bacterial populations (Baraniak et al. 2011).
On the other hand, the most important families of acquired MBLs are the IMP and VIM enzymes, occurring both in non-fermenting and intestinal bacilli. The blaIMP and blaVIM genes always exist as cassettes inserted into integrons. In turn, integrons can be located on transposons and move with them between DNA molecules (Izdebski et al. 2018b). ESβL enzymes exist mainly as acquired, plasmid-encoded β-lactamases. ESβL-encoding genes are often located on conjugative plasmids (IncFII, IncI), including those with a broad host range (IncA/C, IncL/M). This allows them to spread rapidly, also between strains belonging to different species. Frequently noted are ESβL enzymes encoded by blaCTX-M genes, which are active against cefotaxime and localized on plasmids in Enterobacterales (Izdebski et al. 2018a; 2018b).
The isolates' antibiotic susceptibility testing confirmed the β-lactam antibiotic resistance profile typical of carbapenemase producers, as presented elsewhere (Zacharczuk et al. 2011; Guo et al. 2016; Izdebski et al. 2018a). All strains expressed high-level resistance to doripenem, ertapenem, and meropenem, although they showed varying levels of resistance to imipenem. Ertapenem is the carbapenem suggested as most suitable for detecting the presence of blaKPC, which was confirmed by the high MIC values for ertapenem (>32 μg/ml) in the present study. The presence of KPC does not always result in the expression of in vitro resistance to imipenem or meropenem, as this carbapenemase shows limited efficiency of β-lactam ring hydrolysis in carbapenem molecules. In this study, the E-test showed resistance to ertapenem of strains 7/25804, 11/25808, and 13/25810 and susceptibility or intermediate susceptibility to imipenem (MICs 4-8 μg/ml), while PCR confirmed the presence of the blaKPC-2 gene in all three isolates. The imipenem and meropenem resistance of the other strains might have resulted from decreased outer membrane permeability or from other β-lactamases providing a synergistic effect with KPC (Protonotariou et al. 2018). The strains' resistance to third-generation cephalosporins confirmed this hypothesis, as did the production of various β-lactamases by K. pneumoniae. The variability in the carbapenem susceptibility of CRKP strains illustrates the difficulties encountered when trying to identify carbapenemase-producing strains using E-test resistance profiles alone. To reliably identify such strains, additional and more specific techniques should be applied (Xu et al. 2005; Magiorakos et al. 2012; Han et al. 2013; Caltagirone et al. 2017; Bukavaz et al. 2018).
The blaKPC-2, blaOXA-48, and blaVIM-1 carbapenemase-encoding genes and an additional blaCTX-M-15 cefotaximase-encoding gene were found to coexist in all the strains concerned. Lee et al. (2016) found that K. pneumoniae can carry multiple β-lactamase genes in the same strain, which could be partly responsible for the pathogen's selective success. All types of the bla genes have been reported in combinations for this species (Lee et al. 2016). So far, K. pneumoniae producing KPC-2 and KPC-3 carbapenemases have frequently been found in Europe, including Poland (Baraniak et al. 2011; Zacharczuk et al. 2011; Baraniak et al. 2015; Grundmann et al. 2017). On the other hand, Matsumura et al. (2017) presented the global dissemination of Enterobacteriaceae strains carrying the blaVIM genes. VIM-producing Enterobacteriaceae are generally present in Europe, particularly in Greece, Spain, Hungary, and Italy (Matsumura et al. 2017). Significantly, an increasing problem with VIM-producing K. pneumoniae has been recorded in Greece, where these strains have accounted for the deaths of 48% of patients over the last two years (Protonotariou et al. 2018). In Poland, Izdebski et al. (2018b) collected one hundred and nineteen cases of VIM/IMP-positive Enterobacteriaceae in the period from 2006 to 2012 and showed many specific or entirely new features of these microorganisms, undoubtedly related to the properties of pathogens isolated in the abovementioned central-southern European countries, including several likely imported from abroad (e.g., from Greece) (Izdebski et al. 2018b).
The strains containing OXA-48 β-lactamases provide another example of the rapid immigration of harmful microorganisms to Europe from their endemic regions, mostly the eastern and southern Mediterranean countries (Egypt, Morocco, and Turkey). OXA-48-like-positive strains cause hospital epidemic outbreaks in Belgium, France, the Netherlands, Germany, Spain, and other countries (Nordmann and Poirel 2014; Grundmann et al. 2017). In the most recent report by the OXA-48-PL Study Group monitoring the spread of OXA-48-positive Enterobacteriaceae strains, the authors demonstrate that these strains have been isolated relatively rarely in Poland (Izdebski et al. 2018a). This was confirmed by other European and international research teams (Nordmann and Poirel 2014; Grundmann et al. 2017). Conjugative transfer of a specific plasmid group is regarded as the central mechanism in the spread of OXA-48 among Enterobacteriaceae populations (Nordmann and Poirel 2014).
The high MIC values (>256 µg/ml) for cefotaxime may suggest that the strains produced CTX-M β-lactamases. Multiplex PCR and sequencing confirmed the presence of blaCTX-M-15 in all isolates. The presence of various CTX-M β-lactamases in K. pneumoniae in Poland has been confirmed by other research teams (Baraniak et al. 2011; Izdebski et al. 2018a; 2018b). No blaSHV or blaTEM genes were identified among the isolates. These results are in line with the persisting trend of a diminished prevalence of hospital-associated K. pneumoniae strains producing SHV or TEM β-lactamases, which has also been noted by other scientists (Rodrigues et al. 2014). The type of ESβLs present in K. pneumoniae shifted in the 2000s, when the acquisition of plasmids and transposons encoding blaCTX-M-type ESβLs caused hospital outbreaks, consequently resulting in the predominance of CTX-M-producing strains (Calbo and Garau 2015).
Among the non-β-lactam antibiotics, colistin showed the highest activity towards the strains concerned. Similar results have been published by other authors (Zacharczuk et al. 2011). Among the aminoglycosides, amikacin showed the highest activity. The vast majority of strains (92.8%) were susceptible to amikacin, unlike in China, where total K. pneumoniae resistance to amikacin was demonstrated (Guo et al. 2016). The other aminoglycosides (gentamicin, netilmicin, and tobramycin) were inactive against the CRKP isolates concerned, in line with data reported by other authors (Zacharczuk et al. 2011; Guo et al. 2016). On the other hand, Bukavaz et al. (2018) presented different aminoglycoside susceptibility profiles in CRKP strains. These discrepancies are most likely due to the different quantities or origins of the strains concerned, though they might also be related to the antibiotic policies of particular hospitals or wards.
The present analysis showed a relatively high percentage of isolates resistant to tigecycline (42.8%), a synthetic minocycline analogue with a broad spectrum of activity. This antibiotic is successfully applied to treat infections caused by multi-resistant bacterial strains, including K. pneumoniae. During tigecycline treatment, Enterobacteriaceae representatives, including CRKP strains, may become resistant to the antibiotic, as evidenced by Pfarrell et al. (2018). They demonstrated cross-resistance between tigecycline and minocycline in resistant Enterobacteriaceae strains due to MDR efflux pumps, which remove drugs from cells. The distribution of acquired tigecycline resistance can vary depending on geographic location or time; therefore, it is recommended to consider data on local resistance, particularly when treating severe infections (Pfarrell et al. 2018).
The genetic similarity of the CRKP isolates was determined by comparing PFGE profiles (Han et al. 2013). This method has been widely used by European and non-European research teams that study epidemic outbreaks of K. pneumoniae and has evidenced the global spread of this species' epidemic clones (Guo et al. 2016; Bukavaz et al. 2018; Protonotariou et al. 2018). All K. pneumoniae strains analyzed in this study belonged to ST147 (CC147). Navon-Venezia et al. (2017) reported that the ST147 K. pneumoniae clone has spread globally. The involvement of this pandemic clone in the spread of CTX-M-15 and other carbapenemases in European, Asian, and Middle Eastern countries has been confirmed by others (Rodrigues et al. 2014; Guo et al. 2016; Zautner et al. 2017). The literature data on K. pneumoniae clonal organization have been confirmed by research focused on the molecular epidemiology of infections due to antibiotic-resistant Enterobacteriaceae (Guo et al. 2016; Protonotariou et al. 2018).
Because of the clonal nature of the isolates studied, the clinical data on their origin were reviewed to examine whether the same clonal type was circulating in the three different hospital wards. The study focused mainly on stroke patients or patients suffering from cardiovascular complications who were transferred to multiple-bed hospital rooms in the Stroke Recovery Units, which were the only places where they were in contact. Moreover, it is plausible that the neurology ward played a major role in disseminating the bacteria throughout the hospital. The spread of the isolates across three different hospital wards could imply that the personnel, who can access all hospital wards freely, unlike the patients and their relatives, might have transmitted the infections. The scientific literature emphasizes the role of hospitalization (and its duration) in the process of bacterial colonization. The frequency of gastrointestinal colonization with K. pneumoniae strains in hospitalized patients was high in our study. Gastrointestinal colonization with CRKP is an important risk factor for infections with strains of the same phenotype, due to the easy transfer of the genes and the strains' persistence in hospital environments, which hinders their eradication. This research evidences a K. pneumoniae adaptation typical of endemic hospital strains, corroborated by the accumulation of locally spreading resistance mechanisms against the antibiotics most commonly used for therapeutic purposes.
Ethics approval
The study was approved by the Bioethics Committee of the Jagiellonian University in Krakow, Poland (KBET/1072.6120.264.2019).
Authors' contributions
DO collected the data, performed the molecular analysis and drafted the manuscript; HK-C collected the isolates with clinical data, performed the hospital laboratory analysis and coordinated the microbiological analysis; MB consulted the cases and edited the manuscript; MBW supervised the research and analysis, coordinated and edited the manuscript.
Funding
The present work was financially supported by Jagiellonian University Medical College in Krakow (Poland), Grant No: K/ZDS/007829.
Modal-aware Features for Multimodal Hashing
Many retrieval applications can benefit from multiple modalities, e.g., text that contains images on Wikipedia, for which how to represent the multimodal data is the critical component. Most deep multimodal learning methods typically involve two steps to construct the joint representations: 1) learning multiple intermediate features, one per modality, using separate and independent deep models; 2) merging the intermediate features into a joint representation using a fusion strategy. However, in the first step, these intermediate features have no prior knowledge of each other and cannot fully exploit the information contained in the other modalities. In this paper, we present a modal-aware operation as a generic building block to capture the non-linear dependences among the heterogeneous intermediate features, which can learn the underlying correlation structures in the other modalities as early as possible. The modal-aware operation consists of a kernel network and an attention network. The kernel network is utilized to learn the non-linear relationships with the other modalities. Then, to learn better representations for binary hash codes, we present an attention network that finds the informative regions of these modal-aware features that are favorable for retrieval. Experiments conducted on three public benchmark datasets demonstrate significant improvements in the performance of our method relative to state-of-the-art methods.
I. INTRODUCTION
Multimodal hashing [1] is the task of embedding multimodal data into a single binary code, which aims to improve performance by using complementary information provided by the different types of data sources. Since good representations are important for multimodal hashing, in this paper, we focus on developing a better feature learning approach.
To learn the representations, multimodal fusion [2] has been proposed, which aims to generate a joint representation from two or more modalities in favor of the given task. Multimodal fusion can be mainly divided into two categories [2]: model-agnostic approaches [3] and model-based approaches [4]. The model-agnostic methods do not use a specific machine learning method. According to the data processing stage, model-agnostic methods can be mainly split into early and late fusion. Early fusion immediately combines multiple raw/preprocessed data streams into a joint representation. In contrast, late fusion performs integration after all of the modalities have made decisions. The model-based approaches fuse the heterogeneous data using different machine learning models, e.g., multiple kernel learning [5], graphical models [6], and neural networks [7]. Recently, deep multimodal fusion has attracted much attention because it is able to extract powerful feature representations from raw data. As shown in Figure 1 (A), the common practices for deep multimodal fusion are as follows [8], [9], [10]: 1) Each modality starts with several individual neural layers to learn an intermediate feature.
2) These multiple intermediate features are merged into a joint representation via a fusion strategy. Such a fusion approach is referred to as intermediate fusion [11], because the powerful intermediate features obtained by deep neural networks (DNNs) are merged to construct the joint representation. Deep multimodal learning has been shown to achieve remarkable performance on many machine learning tasks, such as deep cross-modal hashing [12] and deep semantic multimodal hashing [13].
While they have achieved great success, most existing methods focus on designing better fusion strategies, e.g., gated multimodal units (GMUs) [14] and multimodal compact bilinear pooling (MCB) [15]. In the context of multimodal hashing, two factors are considered in the proposed modal-aware operation. The first consideration is how to learn the non-linear dependences on the other modalities. Inspired by kernel methods [5], we present a kernel network to learn the underlying correlation structures in the other modalities. Given two intermediate features from two modalities, we first calculate the kernel similarities, i.e., dot-product similarities, between the two features. Then, the similarities are used as weights to reweight the original features. The second consideration is how to learn better intermediate features for binary hash codes. Binary representations always introduce information loss compared to the original real values, e.g., each bit has only two values: 0 or 1. To reduce the information loss, we further propose an attention network that focuses on selecting the informative parts of the multimodal data. The uninformative parts are removed and are not used to encode the binary codes. Thus, this method is able to alleviate the information loss to some extent, because the binary codes are generated from the informative parts of the multimodal data that are favorable for retrieval. To fully utilize the modalities, all of the intermediate features are incorporated to learn the attention maps.
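To make the kernel reweighting concrete, the following minimal sketch (our own illustration with made-up shapes, not the paper's exact network) computes dot-product similarities between image region features and a text feature and uses them to reweight the image features; the softmax normalization is an assumption.

```python
# Minimal sketch of dot-product kernel reweighting between two modalities.
# Shapes and the softmax normalization are illustrative assumptions.
import numpy as np

def modal_aware_reweight(img_feats, txt_feat):
    """Reweight image region features by their similarity to the text.

    img_feats: (num_regions, dim) intermediate image features.
    txt_feat:  (dim,) intermediate text feature.
    """
    sims = img_feats @ txt_feat                  # dot-product kernel similarities
    weights = np.exp(sims - sims.max())
    weights /= weights.sum()                     # normalize to an attention map
    return img_feats * weights[:, None]          # reweighted (modal-aware) features

rng = np.random.default_rng(0)
img_feats = rng.normal(size=(49, 128))           # e.g., a 7x7 conv feature map
txt_feat = rng.normal(size=128)
aware = modal_aware_reweight(img_feats, txt_feat)
print(aware.shape)                               # (49, 128)
```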
The main contributions of this paper can be summarized as follows.
• We propose a modal-aware operation to learn the intermediate features. This operation can learn the information contained in the other modalities prior to fusion, which is helpful for better capturing data correlations.
• We propose a kernel network to capture the non-linear dependences and an attention network to find the informative regions. These two networks learn better intermediate features for generating binary hash codes.
• We conduct extensive experiments on three multimodal databases to evaluate the usefulness of the proposed modal-aware operation. Our method yields better performance compared with several state-of-the-art baselines.
II. RELATED WORK

A. Multimodal Fusion
Multimodal fusion is an important step in multimodal learning. A simple approach for multimodal fusion is to concatenate or sum the features to obtain a joint representation [16]. For instance, Hu et al. [17] concatenated text embeddings and visual features for image segmentation. Reconstruction methods have also been proposed to fuse the multimodal data. For example, autoencoders [18] and deep Boltzmann machines [19] were trained to reconstruct both modalities with only one modality as the input. Subsequently, inspired by the success of bilinear pooling and gated recurrent networks, Fukui et al. [15] proposed multimodal compact bilinear pooling to efficiently combine multimodal features, and John et al. [14] proposed a gated multimodal unit to determine how much each modality affects unit activation. Liu et al. [20] multiplicatively combined a set of mixed source modalities to capture cross-modal signal correlations. Although many approaches have been proposed for multimodal fusion, these deep learning methods do not fully explore the dependences among the modalities prior to the fusion operations. In this paper, we argue that capturing the dependences among the heterogeneous modalities will benefit multimodal fusion.
B. Multimodal Retrieval
A related line of work is cross-modal hashing [21]. Given a query from one modality, the goal of cross-modal hashing is to retrieve the relevant data from another modality. For example, cross-view hashing (CVH) [22] and semantic correlation maximization (SCM) [23] use hand-crafted features. Deep cross-modal hashing (DCMH) [12] and pairwise relationship guided deep hashing (PRDH) [24] are deep-network-based methods. Attention-aware deep adversarial hashing [25] and self-supervised adversarial hashing (SSAH) [26] apply adversarial learning to generate better binary codes. Although many approaches have been proposed for cross-modal hashing, our multimodal hashing differs from cross-modal hashing: the proposed multimodal hashing aims to learn joint representations rather than coordinated representations, where the joint approach combines multiple samples into the same representation space, while the coordinated approach processes the multiple data streams separately and enforces similarity preservation among the different modalities [2].
Other similar works include those on multi-view hashing, which leverages multiple views to learn better binary codes. Some representative studies include multiple feature hashing (MFH) [27], composite hashing with multiple information sources (CHMIS) [28], multi-view latent hashing (MVLH) [29], dynamic multi-view hashing (DMVH) [30], and so on. In this paper, we only consider multimodal data, not multiple views, e.g., SIFT and HOG features from the same image modality.
Limited attention has been paid to multimodal hashing. Wang et al. [1] proposed deep multimodal hashing with orthogonal regularization to exploit the intra-modality and inter-modality correlations. Cao et al. [31] proposed an extended probabilistic latent semantic analysis (pLSA) to integrate the visual and textual information. In this paper, we focus on learning better intermediate features for multimodal hashing.
III. OVERVIEW OF DEEP MULTIMODAL HASHING
In this section, we briefly summarize the deep multimodal hashing framework.
Let S = {S_i}_{i=1}^n denote a set of instances, where each instance is represented in multiple modalities. For ease of presentation, we only consider two modalities, i.e., image and text, to explain our main idea. We denote the instance by S_i = (I_i, T_i, Y_i), where I_i and T_i are the image and text descriptions of the i-th instance, and Y_i is the corresponding ground-truth label. Let H = {H_i}_{i=1}^n denote the binary codes, where H_i ∈ {−1, 1}^l is the l-dimensional binary code associated with S_i. The aim of multimodal hashing is to learn hash functions that encode the instance S_i into one binary code H_i while preserving the similarities between the instances. For example, if S_i and S_j are similar, the Hamming distance between H_i and H_j should be small; when S_i and S_j are dissimilar, the Hamming distance should be large. Different from the unimodal setting, each instance consists of multiple unimodal signals, so combining these signals into a joint representation becomes a critical step. Deep multimodal learning (DML) approaches have been shown to achieve remarkable performance because they can learn powerful features from all of the modalities. Merging these powerful features into a joint representation leads to better and more flexible multimodal fusion.
An illustration of a deep network for multimodal hashing is shown in Figure 2. The network is divided into three sequential parts: 1) the feature learning module, which learns the efficient intermediate features from the image and text raw data; 2) the multimodal fusion module, which merges the two intermediate features into a joint representation; and 3) the hashing module, which encodes the joint representations to the binary codes, followed by a similarity-preserving loss.
In the feature learning module, convolutional layers are applied to produce powerful feature maps for the image modality. The images go through several convolutional layers to obtain high-level intermediate feature maps. For the text modality, a feed-forward neural network with stacked fully-connected layers is utilized to encode the text into semantic text features.
In the fusion module, with the two intermediate features, a fusion strategy is utilized to obtain a joint representation. Many methods for fusion have been proposed, e.g., concatenation, gated multimodal units (GMUs) [14] and multimodal compact bilinear pooling (MCB) [15].
In the hashing module, the joint representation is mapped into a feature vector with the desired length, e.g., an l-bit approximate binary code. Then, the similarity-preserving loss is used to preserve the relative similarities of multimodal data.
However, in the above deep multimodal hashing, these intermediate features are learned separately and have no prior knowledge of other modalities before the fusion.

[Figure caption: The image feature maps f^I with size H × W × C and the text feature vector f^T with feature length K. "⊗" denotes matrix multiplication and "⊙" denotes element-wise multiplication. "conv" and "fc" denote the convolutional and fully-connected layers, respectively. "GAP" represents the global average pooling layer.]
In this paper, we present a modal-aware operation that aims to learn better intermediate feature representations. It contains a kernel network that aims to learn the correlations among different modalities and an attention network that finds the informative regions. These two aspects are described in detail in the next section.
IV. MODAL-AWARE OPERATION
In this section, we present a modal-aware operation that consists of two parts: a kernel network and an attention network.
A. Kernel Network
The kernel network takes two intermediate features as inputs: the image feature maps and the text feature vector. More specifically, suppose that f^I ∈ R^{H×W×C} represents the feature maps of the image modality, where H, W and C are the height, width and number of channels, respectively, and f^T ∈ R^K is the corresponding textual feature, where K is the feature length.
Inspired by the non-local features [32] and kernel methods, the outputs of the kernel network are defined as

f̂^I = K_I(f^I, f^T) · f^I,   f̂^T = K_T(f^I, f^T) · f^T,   (1)

where K_I(x, y) and K_T(x, y) are the kernel functions that measure the similarity between the inputs x and y. We use kernel methods to exploit the correlation structures obtained from other modalities. In Eq. (1), the intermediate features of the image modality are learned from both the textual and image features. First, the kernel similarity between the image feature and the textual feature is calculated. Then, this similarity is used to reweight the original feature. Thus, using these operations, the image feature is embedded with textual information. The same approach is used for the text modality. We note that we use different kernel functions because the textual feature is a one-dimensional vector while the image feature maps are a three-dimensional tensor.
To train the kernel network in an end-to-end manner, the kernel function K(x, y) is further expressed as an inner product in another space H, which is reformulated as

K(x, y) = ⟨φ(x), ϕ(y)⟩,   (2)

where φ(·) and ϕ(·) are two mapping functions that project the data into another space. Since we use deep networks to learn the multimodal data, we also design two networks as these two mapping functions. That is, a convolutional layer and a fully-connected layer are utilized as the mapping functions: φ(·) is a convolutional layer and ϕ(·) is a fully-connected layer. Figure 3 shows the specific structure of the kernel network. For the image modality, the network takes the feature maps f^I and the textual vector f^T as inputs. The approach consists of three parts: 1) two mapping functions φ_I(f^I) (a convolutional layer) and ϕ_I(f^T) (a fully-connected layer) are first learned; 2) the kernel similarity is calculated using an inner product layer; and 3) the original features are reweighted using the kernel similarity. In the first part, φ_I is a convolutional layer with 1 × 1 kernel size, and ϕ_I is a single-layer neural network with transformation matrix W ∈ R^{K×C} that maps the textual feature to the same dimension as the visual features:

V^I = φ_I(f^I),   T^I = ϕ_I(f^T) = W^⊤ f^T.

Since V^I is a tensor while T^I is a vector, we first reshape the feature maps by flattening the height and width of the original features: V^I = [V^I_1, ..., V^I_M], where V^I_i ∈ R^C and M = H × W. The inner products between these M features and the text feature T^I can then be calculated. The output f̂^I is defined elementwise as

f̂^I_i = ⟨V^I_i, T^I⟩ · f^I_i,

where f̂^I_i is the i-th vector, corresponding to V^I_i, and f^I_i is the i-th vector of the reshaped original feature maps. A similar approach is used for the text modality. First, a global average pooling (GAP) layer reduces f^I with dimensions H × W × C to dimensions 1 × 1 × C by taking the average of each H × W feature map. Let f̄^I denote the output vector of the GAP layer. Since f̄^I is a vector, φ_T and ϕ_T are two fully-connected layers:

φ_T(f̄^I) = W_{φ_T}^⊤ f̄^I,   ϕ_T(f^T) = W_{ϕ_T}^⊤ f^T,

where φ_T is parameterized by the transformation matrix W_{φ_T} ∈ R^{C×K} and ϕ_T by W_{ϕ_T} ∈ R^{K×K}. Finally, the output for the text modality can be formulated as

f̂^T = ⟨φ_T(f̄^I), ϕ_T(f^T)⟩ · f^T.
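A minimal PyTorch sketch of the kernel network described above may make the tensor shapes concrete. The class and attribute names are ours (we write `psi` for the paper's second mapping function ϕ); this is a sketch under the stated dimension conventions, not a reference implementation.

```python
import torch
import torch.nn as nn

class KernelNetwork(nn.Module):
    """Kernel network: reweight each modality by cross-modal kernel similarities."""
    def __init__(self, C, K):
        super().__init__()
        self.phi_I = nn.Conv2d(C, C, kernel_size=1)  # phi_I: 1x1 convolution
        self.psi_I = nn.Linear(K, C)                 # maps f_T into R^C (W in R^{KxC})
        self.phi_T = nn.Linear(C, K)                 # W_{phi_T} in R^{CxK}
        self.psi_T = nn.Linear(K, K)                 # W_{psi_T} in R^{KxK}

    def forward(self, f_I, f_T):
        N, C, H, W = f_I.shape
        # image branch: reweight each of the M = H*W spatial vectors
        V = self.phi_I(f_I).flatten(2).transpose(1, 2)    # (N, M, C)
        T = self.psi_I(f_T).unsqueeze(2)                  # (N, C, 1)
        sim = torch.bmm(V, T)                             # (N, M, 1) inner products
        f_I_hat = f_I.flatten(2).transpose(1, 2) * sim    # reweight original features
        f_I_hat = f_I_hat.transpose(1, 2).reshape(N, C, H, W)
        # text branch: a single scalar kernel similarity per sample
        f_I_bar = f_I.mean(dim=(2, 3))                    # global average pooling
        sim_T = (self.phi_T(f_I_bar) * self.psi_T(f_T)).sum(dim=1, keepdim=True)
        f_T_hat = f_T * sim_T
        return f_I_hat, f_T_hat
```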
B. Attention Network
Inspired by how humans process information, we propose an attention network that adaptively focuses on salient parts to learn more powerful intermediate features. To compute the attention efficiently, we aggregate information from all intermediate features. That is, we exploit both features, rather than using each independently, to locate the informative regions. The detailed operations are described below. Figure 4 shows the specific structure of the attention network. First, the visual feature maps f̂^I are forwarded to a global average pooling layer to produce a visual vector F^I. Then, we concatenate the visual and textual features as [F^I; f̂^T] and forward the concatenation to a single-layer neural network followed by a softmax function to obtain the attention distributions:

a^I = softmax(W_I^⊤ [F^I; f̂^T] + b_I),   a^T = softmax(W_T^⊤ [F^I; f̂^T] + b_T),
where W_I ∈ R^{(C+K)×C} and W_T ∈ R^{(C+K)×K} are transformation matrices, and b_I and b_T are model biases. Here, a^I is also called the channel attention map [33], which exploits the inter-channel relationship of the features. The main difference is that our method uses both the visual and textual features from different modalities to find the salient channels. Then, element-wise multiplication is applied to obtain the final outputs f̃^I and f̃^T, which are defined as

f̃^I(:, :, i) = a^I_i · f̂^I(:, :, i),   f̃^T_i = a^T_i · f̂^T_i,

where f̂(:, :, i) is the i-th channel, of size H × W, and a_i is the i-th value in the vector a.
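A matching PyTorch sketch of the attention network, again with hypothetical names:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionNetwork(nn.Module):
    """Channel attention computed from the concatenated multimodal features."""
    def __init__(self, C, K):
        super().__init__()
        self.fc_I = nn.Linear(C + K, C)   # W_I in R^{(C+K) x C}
        self.fc_T = nn.Linear(C + K, K)   # W_T in R^{(C+K) x K}

    def forward(self, f_I_hat, f_T_hat):
        F_I = f_I_hat.mean(dim=(2, 3))               # GAP -> (N, C)
        joint = torch.cat([F_I, f_T_hat], dim=1)     # (N, C + K)
        a_I = F.softmax(self.fc_I(joint), dim=1)     # channel attention map a^I
        a_T = F.softmax(self.fc_T(joint), dim=1)     # attention over text dims a^T
        out_I = f_I_hat * a_I[:, :, None, None]      # reweight each image channel
        out_T = f_T_hat * a_T
        return out_I, out_T
```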
V. IMPLEMENTATION DETAILS
The proposed modal-aware feature learning for multimodal hashing is shown in Figure 5. We apply modal-aware operations in the earlier layers. Note that the text modality has only two fully-connected layers; hence, the two modal-aware operations are applied after each fully-connected layer.
A. Network Architectures
For the image modality, ResNet-18 [34] is used as the basic architecture to learn powerful image features. ResNet is a residual learning framework that has shown great success in many machine learning tasks. In ResNet-18, the last global average pooling layer and the 1000-way fully-connected layer are removed. The feature maps in Conv4_2 and Conv5_2 are used as the image intermediate features f^I for the two modal-aware operations, respectively. For the text modality, the well-known bag-of-words (BoW) vectors are used as the inputs. The vectors then go through a feed-forward neural network (BoW → 8192 → 512) to learn the semantic text features f^T.
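A sketch of the two towers in PyTorch; the `pretrained` flag follows older torchvision versions (newer ones use a `weights` argument), and the ReLU activations in the text tower are our assumption, as the paper does not spell them out.

```python
import torch.nn as nn
from torchvision.models import resnet18

# image tower: ResNet-18 with the final global average pooling and
# 1000-way fully-connected layer removed, ending at the Conv5 feature maps
# (taps at Conv4_2/Conv5_2 would use forward hooks, omitted here)
backbone = resnet18(pretrained=True)
image_tower = nn.Sequential(*list(backbone.children())[:-2])

# text tower: BoW -> 8192 -> 512 feed-forward network
def make_text_tower(bow_dim):
    return nn.Sequential(
        nn.Linear(bow_dim, 8192), nn.ReLU(inplace=True),
        nn.Linear(8192, 512), nn.ReLU(inplace=True),
    )
```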
After the modal-aware operation, we have two features, f̃^I and f̃^T. Since f̃^I is a tensor, a global average pooling layer is used to map f̃^I into a vector F^I. Then, a simple approach that concatenates the two features is applied to obtain a joint representation. Let F = [F^I; f̃^T] denote the joint representation. The joint representation is forwarded to an l-way fully-connected layer to generate the l-bit binary codes H.
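A sketch of the fusion and hashing modules; the tanh relaxation used to approximate the binary codes is our assumption for differentiability, with the sign function applied at retrieval time.

```python
import torch
import torch.nn as nn

class FusionHashHead(nn.Module):
    """GAP on the image feature, concatenation, and an l-way hashing layer."""
    def __init__(self, C, K, l):
        super().__init__()
        self.hash_fc = nn.Linear(C + K, l)

    def forward(self, f_I, f_T):
        F_I = f_I.mean(dim=(2, 3))               # (N, C)
        joint = torch.cat([F_I, f_T], dim=1)     # joint representation F
        return torch.tanh(self.hash_fc(joint))   # relaxed l-bit codes in (-1, 1)

# at retrieval time: H_binary = torch.sign(H)
```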
B. Training Objective
We use the triplet ranking loss [35] to train the deep network. We note that other losses, e.g., the contrastive loss [36], can also be used in our framework; the loss function is not the focus of this paper. Specifically, given a triplet of instances (S_i, S_j, S_k), in which the instance S_i is more similar to S_j than to S_k, the three instances go through the deep multimodal network, and the outputs of the network are H_i, H_j and H_k, respectively. The similarity-preserving loss function is defined by

L(H_i, H_j, H_k) = max(0, ε + ||H_i − H_j||² − ||H_i − H_k||²),

where (i, j, k) indexes the triplet and ε is the margin.
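The loss above translates directly into a few lines of PyTorch; the function name is ours.

```python
import torch

def triplet_ranking_loss(H_i, H_j, H_k, margin):
    """max(0, margin + d(anchor, positive) - d(anchor, negative)), averaged."""
    d_pos = (H_i - H_j).pow(2).sum(dim=1)   # anchor-positive distance
    d_neg = (H_i - H_k).pow(2).sum(dim=1)   # anchor-negative distance
    return torch.clamp(margin + d_pos - d_neg, min=0).mean()
```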
VI. EXPERIMENTS
In this section, we conduct extensive evaluations of the proposed method and compare it with several state-of-the-art algorithms.
A. Datasets
• NUS-WIDE [37]: This dataset consists of 269,648 images and the associated tags from Flickr. Each image is annotated with labels from 81 ground-truth concepts.

For all of the experiments, we follow the experimental protocols of DCMH [12] to construct the query sets, retrieval databases and training sets. The NUS-WIDE dataset contains 81 ground-truth concepts. To prune the data without sufficient tag information, a subset of 195,834 image-text pairs that belong to the 21 most-frequent concepts is selected, as suggested by [12]. The randomly sampled 2,100 image-text pairs (100 pairs per concept) are used as the query set, and the rest of the image-text pairs constitute the retrieval database. From the retrieval database, 10,000 image-text pairs are randomly selected to train the hash functions. In the MIR-Flickr 25k and IAPR TC-12 databases, 2,000 randomly sampled image-text pairs are used as the query set. The rest of the pairs are used as the database for retrieval. We randomly select 10,000 pairs from the retrieval database to form the training set.
B. Experimental Settings
We implement our code based on the open-source deep learning platform PyTorch (https://pytorch.org/). For the image modality, ResNet-18 is adopted as the basic architecture. The weights of ResNet-18 are initialized with the pretrained model learned from the ImageNet dataset. For the text modality, the weights of all fully-connected layers are randomly initialized following a Gaussian distribution with a standard deviation of 0.01 and a mean of 0. We train the networks with a stochastic gradient solver, i.e., ADAM (weight decay = 0.00001). The batch size is 100, and the base learning rate is 0.0001, which is changed to one-tenth of the current value after every 20 epochs. For fair comparison, all deep learning methods are based on the same network architectures and the same experimental settings.

Evaluations: Following common practice, the mean average precision (MAP), precision-recall curves and precision w.r.t. different numbers of top returned samples are used as the evaluation metrics. MAP measures the accuracy of the whole set of binary codes based on the Hamming distances. The precision-recall curves measure the hash lookup protocol, and the precision considers only the top returned samples.
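A training-loop sketch matching these settings; `model`, `loader`, `compute_loss` and `num_epochs` are placeholder names for pieces described elsewhere in the paper.

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)
# decay the learning rate to one-tenth of its value every 20 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)

for epoch in range(num_epochs):
    for batch in loader:                   # batches of 100 image-text pairs
        optimizer.zero_grad()
        loss = compute_loss(model, batch)  # triplet ranking loss (Section V-B)
        loss.backward()
        optimizer.step()
    scheduler.step()
```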
C. Comparison with State-of-the-art Methods
In the first set of experiments, we compare the performance of the proposed method with state-of-the-art baselines. We evaluate two different approaches as the baselines.
The first set of baselines is the unimodal approaches. In this set of baselines, only one modality is used to train the hash functions. For the image modality, several state-of-the-art image hashing algorithms are selected: deep pairwise-supervised hashing (DPSH) [40], deep supervised hashing (DSH) [41], HashNet [42] and deep triplet hashing (DTH) [35]. DPSH and DSH belong to deep pair-wise approaches, and DTH is a triplet-based approach. HashNet aims to minimize the quantization errors of the hash codes. For fair comparison, the deep architectures for these four methods are all the same as ours. For the text modality, we use the same network for text data, which is referred to as TextHash. TextHash only uses the text representations to learn the binary codes.
The second set of baselines is different fusion strategies used to combine multiple modalities. We note that only the fusion module in Figure 2 uses different fusion strategies and the other modules are the same.
• Concat: We concatenate the intermediate features of the image and text modalities to train the hashing architecture.
• GMU: A gated multimodal unit (GMU) [14] is an internal unit in a neural network for data fusion. GMU uses multiplicative gates to determine how modalities influence the activation of the unit.
• MCB: Multimodal compact bilinear pooling (MCB) [15] uses bilinear pooling [43] to combine visual and text representations.

Table I shows the comparison of the obtained MAP values on the three multimodal datasets. Figure 6 and Figure 7 show the precision-recall and precision curves at 32 bits. Our proposed method yields the highest accuracy and beats all the baselines at most levels. Two observations can be made from the results. 1) Compared with the unimodal approaches, our method performs significantly better than all baselines. For instance, our method yields higher accuracy than TextHash, which only uses the text modality. For the image hashing methods, our method obtains a MAP of 0.7395 at 16 bits, compared with the value of 0.7115 of HashNet on NUS-WIDE. On MIR-Flickr 25k, the MAP of DTH is 0.8332, while that of the proposed method is 0.8658 at 32 bits. The proposed method shows a relative increase of 4.6%∼6.9% on IAPR TC-12 compared to the DTH algorithm. Note that DTH and our method use the same triplet ranking loss function, and DTH achieves excellent performance; even so, our method performs better than DTH. These results indicate that multimodal approaches can improve the performance.
2) Compared with other deep fusion strategies, our method also yields the best performance on all databases. First, compared to the Concat approach, the only difference is the use of the modal-aware operations, so this comparison shows whether the modal-aware features contribute to the accuracy. The results indicate that our modal-aware features achieve better performance. For example, the MAP of our proposed method is 0.7395 when the bit length is 16, compared to 0.7274 for Concat on NUS-WIDE. Thus, it is desirable to learn powerful features for multimodal retrieval. Compared to the GMU and MCB baselines, which achieve excellent performance, our proposed method also yields better results. The main reason is that our method can incorporate information from other modalities to learn the intermediate features, while the intermediate features of GMU and MCB are learned via individual neural layers.
D. Ablation Study
In the second set of experiments, an ablation study was performed to elucidate the impact of each part of our method on the final performance.
In the first baseline, we explore the effect of the kernel network. In this baseline, the attention network is kept and the kernel network is not used. That is, the features are directly forwarded to the attention network; the only difference is the use or lack of use of the kernel network. This baseline is referred to as w/o KN.
The second baseline explores the effect of the attention network. In this baseline, the kernel network is first applied to obtain the two intermediate features. Then, we concatenate the two features to obtain the joint representation. We note that the only difference between this baseline and our method is the use or lack of use of the attention network. We use w/o AN to denote the baseline that does not use the attention network.
The comparison results are shown in Table II and Figure 8. The results show that our proposed method achieves better performance than the two baselines. For instance, our method obtains a MAP of 0.7627 at 48 bits, compared to 0.7519 for w/o AN and 0.7467 for w/o KN. The results indicate that it is desirable to learn the intermediate features with both the kernel network and the attention network.
In this paper, the text is represented as a bag-of-words vector. Other text representations, e.g., Sent2Vec or BERT [44], can also be used in our framework. For example, in the IAPR TC-12 database, each image is associated with a text caption; thus Sent2Vec embeddings, computed via the pre-trained model 2 , can be used as the text representations. Table III shows the comparison results with respect to MAP.
VII. CONCLUSION
In this paper, we proposed a modal-aware operation for learning good feature representations. The key to success comes from designing a generic building block to capture the underlying correlation structures in heterogeneous multimodal data prior to multimodal fusion. First, we proposed a kernel network to learn the non-linear relationships. The kernel similarities between two modalities were learned to reweight the original features. Then, we proposed an attention network, which aims to select the informative parts of the intermediate features. The experiments were conducted on three benchmark datasets, and the results demonstrate the appealing performance of the proposed modal-aware operations.
|
2019-11-19T02:17:21.000Z
|
2019-11-19T00:00:00.000
|
{
"year": 2019,
"sha1": "9ddc9cd00bd80fc829a2c92c706b536219a2ab98",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "9ddc9cd00bd80fc829a2c92c706b536219a2ab98",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
236769435
|
pes2o/s2orc
|
v3-fos-license
|
On the structure tensor of $\mathfrak{sl}_n$
The structure tensor of $\mathfrak{sl}_n$, denoted $T_{\mathfrak{sl}_n}$, is the tensor arising from the Lie bracket bilinear operation on the set of traceless $n \times n$ matrices over $\mathbb{C}$. This tensor is intimately related to the well-studied matrix multiplication tensor. Studying the structure tensor of $\mathfrak{sl}_n$ may provide further insight into the complexity of matrix multiplication and the "hay in a haystack" problem of finding explicit sequences of tensors with high rank or border rank. We aim to find new bounds on the rank and border rank of this structure tensor in the cases of $\mathfrak{sl}_3$ and $\mathfrak{sl}_4$. We additionally provide bounds in the case of the Lie algebras $\mathfrak{so}_4$ and $\mathfrak{so}_5$. The lower bounds on the border ranks were obtained via various recent techniques, namely Koszul flattenings, border substitution, and border apolarity. Upper bounds on the rank of $T_{\mathfrak{sl}_3}$ are obtained via numerical methods that allowed us to find an explicit rank decomposition.
Introduction
In 1969, Strassen presented a novel algorithm for matrix multiplication of n × n matrices. Strassen's algorithm used fewer than the O(n³) arithmetic operations needed for the standard algorithm. This led to the question: what is the minimal number of arithmetic operations required to multiply n × n matrices, or in other words, what is the complexity of matrix multiplication [Str69] [Str83]. An asymptotic version of the problem is to determine the exponent of matrix multiplication, ω, which is the infimum of all values τ such that, for all ε > 0, multiplying n × n matrices can be performed in O(n^{τ+ε}) arithmetic operations. Any bilinear operation, including matrix multiplication, may be thought of as a tensor in the following way: Let A, B, and C denote vector spaces over C. Given a bilinear map A^* × B^* → C, the universal property of tensor products induces a linear map A^* ⊗ B^* → C. Since Hom_C(A^* ⊗ B^*, C) ≅ A ⊗ B ⊗ C, we can take our bilinear map to be a tensor in A ⊗ B ⊗ C. Let M_n denote the matrix multiplication tensor arising from the bilinear operation of multiplying n × n matrices.
An important invariant of a tensor is its rank. For a tensor T ∈ A ⊗ B ⊗ C, the rank, denoted R(T), is the minimal r such that T = Σ_{i=1}^r a_i ⊗ b_i ⊗ c_i with a_i ∈ A, b_i ∈ B, c_i ∈ C for 1 ≤ i ≤ r. Given such rank-one tensors T_i = a_i ⊗ b_i ⊗ c_i, we call T = Σ_{i=1}^r T_i a rank decomposition of T. Strassen also showed that the rank of the matrix multiplication tensor is a valid measure of its complexity; in particular, he proved ω = inf{τ ∈ R | R(M_n) = O(n^τ)} [Str69].
For a tensor T ∈ A ⊗ B ⊗ C, the border rank of T, denoted R(T), is another invariant of interest, defined to be the minimal r such that T = lim_{ε→0} T_ε where, for all ε > 0, T_ε has rank r. Given rank decompositions T_ε = Σ_{i=1}^r T_i(ε), we call lim_{ε→0} Σ_{i=1}^r T_i(ε) a border rank decomposition of T. Later, in 1980, Bini showed that the border rank of matrix multiplication is also a valid measure of its complexity by proving that ω = inf{τ ∈ R | R(M_n) = O(n^τ)} [Bin80].
Intimately related to the matrix multiplication tensor is the structure tensor of the Lie algebra sl_n, the set of traceless n × n matrices over C equipped with the Lie bracket [x, y] = xy − yx. The structure tensor of sl_n is defined as the tensor arising from the Lie bracket bilinear operation, and we denote it by T_{sl_n}. One example of how matrix multiplication is related to T_{sl_n} comes from a closer examination of a skew-symmetric version of the matrix multiplication tensor: consider the tensor arising from the Lie bracket bilinear operation on gl_n (which is just M_n, considered as a Lie algebra) [CHI+18]. Since gl_n = sl_n ⊕ z, where z denotes the scalar matrices, which are central in gl_n, T_{sl_n} determines the commutator action on all of gl_n. While the matrix multiplication tensor has been well studied [CHI+18] [CHL19], the structure tensor of sl_n has not been studied to the same extent. Currently, the only known non-trivial results are lower bounds on the rank of the structure tensor of sl_n: in [dGH86] it was shown that R(T_{sl_n}) ≥ 2n² − n − 1. Studying the structure tensor of sl_n may provide further insight into two central problems in complexity theory.
In complexity theory, it is of interest to find explicit objects that behave generically. This type of problem is known as a "hay in a haystack" problem. Algebraic geometry tells us that a "random" tensor T in C^m ⊗ C^m ⊗ C^m will have rank and border rank ⌈m³/(3m−2)⌉. By an explicit sequence of tensors, we will mean a collection of tensors T_m ∈ C^m ⊗ C^m ⊗ C^m such that the coefficients of T_m are computable in time polynomial in m. The "hay in a haystack" problem for tensors is to find an example of an explicit sequence of tensors of high rank or border rank, asymptotically in m. Currently, there exists an explicit sequence of tensors S_m over C such that R(S_m) ≥ 3m − o(log(m)) [AFT11], and a different explicit sequence of tensors T_m over C such that R(T_m) ≥ 2.02m − o(m) [LM19]. One should note that the sequence T_m of [LM19] has border rank equal to 2m when m = 13 and has been shown to exceed 2m for m > 364175. It would be of interest to find sequences of tensors for which the border rank exceeds 2m for smaller values of m.
The second problem is Strassen's problem of computing the complexity of matrix multiplication. The exponent of sl_n is defined as ω(sl_n) := lim inf_{n→∞} log_n(R(T_{sl_n})).
By Theorem 4.1 from [LY20], the exponent of matrix multiplication is equal to the exponent of sl n . Consequently, upper bounds on the rank and even the border rank of T sln provide upper bounds on ω.
These two problems motivate our study of the border rank of T sln . We prove new bounds in the case of sl 3 and sl 4 .
Additionally, we obtain lower bounds on the border rank of the structure tensors of so_4 and so_5. We note that so_4 ≅ sl_2 × sl_2 and so_5 ≅ sp_4.
Preliminaries
The above definition of T_{sl_n} is independent of the choice of basis, but we may also write the tensor in terms of bases. Let {a_i}_{i=1}^{n²−1} be a basis of sl_n and {α_i}_{i=1}^{n²−1} a dual basis. Recall that sl_n has a bilinear operation called the Lie bracket, given by [x, y] = xy − yx. The structure tensor of sl_n in this basis is

T_{sl_n} = Σ_{i,j=1}^{n²−1} α_i ⊗ α_j ⊗ [a_i, a_j].

Let e_i^j denote the n × n matrix with a 1 in the (i, j)-th entry. If we take the standard weight basis of sl_n, namely the matrices e_i^j for 1 ≤ i ≠ j ≤ n and e_i^i − e_{i+1}^{i+1} for 1 ≤ i ≤ n − 1, then the tensor T_{sl_3} can be written out explicitly in these coordinates.
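Since the explicit coordinate expression is lengthy, a minimal NumPy sketch of how these structure constants can be generated mechanically from the weight basis may be useful; the function names are ours, and the coefficients of each bracket are recovered by solving a linear system rather than by hand.

```python
import numpy as np
from itertools import product

def sl_basis(n):
    """Standard weight basis of sl_n: e_i^j for i != j, then e_i^i - e_{i+1}^{i+1}."""
    basis = []
    for i in range(n):
        for j in range(n):
            if i != j:
                E = np.zeros((n, n)); E[i, j] = 1.0
                basis.append(E)
    for i in range(n - 1):
        D = np.zeros((n, n)); D[i, i] = 1.0; D[i + 1, i + 1] = -1.0
        basis.append(D)
    return basis

def structure_tensor(n):
    basis = sl_basis(n)
    d = len(basis)                                       # d = n^2 - 1
    B = np.stack([b.flatten() for b in basis], axis=1)   # n^2 x d, full column rank
    T = np.zeros((d, d, d))
    for i, j in product(range(d), repeat=2):
        bracket = basis[i] @ basis[j] - basis[j] @ basis[i]
        # coordinates of [a_i, a_j] in the chosen basis
        T[i, j, :] = np.linalg.lstsq(B, bracket.flatten(), rcond=None)[0]
    return T

T = structure_tensor(3)
# T_{sl_3} is concise: each coordinate flattening has full rank 8
print(np.linalg.matrix_rank(T.reshape(8, 64)))
```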
We establish some basic definitions of algebraic geometry and representation theory that will be used in our study of the structure tensor of sl n .
2.1. Algebraic Geometry. The language of algebraic geometry will prove useful in studying our problems. Throughout, let A, B, C, and V be complex vector spaces. Let π : V \ {0} → PV be the projection map from V to its projectivization, and denote π(v) by [v]. For v_1, ..., v_k ∈ V, let ⟨v_1, ..., v_k⟩ denote the projectivization of the linear span of v_1, ..., v_k.
See [Sha13] for the definitions of the Zariski topology, projective variety, and the dimension of a projective variety. For a projective variety X ⊂ PV, let S[X] denote its homogeneous coordinate ring and let I(X) ⊂ Sym(V^*) denote the ideal of X. Given a nondegenerate projective variety Y ⊂ PV, we define the r-th secant variety of Y, denoted σ_r(Y) ⊂ PV, to be the closure of the union of the spans ⟨y_1, ..., y_r⟩ over all r-tuples of points y_1, ..., y_r ∈ Y (Definition 1).
The closure operation indicated in Definition 1 is the Zariski closure. As polynomials are continuous, this closure also contains the Euclidean closure of the set. In this case, we have that the Euclidean closure is in fact equal to the Zariski closure (see Theorem 3.1.6.1 in [Lan17] or Theorem 2.3.3 in [Mum95]).
Let A, B, C be vector spaces over C. The Segre embedding is defined as the map

Seg : PA × PB × PC → P(A ⊗ B ⊗ C),   ([a], [b], [c]) ↦ [a ⊗ b ⊗ c].

Let X = Seg(PA × PB × PC) denote the image of the Segre embedding, which is a projective variety of dimension dim A + dim B + dim C − 3. Note that X is the space of rank one tensors in P(A ⊗ B ⊗ C). Consequently, one can redefine the border rank R(T), for T ∈ A ⊗ B ⊗ C, as the r such that T ∈ σ_r(X) and T ∉ σ_{r−1}(X). We remark here that the rank is always at least the border rank, trivially, as a tensor rank decomposition can be considered as a border rank decomposition by taking the tensor rank decomposition as a constant sequence.
For a secant variety σ r (Y ) ⊂ PV in general, the expected dimension will be min{r dim Y + (r − 1), dim PV }, since we expect to choose r points from Y and have an additional r − 1 parameters to span the r-plane generated by those points. The dimension of σ r (Seg(PV × PV × PV )) is the expected dimension, except when r = 4 and dim V = 3 [Lic85].
Another variety we will make use of is the Grassmannian, denoted G(k, V). Let Λ^k V denote the k-th exterior power of the vector space V. The Grassmannian variety G(k, V) ⊂ PΛ^k V is the set of points of the form [v_1 ∧ ⋯ ∧ v_k] for linearly independent v_1, ..., v_k ∈ V; it parametrizes the k-dimensional linear subspaces of V.

2.2. Representation Theory. Recall that A, B, C, and V are complex vector spaces. A guiding principle in geometric complexity theory is to use symmetry to reduce the problem of testing a space of tensors to testing particular representatives of families of tensors. To describe the symmetry of our tensor, we use the language of the representation theory of linear algebraic groups and Lie algebras. See [FH91] for the definitions of linear algebraic groups, semisimple Lie algebras, representations, orbits of group actions, G-modules, irreducibility of a representation/module, Borel subgroups, a maximal torus, and the correspondence between Lie groups and Lie algebras.
The group GL(A) × GL(B) × GL(C) acts on A ⊗ B ⊗ C by the product of the natural actions of GL(A) on A, etc. Identify (C^*)^{×2} with the subgroup {(λ Id_A, μ Id_B, (λμ)^{−1} Id_C)}, which acts trivially on A ⊗ B ⊗ C.

Definition 7. For a tensor T ∈ A ⊗ B ⊗ C, define the symmetry group of T to be the group

G_T := {g ∈ (GL(A) × GL(B) × GL(C)) / (C^*)^{×2} : g · T = T}.

In the case of T_{sl_n}, our symmetry group G_{T_{sl_n}} is in fact isomorphic to SL_n. For any element g ∈ SL_n, the element g^* ⊗ g^* ⊗ g acts on sl_n^* ⊗ sl_n^* ⊗ sl_n and leaves T_{sl_n} invariant. It is always the case that any automorphism of sl_n induces an automorphism of sl_n^* ⊗ sl_n^* ⊗ sl_n. See [Mir85] for a proof that these are all the elements of the symmetry group of T_{sl_n}.
Let B T ⊂ G T denote a Borel subgroup. In the case of T sln , where our symmetry group is isomorphic to SL n , we take B T to be the Borel subgroup of upper triangular matrices of determinant 1. We note that Borel subgroups are not unique, but are all conjugate. For this Borel subgroup, let N ⊂ B T denote the group of upper triangular matrices with diagonal entries equal to 1, called the maximal unipotent group, and let T denote the subgroup of diagonal matrices, also called the maximal torus.
In our case, where G_T ≅ SL_n, every irreducible G_T-module will have a highest weight line and additionally will be uniquely determined by this highest weight line.
Given a symmetry group, G T , we also have a symmetry Lie algebra, denoted g T , which will be more convenient to work with. Let b T denote the Borel subalgebra and h denote the Cartan subalgebra, which will be the Lie algebras of the Borel subgroup, B T , and maximal torus, T, respectively. In the case of our symmetry Lie algebra, sl n , we take h to be the subalgebra of traceless diagonal matrices. Additionally, for N ⊂ B T , we have the corresponding Lie subalgebra n ⊂ b T , which will consist of the strictly upper triangular elements of sl n . We will refer to elements of n as raising operators.
For a Lie algebra representation, a nonzero vector v is a weight vector if h preserves the line [v]. One may regard the weight of a vector v as a linear functional λ ∈ h^*, such that for all h ∈ h, h · v = λ(h) v. Certain distinguished weights ω_1, ..., ω_{n−1} ∈ h^* are called the fundamental weights of sl_n. It is well known that the highest weight λ of an irreducible representation can be represented as an integral linear combination of fundamental weights. For notational convenience, we denote the irreducible representation with highest weight λ = a_1 ω_1 + ⋯ + a_{n−1} ω_{n−1} by [a_1 ⋯ a_{n−1}] and denote its highest weight vector by v_{[a_1 ⋯ a_{n−1}]}. We note that this notation differs from the widely used diagram notation of Weyl [FH91]. The purpose of introducing the language of homogeneous varieties is to state the following Normal Form Lemma. Recall that V is a complex vector space.
Lemma 10 (Normal Form Lemma). Let X = G/P ⊂ PV be a homogeneous variety and let v ∈ V. Suppose that the stabilizer G_v = {g ∈ G : g · v = v} has a single closed orbit O_min in X. Then any border rank r decomposition of v may be modified using G_v to a border rank r decomposition lim_{ε→0} x_1(ε) + ⋯ + x_r(ε) such that there is a stationary point x_1(ε) ≡ x_1 ∈ O_min. If, moreover, every orbit of G_v ∩ G_{x_1} contains x_1 in its closure, we may further assume that for all j ≠ 1, lim_{ε→0} x_j(ε) = x_1.
See Lemma 3.1 in [LM17] for the proof. This lemma allows one to describe the interactions between the different G-orbits. It can be thought of as a consequence of Lie's theorem, which states that for a solvable group H, an H-module W, and a point [w] ∈ PW, the closure of the orbit H · [w] contains an H-fixed point [FH91].
Methodology
The most fruitful current techniques for finding lower bounds on the border rank of a tensor are Koszul flattenings, the border substitution method, and border apolarity. We review each of these techniques.
3.1. Koszul flattenings. For T ∈ A ⊗ B ⊗ C, we may consider it as a linear map T_B : B^* → A ⊗ C. We have the analogous maps T_A and T_C; together these are called the coordinate flattenings of T. Consider the linear map obtained by composing

Id_{Λ^p A} ⊗ T_B : Λ^p A ⊗ B^* → Λ^p A ⊗ A ⊗ C   with   π ⊗ Id_C : Λ^p A ⊗ A ⊗ C → Λ^{p+1} A ⊗ C.

Note that π ⊗ Id_C is the tensor product of the exterior multiplication map with the identity on C. Denote this composition by T_A^{∧p}. Let rank denote the rank of a linear map.
One then obtains the bound R(T) ≥ rank(T_A^{∧p}) / ((dim A − 1) choose p); see [LO15] for the proof. One should note that we achieve the best bounds when dim A = 2p + 1. Thus, if dim A > 2p + 1, we may restrict T to subspaces A′ ⊂ A of dimension 2p + 1, since border rank is upper semi-continuous with respect to restriction, i.e., for a restriction T′ of T, R(T′) ≤ R(T). Koszul flattenings alone are insufficient to prove R(T_{sl_n}) ≥ 2(n² − 1), as the limit of the method for T ∈ C^m ⊗ C^m ⊗ C^m is below 2m − 3 (m even) and 2m − 5 (m odd). See [LO15] for more on this method.
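In small cases the Koszul flattening and the resulting bound can be computed directly from the definition; the following NumPy sketch (names ours) is written for clarity rather than efficiency.

```python
import numpy as np
from itertools import combinations
from math import comb, ceil

def koszul_flattening_rank(T, p):
    """Rank of T_A^{wedge p}: Lambda^p A (x) B^* -> Lambda^{p+1} A (x) C."""
    a, b, c = T.shape
    rows = {S: idx for idx, S in enumerate(combinations(range(a), p + 1))}
    cols = {S: idx for idx, S in enumerate(combinations(range(a), p))}
    M = np.zeros((len(rows) * c, len(cols) * b))
    for S, sidx in cols.items():           # basis vector e_S of Lambda^p A
        for j in range(b):                 # basis vector beta_j of B^*
            col = sidx * b + j
            for i in range(a):
                if i in S:
                    continue               # e_i wedge e_S = 0
                R = tuple(sorted(S + (i,)))
                sign = (-1) ** R.index(i)  # sign of sorting e_i into e_S
                for k in range(c):
                    M[rows[R] * c + k, col] += sign * T[i, j, k]
    return np.linalg.matrix_rank(M)

def koszul_border_rank_bound(T, p):
    a = T.shape[0]
    return ceil(koszul_flattening_rank(T, p) / comb(a - 1, p))
```

For dim A > 2p + 1 one would first restrict T to a generic (2p + 1)-dimensional subspace of A, as discussed above.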
3.2. Border Substitution. The only known technique for computing lower bounds on the rank of a tensor is the substitution method. A tensor T is A-concise if the coordinate flattening map T_A is injective, and B-concise and C-concise are defined similarly. If a tensor is A-concise, B-concise, and C-concise, then we simply call it concise. We remark that T_{sl_n} is in fact a concise tensor, since sl_n is a simple Lie algebra and the coordinate flattening maps do not send everything to 0.
The substitution method is a standard technique for obtaining lower bounds on the rank of a tensor [AFT11]. This technique can be extended to border rank.
Proposition (Border substitution). Let T ∈ A ⊗ B ⊗ C with dim A = m, and let 1 ≤ k < m. Then R(T) ≥ k + min_{A′ ∈ G(m−k, A^*)} R(T|_{A′ ⊗ B^* ⊗ C^*}); see [LM17] for the proof. Note that the notation T|_{A′ ⊗ B^* ⊗ C^*} denotes a restriction of T when considering T as a trilinear form T : A^* ⊗ B^* ⊗ C^* → C. If we let à = A/(A′)^⊥, then our restricted tensor will be an element of à ⊗ B ⊗ C.
Also note that in the border substitution proposition, we are minimizing over all elements of the Grassmannian. In practice, border substitution is applied to tensors with large symmetry groups G_T. The utility is that one may restrict to looking at representatives of closed G_T-orbits in the Grassmannian, rather than examining all elements of the Grassmannian. One often achieves the best results on the border rank of a tensor by using border substitution in conjunction with Koszul flattenings. Naively, the largest lower bound obtainable by the method, i.e., the limit of the method, is at most dim A + dim B + dim C − 3; however, the limit is in fact slightly less. For tensors T ∈ C^m ⊗ C^m ⊗ C^m, the limit of the method is 3m − 3√(3m + 9/4) + 9/2. See [LM17] for a proof of this and more on this method. The best lower bound on border rank mentioned in the introduction is achieved using this method [LM19].
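A sketch of the restriction step, under the assumption that A carries an inner product so that the annihilator can be realized as an orthogonal complement; names are ours. The border substitution bound is then k plus the Koszul bound of the restricted tensor (here k = 1).

```python
import numpy as np

def restrict_first_factor(T, v, seed=0):
    # restrict the A factor to the functionals annihilating v, i.e. pass to
    # the quotient A / span(v), realized here as the complement of v
    a = T.shape[0]
    rng = np.random.default_rng(seed)
    M = np.column_stack([v.astype(float), rng.standard_normal((a, a - 1))])
    Q, _ = np.linalg.qr(M)          # Q[:, 0] is parallel to v
    P = Q[:, 1:]                    # orthonormal basis of v-perp, shape (a, a-1)
    return np.einsum('ia,ijk->ajk', P, T)
```

Combined with `koszul_border_rank_bound` above, a bound of 27 on the restricted tensor yields a bound of 28 on the original tensor, as in Section 4.2 below.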
3.3. Border Apolarity. In order to establish larger lower bounds on R(T sl 3 ) than can be achieved by Koszul flattenings and border substitution for T sl 3 , we will use border apolarity, as developed in [BB20] and [CHL19].
Suppose T has a border rank r decomposition, T = lim_{ε→0} T_ε, where T_ε = Σ_{i=1}^r a_i(ε) ⊗ b_i(ε) ⊗ c_i(ε), and suppose that for each ε the r points ([a_i(ε)], [b_i(ε)], [c_i(ε)]) ∈ PA × PB × PC are in general position. Let I(ε) denote the multihomogeneous ideal of these r points, and define I_{ijk} := lim_{ε→0} I(ε)_{ijk}, the limit taken in the appropriate Grassmannian. The limits I_{ijk} will exist, since the Grassmannian is compact; however, the resulting ideal I may not be saturated. See [BB20] for further discussion of this.
Recall that a tensor T is concise if all the coordinate flattening maps are injective. In a nutshell, border apolarity gives us necessary conditions on the possible limiting ideals I that can arise from a border rank decomposition.
Theorem 13 (Weak Border Apolarity). Let X = PA × PB × PC and let S[X] be its coordinate ring. Suppose a tensor T has R(T) ≤ r. Then there exists a (multi)homogeneous ideal I ⊂ S[X] such that I ⊆ Ann(T) and, for every multidegree (i, j, k), codim I_{ijk} ≤ r. In addition, if G_T is a group acting on X and preserving T, then there exists an I as above which is additionally invariant under a Borel subgroup of G_T.
In [BB20], see Theorem 3.15 (Border Apolarity) for a proof of the first part and Theorem 4.3 (Fixed Ideal Theorem) for a proof of the second part. Theorems 3.15 and 4.3 of [BB20] are not stated here, as they are stated in greater generality than we require, using the language of schemes. We remark that the Weak Border Apolarity Theorem guarantees that if a border rank r decomposition exists, then there will exist another border rank r decomposition satisfying the given conditions. We note that the second condition says that we may in fact take the r points of the border rank decomposition to be in general position, and so our initial supposition that the r points are in general position is justified. Lie's Theorem and the Normal Form Lemma allow us to take I_{111} to be B_T-fixed. The Fixed Ideal Theorem of [BB20] uses the same reasoning to generalize this to prove B_T-invariance for all multigraded components I_{ijk}, not just a finite number of them.
In [CHL19], using the Weak Border Apolarity Theorem, it is asserted that for T a concise tensor with a border rank r decomposition, there will exist an ideal I satisfying the following:

(1) I_{ijk} is B_T-stable (I_{ijk} is a Borel-fixed weight space).
(2) I ⊂ Ann(T), i.e., I_{110} ⊂ T(C^*)^⊥ ⊂ A^* ⊗ B^*, etc., and I_{111} ⊂ T^⊥ ⊂ A^* ⊗ B^* ⊗ C^*.
(3) For all i, j, k such that i + j + k > 1, codim I_{ijk} = r, i.e., the condition that we may take the r points of the border rank decomposition to be in general position.
(4) Since I is an ideal, the image of the multiplication map I_{110} ⊗ C^* ⊕ I_{101} ⊗ B^* ⊕ I_{011} ⊗ A^* → A^* ⊗ B^* ⊗ C^* is contained in I_{111}, and similarly in higher multidegrees.

We note that the last condition is simply the condition that I is an ideal whose multiplication respects the grading. The border apolarity algorithm presented in [CHL19] makes use of these conditions and attempts to iteratively construct all possible ideals I in each multidegree. In particular, if at any multidegree (i, j, k) there does not exist an I_{ijk} satisfying the above, then we may conclude that R(T) > r. We remark that the candidate ideals I do not necessarily correspond to an actual border rank decomposition of the tensor. We describe the algorithm of [CHL19] below:

(1) Compute all B_T-fixed candidate spaces F_{110} ⊂ T(C^*)^⊥ of the required codimension that pass the multiplication test in multidegree (2, 1, 0) (the (210)-test), and analogously for F_{101} and F_{011}.
(2) For each resulting triple, compute the image of the multiplication map F_{110} ⊗ C^* ⊕ F_{101} ⊗ B^* ⊕ F_{011} ⊗ A^* → A^* ⊗ B^* ⊗ C^*. If the codimension of the image is at least r, then we have a candidate triple.
(3) F_{111} is a candidate for I_{111} if it has codimension r, is contained in T^⊥, and contains the image of the above map.
(4) Perform the analogous higher-degree tests.
(5) If at any point there are no such candidates, then R(T) > r; otherwise, stabilization of the candidate ideals will occur at worst by multidegree (r, r, r).

The condition that I_{ijk} is B_T-fixed allows us to greatly reduce the search for possible candidate ideals: the B_T-fixed spaces are easier to list than all possible I_{ijk}. This condition makes the algorithm of [CHL19] feasible for tensors with large symmetry groups. Then, using the assumption that our points are in general position, one has rank conditions on the multiplication maps, as the images must have codimension at least r.
3.3.1. Implementation of Border Apolarity for T sl 3 . We show how to implement the algorithm of [CHL19] for T sln by describing how to compute all possible B T -fixed I 110 . Additionally, we can leverage the skew-symmetry of T sln to reduce the amount of computation involved for determining potential I 110 , I 101 , I 011 .
Let T = T_{sl_3} ∈ sl_3^* ⊗ sl_3^* ⊗ sl_3 = A ⊗ B ⊗ C. The first and third conditions from border apolarity tell us to compute all B_T-fixed weight subspaces F_{110} ⊂ A^* ⊗ B^* of codimension r; however, since T is concise with dim T(C^*) = dim sl_3 = 8, and F_{110} ⊂ T(C^*)^⊥ by the second condition of border apolarity, we compute all B_T-fixed weight spaces F_{110} ⊂ T(C^*)^⊥ of codimension r − 8.
Standard computational methods, see [FH91], yield the irreducible decomposition of A^* ⊗ B^* as an sl_n-module.
In particular, for sl_3, we have the following decomposition into sl_3-modules:

A^* ⊗ B^* ≅ [2 2] ⊕ [3 0] ⊕ [0 3] ⊕ [1 1] ⊕ [1 1] ⊕ [0 0].

The utility of this decomposition is that we can generate all B_T-fixed weight subspaces F_{110} by taking a collection of weight vectors v_i from the poset such that v_1 ∧ v_2 ∧ ⋯ ∧ v_{56−(r−8)} is a highest weight vector in G(56 − (r − 8), T(C^*)^⊥) and is consequently closed under the raising operators. One should note that if v_i comes from a weight space of dimension greater than 1, then one needs to include linear combinations of basis vectors of that weight space. For r = 63, F_{110} will be of the form ⟨v_1⟩. Since it must be closed under raising operators, v_1 will necessarily be a highest weight vector; therefore our choices will be v_1 = v_λ, where v_λ is a highest weight vector of weight λ = [2 2], [3 0], [0 3], or [1 1]. For r = 62, F_{110} will be of the form ⟨v_1, v_2⟩. Necessarily, v_1 must be a highest weight vector, and the second vector v_2 may either be another highest weight vector or a weight vector whose images under the raising operators lie in ⟨v_1⟩. For smaller values of r, the number of possible Borel-fixed spaces is much larger and more difficult to list by hand without the aid of a computer. The computationally difficult step in this algorithm lies in computing the ranks of multiplication maps such as F_{110} ⊗ A^* → S²A^* ⊗ B^*. In some cases there are many parameters, which arise from choosing weight vectors from high-dimensional weight spaces, such as the [0 0] weight space in Figure 1. Recall that a linear map has rank at most k if and only if all of its (k + 1) × (k + 1) minors vanish. In order to determine whether the multiplication map has image of codimension r, we look at the appropriate minors of this linear map. When there are no parameters involved, this is a simple linear algebra calculation. However, in some cases the entries of the multiplication map are linear polynomials in the parameters coming from choosing a linear combination of weight vectors. In order to determine whether the multiplication map has image of codimension r, one needs to look at the ideal of these minors, together with some polynomial equations in the parameters that are needed for the space to be Borel-fixed. One must do a Gröbner basis computation on this ideal to determine whether all the minors vanish or not. This can become an infeasible computation if there are too many parameters and/or equations.
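Verifying that a candidate span is closed under the raising operators is a rank computation; a small NumPy sketch of this test, with our naming, is given below. Parameters from higher-dimensional weight spaces would enter as symbolic entries, which this numerical sketch does not handle.

```python
import numpy as np

def is_raising_closed(F, raising_ops):
    """F: d x m matrix whose columns span a candidate subspace of a d-dimensional
    module; raising_ops: matrices for the action of the raising operators."""
    r = np.linalg.matrix_rank(F)
    for E in raising_ops:
        # closure fails if E.F leaves the column span of F
        if np.linalg.matrix_rank(np.hstack([F, E @ F])) > r:
            return False
    return True
```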
New Bounds
It is known that R(T sl 2 ) = 5 [dGH86], so we aim to find bounds on R(T sln ) for n = 3 and 4 using the above techniques. Additionally, we provide new lower bounds on R(T son ) for n = 4 and 5.
4.1. Koszul flattenings. In the case of T_{sl_3}, we achieve the best results when p = 3 and we restrict to a generic 7-dimensional subspace of sl_3, since dim sl_3 = 8. The best bound achieved is R(T_{sl_3}) ≥ 14 (see Table 2). In the case of T_{sl_4}, we achieve the best lower bound of 27 when p = 4 or 5 while restricting to a subspace (see Table 4). As stated above, Koszul flattenings alone are insufficient to obtain border rank lower bounds exceeding 2m, i.e., Koszul flattenings will not prove R(T_{sl_3}) ≥ 16 or R(T_{sl_4}) ≥ 30.
In addition to lower bounds on these tensors, we computed lower bounds for T_{so_4} and T_{so_5}. In [Mir85] it is shown that for the Lie algebras g_n = sl_2^{×n}, R(T_{g_n}) = 5n, so R(T_{so_4}) = 10. Therefore, the border rank of T_{so_4} is either 9 or 10.

4.2. Border Substitution. For T_{sl_n} ∈ sl_n^* ⊗ sl_n^* ⊗ sl_n, we may identify the space sl_n^* with sl_n (by sending an element to its negative transpose). Therefore, we may identify T_{sl_n} with an element of sl_n ⊗ sl_n ⊗ sl_n. As a first step in applying border substitution, we restrict T_{sl_n} ∈ A ⊗ B ⊗ C in the A tensor factor. Since we may restrict to looking at representatives of closed G_{T_{sl_n}}-orbits, the only planes we need to check are the highest weight planes in G(k, sl_n). In order to compute the border rank of the restricted tensor, we use Koszul flattenings on the restricted tensor. Once again, let v_λ denote the unique weight vector in the weight space λ.
For T sl 3 , border substitution did not generate a better lower bound than the Koszul flattenings. However, we were able to obtain a better lower bound for T sl 4 .
Let A′, as in Proposition 15, be Ã^⊥, where we take à to be a k-dimensional subspace of A, so that A′ has dimension m − k. If we restrict our tensor by a one-dimensional subspace, then the only choice for à will be the space spanned by v_{[1 0 1]}, the highest weight vector of sl_n. The best bound we obtain is R(T_{sl_4}|_{(v_{[1 0 1]})^⊥ ⊗ sl_n ⊗ sl_n}) ≥ 27 (see Table 9). By Proposition 15, this proves Theorem 3, R(T_{sl_4}) ≥ 28. After restricting in the A tensor factor, we cannot also restrict by the highest weight vector of sl_n in the B or C factor, as the restricted tensor has a different symmetry group.
Additionally, border substitution gives a better lower bound on R(T_{so_5}). Restricting by a two-dimensional subspace (see Table 14) and using Proposition 15 proves Theorem 5.

4.3. Border Apolarity. We use border apolarity to disprove that T_{sl_3} has border rank r = 15. We first computed the candidate F_{110} spaces which passed the (210)-test. There were a total of 5 candidate F_{110} subspaces out of more than 1245 possible F_{110} spaces, and the candidate F_{110} spaces came in three types of weight space decompositions. The computation to produce these 5 candidate F_{110} planes took extensive time in some cases, due to the parameters creating a difficult Gröbner basis computation when determining whether an F_{110} plane passes the (210)-test. The large number of candidates was less of a computational issue, as all the (210)-tests can be parallelized. Some of these computations were done on Texas A&M's High Performance Research Cluster as well as the Texas A&M Math Department Cluster.
Using the skew-symmetry of T_{sl_n}, we are able to produce candidate F_{011} and F_{101} weight spaces from the candidate F_{110} spaces. A computer calculation verified that for each candidate triple F_{110}, F_{011}, F_{101}, the rank condition is not met in the (111)-test, and consequently there are no candidate F_{111} spaces. Therefore, the border rank of T_{sl_3} is greater than 15, and this proves Theorem 1, R(T_{sl_3}) ≥ 16. This result is significant as it is the first example of an explicit tensor whose border rank is at least 2m with m < 13.

4.4. Upper Bounds. A numerical computer search has produced a rank 20 decomposition of T_{sl_3}. The technique used was a combination of Newton's method and the Lenstra-Lenstra-Lovász algorithm to find rational approximations [CLGV20]. This technique formulated the problem as a nonlinear optimization problem that was solved to machine precision; the numerical solution was then modified using the Lenstra-Lenstra-Lovász algorithm to produce a precise solution with algebraic numbers. As T_{sl_3} ∈ C^8 ⊗ C^8 ⊗ C^8, a rank 20 decomposition consists of finding a_i, b_i, c_i ∈ C^8 such that T = Σ_{i=1}^{20} a_i ⊗ b_i ⊗ c_i. We take each vector a_i, b_i, c_i to be a vector of 8 variables, and using properties of elements of tensor products, we can multiply out the right-hand side to obtain a system of equations for each entry of the tensor. This amounts to solving a system of 512 polynomial equations of degree 3 in 480 variables. We then use Newton's method to find roots of this system of equations. If it appears to converge to a solution, we compute it to machine precision and use Lenstra-Lenstra-Lovász to find an algebraic solution that satisfies the initial polynomial conditions. This decomposition proves Theorem 2, R(T_{sl_3}) ≤ 20.
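As an illustration of the numerical search, the following sketch substitutes SciPy's Levenberg-Marquardt least-squares solver for the authors' Newton + LLL pipeline, works over the reals (complex decompositions such as the one below would require splitting real and imaginary parts), and uses names of our choosing.

```python
import numpy as np
from scipy.optimize import least_squares

def search_rank_decomposition(T, r, tries=50, seed=0):
    """Attempt to solve T = sum_{s=1}^{r} a_s (x) b_s (x) c_s numerically."""
    a, b, c = T.shape
    rng = np.random.default_rng(seed)

    def residual(x):
        A = x[: r * a].reshape(r, a)
        B = x[r * a : r * (a + b)].reshape(r, b)
        C = x[r * (a + b) :].reshape(r, c)
        return (np.einsum('si,sj,sk->ijk', A, B, C) - T).ravel()

    for _ in range(tries):
        x0 = rng.standard_normal(r * (a + b + c))
        sol = least_squares(residual, x0, method='lm')  # requires 512 >= 480 here
        if sol.cost < 1e-24:        # machine-precision fit
            return sol.x
    return None                     # no decomposition found from these starts
```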
Let ζ_6 denote a primitive 6th root of unity. The following is the rank 20 decomposition of T_{sl_3}. One may verify that this is in fact a rank decomposition by showing that it satisfies the polynomial equations described above. We note that the decomposition below has some local symmetry. For example, the first three terms share the matrix (0 ζ_6 0; 0 0 0; 0 1 0), written row by row, as a common factor with the standard cyclic Z_3-symmetry, i.e., this element occurs once in each tensor factor in the first three terms. We have grouped the terms in the presentation of the decomposition below by this local symmetry. We were unable to determine any global symmetry in this presentation of the decomposition. In an attempt to find a smaller rank decomposition, we found numerical evidence suggesting that the border rank satisfies R(T_{sl_3}) ≤ 18. The above method was unable to determine exact algebraic numbers for it to be an honest border rank decomposition. We include the approximate border rank decomposition, obtained as a numerical solution to machine precision using Newton's method, in Appendix A. This decomposition satisfies the equation T_{sl_3} = Σ_{k=1}^{18} a_k(t) ⊗ b_k(t) ⊗ c_k(t) + O(t) with a maximum error in each entry of 3.88578058618805 × 10^{−16} (the ℓ_∞ error) and a sum-of-squares error of 1.85900227125328 × 10^{−15} (the ℓ_2 error), which is the square root of the sum of the squares of all entrywise errors.
|
2021-05-19T01:15:54.331Z
|
2021-05-17T00:00:00.000
|
{
"year": 2021,
"sha1": "99322e51196b3f9354eeb46a2af7727784ad16c7",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "99322e51196b3f9354eeb46a2af7727784ad16c7",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
225076442
|
pes2o/s2orc
|
v3-fos-license
|
Transesophageal Echocardiographic Evaluation of Novel Extracellular Matrix Valve for Tricuspid Valve Endocarditis
Graphical abstract
INTRODUCTION
Right-sided infective endocarditis (IE), specifically tricuspid valve (TV) endocarditis, is a rare occurrence, accounting for <10% of IE cases. The vast majority of cases of TV endocarditis result from intravenous drug use. Risk factors for the development of endocarditis include a history of IE, prosthetic valves, intracardiac devices, valvular or congenital heart disease, chronic intravenous catheters, immunosuppression, and recent dental procedure. 1 Although medical management with antibiotics is generally successful, a subset of patients will require surgical intervention. Furthermore, there is evidence suggesting that early surgical intervention can reduce embolic events as well as mortality in this patient population. 2 Echocardiography remains the primary imaging modality for detecting vegetations and abscesses, 3 with transesophageal echocardiographic (TEE) imaging having a reported sensitivity of >90%.
Because of the unique anatomic and functional features of the TV, there are no ideal prosthetic valves designed for the TV. 4 Mechanical or bioprosthetic replacement may be complicated by thrombosis, degenerative calcification, heart block, and residual or recurrent infection. 5 However, new technology has made it possible to undertake TV replacement with unstented valve material constructed from extracellular matrix (ECM). The CorMatrix ECM TV (CorMatrix Cardiovascular, Roswell, GA) is designed to function as a competent heart valve immediately after implantation. In addition, it serves as a biological scaffold that may enable native cells to migrate and grow, 6 resulting in a remodeled TV with the patient's own tissue. This case exemplifies a comprehensive echocardiographic evaluation of this new valve and highlights its unique appearance and function.
CASE PRESENTATION
A 43-year-old man with a history of polysubstance drug abuse presented with productive cough, nocturnal diaphoresis, weight loss, chills, dyspnea, pleuritic chest pain, and nausea. He was diagnosed with bibasilar pneumonia and multiple pulmonary septic infarcts, as well as methicillin-sensitive Staphylococcus aureus/Enterococcus faecalis bacteremia and TV endocarditis. He later presented to the operating room for TV replacement.
Preoperative TEE imaging revealed severe tricuspid regurgitation and a large burden of mobile vegetations on all three TV leaflets, which made TV repair not feasible ( Figure 1, Video 1).
The tricuspid annular diameter was measured intraoperatively at 33 mm. The decision was made to remove the native TV leaflet tissue and replace the valve using ECM with the CorMatrix ECM valve. A piece of CorMatrix ECM was cut to 5 cm and wrapped around a 33-mm sizer. The sheet was cut to fit and sutured to itself, creating a 33 × 50 mm cylinder. Three fixation points for neochords were selected within the right ventricle. The valve was then secured to the annulus with a running suture. No annuloplasty ring was necessary (Figure 2).
Following separation from cardiopulmonary bypass, the newly implanted CorMatrix TV was evaluated using two-dimensional and three-dimensional (3D) imaging as well as color flow and spectral Doppler evaluation. Examination revealed mild residual tricuspid regurgitation, a mean inflow gradient of 1.0 mm Hg, and normal right ventricular function (Figures 3-5, Videos 2 and 3). The postoperative course was unremarkable, and the patient was discharged on postoperative day 14.
DISCUSSION
Comprehensive intraoperative TEE examination in patients with valvular endocarditis provides valuable information regarding the feasibility of repair. Currently available materials for valve repair (e.g., patch repair) or replacement are synthetics such as woven polyester (Dacron) and expanded polytetrafluoroethylene, and glutaraldehyde-cross-linked biological membranes such as bovine pericardium. Although such materials perform adequately as patch material, they have no capacity for bioresorption, they may become incorporated by fibrotic encapsulation, and they cannot restore regional tissue functionality. 7 ECM such as the CorMatrix ECM has recently been used for valve replacements in both the tricuspid and mitral positions. [8][9][10][11][12][13] ECM is the material surrounding cells in all tissues and is composed mainly of proteoglycans, forming a 3D, hydrated gel in the extracellular interstitial space, and fibrous proteins (elastins, fibronectins, and laminins). The composition of the main components varies depending on the function of the tissue or organ. Besides functioning as a passive support structure, it serves as a biological scaffold for the regulation of cell adhesion, cell differentiation, cell division, and cell migration. CorMatrix ECM is derived from porcine small intestinal submucosa composed of four major types of molecules: structural proteins, adhesion glycoproteins, glycosaminoglycans, and matricellular proteins. The ECM is then fashioned into a simple tubular valve, inspired by tubular aortic valves and the theory of "form follows function." 8,14 The hypothesis behind this principle is that native cardiac valves function as if they were simple tubes with sides that collapse when subjected to external pressure, and their form is dictated by the natural anatomic restraints placed on that tube (i.e., form follows function). 14 The valve competency and mechanical characteristics are similar to those of a Heimlich valve. Depending on the pressure gradients and direction of flow, the valve cylinder either closes or opens with a windsock-like motion.
Evaluation of the ECM valve should be performed in all midesophageal and transgastric views used for evaluation of the native TV. Color flow Doppler, spectral Doppler, and 3D imaging techniques should be used. The valve competency and mechanical characteristics, including the windsock-like motion, should be observed. Possible reported complications include annular ring disruption, papillary disruption, right ventricular dysfunction, residual tricuspid regurgitation, right ventricular outflow tract obstruction, pulmonary embolism, and recurrent endocarditis. [8][9][10][11][12][13]15

CONCLUSION

Echocardiographic evaluation of this novel valve implantation technique is important for the proper assessment of valve function. ECM valve replacements have unique mechanics and echocardiographic appearance. The absence of an annuloplasty ring plus its tubelike structure attached by neochords to the right ventricle are the main unique characteristic features of this novel valve.
VIDEO HIGHLIGHTS
Video 1: Two-dimensional TEE imaging shown in midesophageal four-chamber and midesophageal bicaval views demonstrating preprocedural mobile TV vegetations. Video 2: Two-dimensional TEE imaging shown in midesophageal four-chamber and midesophageal right ventricular inflow-outflow views with color flow Doppler demonstrating laminar flow and trace tricuspid regurgitation after TV replacement with CorMatrix valve. Video 3: Two-dimensional TEE imaging shown in transgastric right ventricular basal short-axis and transgastric RV inflow demonstrating valve motion. Three-dimensional TEE imaging of CorMatrix valve seen en face from right atrial and right ventricular perspective. Three-dimensional with color flow Doppler demonstrating competent valve.
View the video content online at www.cvcasejournal.com.
unique characteristic features of this novel valve. Echocardiographers performing examinations should be familiar with common complications and comprehensive evaluation techniques, including twodimensional and 3D imaging.
On the $C_p$-equivariant dual Steenrod algebra
We compute the $C_p$-equivariant dual Steenrod algebras associated to the constant Mackey functors $\underline{\mathbb{F}}_p$ and $\underline{\mathbb{Z}}_{(p)}$, as $\underline{\mathbb{Z}}_{(p)}$-modules. The $C_p$-spectrum $\underline{\mathbb{F}}_p \otimes \underline{\mathbb{F}}_p$ is not a direct sum of $RO(C_p)$-graded suspensions of $\underline{\mathbb{F}}_p$ when $p$ is odd, in contrast with the classical and $C_2$-equivariant dual Steenrod algebras.
Introduction
For over a decade, since the Hill-Hopkins-Ravenel solution of the Kervaire invariant one problem [HHR16], there has been great success in using exotic homotopy theories, like $C_{2^n}$-equivariant homotopy theory and motivic homotopy theory, to study classical homotopy theory at the prime 2. A key foundational input to many of these applications is the computation of the appropriate version of the dual Steenrod algebra, $\mathbb{F}_2 \otimes \mathbb{F}_2$, which was carried out by Hu-Kriz [HK01] in $C_2$-equivariant homotopy theory and by Voevodsky [Voe03] in motivic homotopy theory. One of the major obstacles to carrying out a similar program at odd primes is that we do not understand the structure of the dual Steenrod algebra in $C_p$-equivariant homotopy theory. The purpose of this paper is to make some progress towards this goal.
To motivate the statement of our main result, recall that we have the following description of the classical, $p$-local dual Steenrod algebra as a $\mathbb{Z}_{(p)}$-algebra. Here the tensor product is taken over the sphere spectrum, $S^0[x]$ denotes the free $E_1$-algebra on a class $x$, and the classes $t_i$ live in degree $2p^i - 2$. Modding out by $p$ causes each of the above cofibers to split into two classes related by a Bockstein; modding out by $p$ once more introduces the class $\tau_0$ and recovers Milnor's computation of $\mathcal{A}_* = \pi_*(\mathbb{F}_p \otimes \mathbb{F}_p)$, as an $\mathbb{F}_p$-algebra.
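For reference, the endpoint of this classical story is Milnor's computation (a standard fact, recorded here for orientation rather than taken from the text above): as an $\mathbb{F}_p$-algebra,
\[
\mathcal{A}_* = \pi_*(\mathbb{F}_p \otimes \mathbb{F}_p) \cong \mathbb{F}_p[\xi_1, \xi_2, \ldots] \otimes \Lambda(\tau_0, \tau_1, \ldots), \qquad |\xi_i| = 2p^i - 2, \quad |\tau_i| = 2p^i - 1,
\]
with $\tau_0$ dual to the Bockstein.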
In the $C_p$-equivariant case our description involves a similar decomposition but is more complicated in two ways: • Rather than extending the class $t_i$ to a map from $S^0[t_i]$ using the multiplication on $\mathbb{Z} \otimes \mathbb{Z}$, we will want to choose as generators a mixture of ordinary powers of $t_i$ and of norms, $N(t_i)$, of $t_i$.
• Rather than modding out by the relation '$pt_i = 0$' we will need to enforce the relation '$\theta t_i = 0$', where $\theta$ is an equivariant lift of $p$ to an element in nontrivial $RO(C_p)$-degree. We will then also need to enforce the relation $pN(t_i) = 0$.
To make this precise, we will assume that the reader is comfortable with equivariant stable homotopy theory as used, for example, in [HHR16], and introduce the following conventions, in force throughout the paper: • We will use $\rho_{C_p}$ to denote the regular representation of $C_p$.
• We will use $\lambda$ to denote the representation of $C_p$ on $\mathbb{R}^2 = \mathbb{C}$ where the generator acts by $e^{2\pi i/p}$.
• We denote by $\theta : S^{\lambda - 2} \to S^0$ the map of $C_p$-spectra arising from the degree $p$ cover $S^\lambda \to S^2$. We'll denote the cofiber of $\theta$ by $C\theta$. Note that the underlying nonequivariant spectrum of $C\theta$ is the Moore space $M(p)$.
• If $X$ is a spectrum, we will denote by $N(X)$ the Hill-Hopkins-Ravenel norm of $X$, which is a $C_p$-equivariant refinement of the ordinary spectrum $X^{\otimes p}$.
• We denote by $\mathbb{Z}$ and $\mathbb{F}_p$ the $C_p$-equivariant Eilenberg-MacLane spectra associated to the constant Mackey functors at $\mathbb{Z}$ and $\mathbb{F}_p$, respectively.
• The degree $k$ map $S^\lambda \to S^{\lambda^k}$ is a $p$-local equivalence when $(k, p) = 1$, so, when working $p$-locally, we will often make this identification implicitly. • We use $\pi_\star X$ to denote the $RO(C_p)$-graded homotopy groups of a $C_p$-spectrum, so that, when $\star = V - W$ is a virtual representation, $\pi_{V-W} X = \pi_0 \mathrm{Map}_{\mathrm{Sp}^{C_p}}(S^{V-W}, X)$.
Now we can give a somewhat ad hoc description of the equivariant refinements of the building blocks in $\mathbb{Z} \otimes \mathbb{Z}$.
Construction. Let $x$ be a formal variable in an $RO(C_p)$-grading $|x|$. Define a $C_p$-spectrum $T_\theta(x)$ as follows, where $M(p)$ is the mod $p$ Moore spectrum. We denote the inclusion of the summand $\Sigma^{i|x|} C\theta$ by $\bar{x}^i$, the restriction of $\bar{x}$ to the bottom cell by $x$, and the inclusion of the final summand by $\widehat{Nx}$. We denote by $Nx : S^{|x|\rho_{C_p}} \to T_\theta(x)$ the restriction of $\widehat{Nx}$ to the bottom cell of the mod $p$ Moore spectrum. Now suppose that $R$ is a $C_p$-ring spectrum equipped with a norm $N(R) \to R$. If we have a class $x \in \pi_\star R$ such that $\theta x = 0$, it follows that $p \cdot N(x) = 0$ (see the proof of Lemma 4.4), so we may produce a map $S^0 \oplus (S^0[Nx] \otimes T_\theta(x)) \to R$, which only depends on the choice of the nullhomotopy witnessing $\theta x = 0$.
We can now state our main theorem.
Theorem A. There are equivariant refinements of the nonequivariant classes $t_i \in \pi_*(\mathbb{Z}_{(p)} \otimes \mathbb{Z}_{(p)})$ which satisfy the relation $\theta t_i = 0$. For any choice of witness for these relations, the resulting map is an equivalence.
As an immediate corollary we have: Corollary. With notation as above, we have a corresponding description, where $\tau_0$ is dual to the Bockstein, in degree 1, and $\Lambda(\tau_0) = \mathbb{F}_p \oplus \Sigma \mathbb{F}_p$. In particular, since $\mathbb{F}_p \otimes C\theta$ is indecomposable at odd primes, the spectrum $\mathbb{F}_p \otimes \mathbb{F}_p$ is not a direct sum of $RO(C_p)$-graded suspensions of $\mathbb{F}_p$ at odd primes.
Remark. When p " 2 we have an accidental splitting Remark. One can show that F p b Cθ b Cθ splits as pF p b Cθq ' pF p b Σ λ´1 Cθq. It follows that F p b F p splits as a direct sum of cell complexes with at most 2 cells.
Our result raises a few natural questions which would be interesting to investigate.
Question 1. When specialized to $p = 2$, how does our basis compare to the Hu-Kriz basis?
Question 2. The geometric fixed points of $\mathbb{Z}_{(p)} \otimes \mathbb{Z}_{(p)}$ are given by $(\mathbb{F}_p \otimes \mathbb{F}_p)[b, \bar{b}]$, where $\bar{b}$ is the conjugate of $b$, a class in degree 2. It is possible to understand what happens to the generators $t_i$ and $\widehat{N(t_i)}$ upon taking geometric fixed points. One is left with trying to understand the remaining class hit by $\bar{t}_i$ on geometric fixed points. We don't know what this should be. One guess that seems consistent with computations is that this class is given, up to conjugating the $\tau_i$ and modding out by $(\bar{b})$, by a certain explicit expression; it would be useful for computations to sort out what actually occurs. Question 3. Is it possible to profitably study the $\mathbb{F}_p$-based Adams spectral sequence using this decomposition? Since $\mathbb{F}_p \otimes \mathbb{F}_p$ is not flat over $\mathbb{F}_p$, one would be forced to start with the $E_1$-term. But this is not an unprecedented situation (e.g., Mahowald had great success with the ko-based Adams spectral sequence).
Question 4. Can one describe the multiplication on $\pi_\star(\mathbb{F}_p \otimes \mathbb{F}_p)$ in terms of our decomposition?
Relation to other work
As we mentioned before, we were very much motivated by the description of the $C_2$-equivariant dual Steenrod algebra given by Hu-Kriz [HK01]. That said, our generators are slightly different from the Hu-Kriz generators when we specialize to $p = 2$. For example, the generator $t_1$ lives in degree $2\rho_{C_2} - \lambda = 2$, whereas the Hu-Kriz generator $\xi_1$ lives in degree $\rho = 1 + \sigma$. Hill and Hopkins have also obtained a presentation of the $C_{2^n}$-dual Steenrod algebra, using quotients of $BP_{\mathbb{R}}$ and its norms, which is similar in style to the one obtained here.
At odd primes, Caruso [Car99] studied the $C_p$-equivariant Steenrod algebra, $\pi_\star \mathrm{map}(\mathbb{F}_p, \mathbb{F}_p)$, essentially by comparing with the Borel equivariant Steenrod algebra and the geometric fixed point Steenrod algebra, and was able to compute the ranks of the integer-graded stems. There is also work of Oruç [Oru89] computing the dual Steenrod algebra for the Eilenberg-MacLane spectra associated to Mackey fields (which does not include $\mathbb{F}_p$).
In the Borel equivariant setting, the dual Steenrod algebra is given by the action Hopf algebroid for the coaction of the classical dual Steenrod algebra on $H_*(BC_p)$ (see [Gre88]).
There is also related work from the first and second authors. The first author produced a splitting of $\mathbb{F}_p \otimes \mathbb{F}_p$ in [San19] using the symmetric power filtration. The summands in that splitting were roughly given by the homology of classifying spaces, and were much larger than the summands produced here. The second author and Jeremy Hahn showed [HW20] that $\mathbb{F}_p$ can be obtained as a Thom spectrum on $\Omega^\lambda S^{\lambda + 1}$. The Thom isomorphism then reduces the study of the dual Steenrod algebra to the computation of the homology of $\Omega^\lambda S^{\lambda + 1}$. Understanding the relationship between this picture and the one in this article is work in progress.
Outline of the proof
To motivate our method of proof, let's first revisit the classical story. We are interested in where the classes $t_i \in \pi_*(\mathbb{Z} \otimes \mathbb{Z})$ come from, and why they are annihilated by $p$.
Recall that the homology of $\mathbb{CP}^\infty$ is a divided power algebra, and that there is a homology suspension map $\sigma : H_*(\mathbb{CP}^\infty) \to \pi_{*-2}(\mathbb{Z} \otimes \mathbb{Z})$ which annihilates elements decomposable with respect to the product structure on $H_*(\mathbb{CP}^\infty)$. We can take $t_i := \sigma(\beta_{(i)})$. The relation $pt_i = 0$ follows from the fact that $p\beta_{(i)}$ is, up to a $p$-local unit, decomposable as $\beta_{(i-1)}^p$ in $H_*(\mathbb{CP}^\infty)$. In the equivariant case, we will proceed similarly.
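To see this last divided-power fact concretely (a routine check spelled out here, not part of the original text): writing $\beta_{(i)} = \gamma_{p^i}(\beta_1)$ and using the divided power relation $\gamma_m(x)\gamma_n(x) = \binom{m+n}{m}\gamma_{m+n}(x)$, one finds
\[
\beta_{(i-1)}^p = \frac{(p^i)!}{((p^{i-1})!)^p}\,\gamma_{p^i}(\beta_1), \qquad v_p\!\left(\frac{(p^i)!}{((p^{i-1})!)^p}\right) = \frac{p^i - 1}{p - 1} - p\cdot\frac{p^{i-1} - 1}{p - 1} = 1,
\]
so the multinomial coefficient is exactly $p$ times a $p$-local unit, and $p\beta_{(i)}$ is decomposable up to a unit.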
Step 1. Compute the homology of $K(\mathbb{Z}, \lambda)$ and use the homology suspension to define classes in $\pi_\star(\mathbb{Z} \otimes \mathbb{Z})$.
Step 2. Use information about the product structure on the homologies of $K(\mathbb{Z}, \lambda)$ and $K(\mathbb{Z}, 2)$ to deduce relations for these classes, and hence produce the map described in Theorem A.
Step 3. Verify that the map in Theorem A is an equivalence by proving that it is an underlying equivalence and an equivalence on geometric fixed points.
The first step is carried out in §2 and §3 by identifying $K(\mathbb{Z}, \lambda)$ with an equivariant version of $\mathbb{CP}^\infty$ and then specializing a computation due to Lewis [Lew88], which we review in our context. The second step is carried out in §4. The third and final step is carried out in §6 using a lemma proven in §5 that allows us to check that the map on geometric fixed points is an equivalence by just verifying that the source and target have the same dimensions in each degree.
Homology of $B_{C_p}S^1$

Recall that we have the $C_p$-space $B_{C_p}S^1$ classifying equivariant principal $S^1$-bundles. The following lemmas give two useful ways of thinking about this space.
Lemma 2.1. The complex projective space $\mathbb{P}(\mathbb{C}[z])$ is a model for $B_{C_p}S^1$, where the generator of $C_p$ acts on $\mathbb{C}[z]$ through ring maps by $z \mapsto e^{2\pi i/p} z$. Here $\mathbb{C}[z]$ is the ordinary polynomial ring over $\mathbb{C}$, and the projective space $\mathbb{P}(\mathbb{C}[z]) = (\mathbb{C}[z] - \{0\})/\mathbb{C}^\times$ inherits an action in the evident way.
Lemma 2.2. The space $B_{C_p}S^1$ is a model for $K(\mathbb{Z}, \lambda)$.
Proof. The map $\mathbb{P}(\mathbb{C}[z]) \to \mathrm{SP}^\infty(S^\lambda)$ to the infinite symmetric product, which sends a polynomial $f(z)$ to its set of roots (with multiplicity), is an equivariant homeomorphism. The group-completion of the latter is a model for $K(\mathbb{Z}, \lambda)$ by the equivariant Dold-Thom theorem. But $\mathrm{SP}^\infty(S^\lambda)$ is already group-complete: the monoid of connected components of the fixed points is $\mathbb{N}/p = \mathbb{Z}/p$.
Remark 2.3. The reader may object that the definition of $B_{C_p}S^1$ makes no reference to $\lambda$, so how does $B_{C_p}S^1$ know about this representation rather than $\lambda^k$ for some $k$ coprime to $p$?
The answer is that, in fact, the Eilenberg-MacLane spaces $K(\mathbb{Z}, \lambda^k)$ coincide for all such $k$: we have an equivalence of $\mathbb{Z}$-modules $\Sigma^\lambda \mathbb{Z} \simeq \Sigma^{\lambda^k} \mathbb{Z}$ whenever $(k, p) = 1$. This follows from the computations in [FL04, Proposition 9.2], for example.
The filtration of $\mathbb{C}[z]$ by the subspaces $\mathbb{C}[z]^{\le n}$ of polynomials of degree at most $n$ gives a filtration of $B_{C_p}S^1$.
Lemma 2.4. There is a canonical equivalence identifying the filtration quotients, where $V_k = \bigoplus_{0 \le i \le k-1} \lambda^{i-k}$.

Proof. This follows from a more general observation. If $L$ is a one-dimensional complex representation, and $V$ is an arbitrary complex representation, then the function assigning a linear map to its graph is an equivariant homeomorphism, so it induces an equivalence on one-point compactifications. The next proposition now follows from [Lew88, Proposition 3.1].
Proposition 2.5 (Lewis). The above filtration on $B_{C_p}S^1$ splits after tensoring with $\mathbb{Z}$, giving an equivalence. In particular, for $i \ge 1$ we have $|e_{p^i}| = 2p^{i-1}\rho_{C_p}$.
We will also need some information about the multiplicative structure on homology.
" y to mean that x " αy for some α P Zp pq , we have " θe p , and e p p i . " pe p i`1 for i ě 1.
Proof. Using the model for $B_{C_p}S^1$ given by $\mathbb{P}(\mathbb{C}[z])$, we see that, in fact, $\mathbb{P}(\mathbb{C}[z])$ has the structure of a filtered monoid. It follows that the product in homology respects the filtration by the classes $\{e_i\}$. Thus, for $i \ge 0$, we have an expansion of $e_{p^i}^p$ in the classes $e_j$, where the coefficients $c_{i,j}$ lie in $\pi_\star \mathbb{Z}$. When $j < p^{i+1}$ we see that the virtual representations $|c_{i,j}|$ have positive virtual dimension and their fixed points also have positive virtual dimension. The homotopy of $\mathbb{Z}$ vanishes in these degrees (see, e.g., [FL04, Theorem 8.1(iv)]), so we must have $e_{p^i}^p = c_{i,p^{i+1}} e_{p^{i+1}}$, where $|c_{0,p}| = \lambda - 2$ and $|c_{i,p^{i+1}}| = 0$ when $i \ge 1$. In both cases, the restriction map on $\pi_\star \mathbb{Z}$ is injective in this degree, so the result follows from the nonequivariant calculation.
Suspending classes
We begin with some generalities. If $X$ is any $C_p$-spectrum, we have the counit map, called the homology suspension. Just as in the classical case, $\sigma$ annihilates decomposable elements in $\pi_*(\mathbb{Z} \otimes \Sigma^\infty \Omega^\infty X)$.
Construction 3.1. For $i \ge 1$, we define $t_i$ as the homology suspension of the element $e_{p^i} \in \pi_{2p^{i-1}\rho_{C_p}}(\mathbb{Z} \otimes B_{C_p}S^1)$. Here we use the identification $B_{C_p}S^1 \simeq K(\mathbb{Z}, \lambda) = \Omega^\infty \Sigma^\lambda \mathbb{Z}$.
Two relations in homology
We begin with a brief review of norms, transfers, and restrictions.
Remark 4.1 (Transfer and restriction). Given a nonequivariant equivalence $(S^V)^e \simeq S^n$, we define $\mathrm{res} : \pi_V X \to \pi_n X^e$ by $(x : S^V \to X) \mapsto (S^n \simeq (S^V)^e \to X)$, and a transfer $\mathrm{tr}_V : \pi_n X^e \to \pi_V X$ defined dually. For example, when $V = \lambda - 2$ and $X = S^0$, then $\mathrm{tr}_{\lambda - 2}(1) = \theta$.
Changing the equivalence $(S^V)^e \simeq S^n$ has the effect of altering these classes by $\pm 1$; in our case the representations in question have canonical orientations so this will not be a concern. Given a map $X \otimes Y \to Y$ we have a corresponding projection formula. Remark 4.2 (Norms). If a $C_p$-spectrum $X$ has a map $N(X) \to X$, then, given an underlying class $x : S^n \to X^e$, we may define a norm by the composite $Nx : N(S^n) = S^{n\rho_{C_p}} \to N(X) \to X$.
The underlying nonequivariant class is given by $\mathrm{res}(Nx) = \prod_{g \in C_p} (gx) \in \pi_{pn} X^e$. Our goal in this section is to prove the following two lemmas.
Lemma 4.3. The classes $t_i \in \pi_{2p^{i-1}\rho_{C_p} - \lambda}(\mathbb{Z}_{(p)} \otimes \mathbb{Z}_{(p)})$ satisfy $\theta t_i = 0$.
Lemma 4.4. The classes $N(t_i) \in \pi_{(2p^i - 2)\rho_{C_p}}(\mathbb{Z}_{(p)} \otimes \mathbb{Z}_{(p)})$ satisfy $pN(t_i) = 0$.
In fact, the second relation follows from the first.
Proof of Lemma 4.4 assuming Lemma 4.3. Since $p = \mathrm{tr}(1)$, the class $pN(t_i)$ is the transfer of the class $\mathrm{res}(t_i)^p$ into degree $(2p^i - 2)\rho_{C_p}$. Notice that $(2p^i - 2)\rho_{C_p} - |t_i^p| = \lambda - 2$ (after identifying the $\lambda^k$ suspensions with $\lambda$ for $(k, p) = 1$), and the transfer of $1$ into this degree is $\theta$, so we have $pN(t_i) = \theta t_i^p = 0$.
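The degree bookkeeping here is worth recording explicitly (a routine verification, not in the original text). Since $|t_i| = 2p^{i-1}\rho_{C_p} - \lambda$ and, $p$-locally, $2\rho_{C_p} \simeq 2 + (p-1)\lambda$ after identifying all $\lambda^k$ with $\lambda$, we get
\[
(2p^i - 2)\rho_{C_p} - |t_i^p| = (2p^i - 2)\rho_{C_p} - (2p^i\rho_{C_p} - p\lambda) = p\lambda - 2\rho_{C_p} \simeq p\lambda - 2 - (p-1)\lambda = \lambda - 2,
\]
which is exactly the degree of $\theta$.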
Proof of Lemma 4.3. By Lemma 2.6, we have $e_1^p \doteq \theta e_p$, so that $\theta t_1 = \sigma(\theta e_p) = 0$, since $\sigma$ annihilates decomposables. For the remaining classes, consider the commutative diagram relating $B_{C_p}S^1$ and $K(\mathbb{Z}, 2)$, where $[\theta] = \Omega^\infty(\theta)$. Thus, to show that $\theta t_i = 0$ for $i \ge 2$, it is enough to show that $[\theta]_* e_{p^i}$ is decomposable in $\pi_\star(\mathbb{Z}_{(p)} \otimes K(\mathbb{Z}, 2)_+)$ for $i \ge 2$. Write the homology in terms of the elements $\gamma_i(\beta_1)$, the standard module generators of $H_*(\mathbb{CP}^\infty; \mathbb{Z})$, and write $\beta_{(i)} = \gamma_{p^i}\beta_1$. To show that $[\theta]_*(e_{p^i})$ is decomposable for $i \ge 2$, it is enough to establish the following two claims:
Claim (b) is just the classical computation of the product in homology for $H_*(\mathbb{CP}^\infty; \mathbb{Z})$. For claim (a), let $\iota_\lambda$ denote the fundamental class in cohomology for $K(\mathbb{Z}, \lambda)$ and $\iota_2$ the same for $K(\mathbb{Z}, 2)$. Then we have $[\theta]^*(\iota_2) = \theta \iota_\lambda$ by design, and hence $[\theta]^*(\iota_2^j) = \theta^j \iota_\lambda^j$.
The map on homology is now determined by this relation. Since $\theta^j$ is a transferred class, the value above is also a transfer, and hence determined by its restriction to an underlying class. But $\mathrm{res}([\theta]) = [p]$ and we clearly have $[p]_*(\mathrm{res}(e_{p^i})) = p^i \beta_{(i)}$, which agrees with the restriction of $p^{i-1}\theta\, u_\lambda^{p^i(p-1)-1} \beta_{(i)}$. This completes the proof.
Digression: Detecting equivalences nonequivariantly
The goal of this section is to establish a criterion for detecting equivalences of $\mathbb{Z}$-modules. We recall that $\mathbb{Z}^{\Phi C_p} \simeq \mathbb{F}_p[b]$, where the class $b$ in degree 2 arises from taking the geometric fixed points of the Thom class $u_\lambda : S^\lambda \to \Sigma^2 \mathbb{Z}$. Suppose $f : M \to N$ is a map of $\mathbb{Z}$-modules such that: (i) $f$ is an underlying equivalence.
(ii) $\pi_j M^{\Phi C_p}$ and $\pi_j N^{\Phi C_p}$ are finite dimensional of the same rank, for all $j$.
(iii) $\pi_* M^{\Phi C_p}$ and $\pi_* N^{\Phi C_p}$ are graded-free $\mathbb{F}_p[b]$-modules.
Then $f$ is an equivalence.
We will deduce this proposition from the following one, which relates geometric and Tate fixed points. In particular, $M^e$ is an $\mathbb{F}_p[C_p]$-module. Let $\gamma$ denote the generator of $C_p$, so that $\mathbb{F}_p[C_p] = \mathbb{F}_p[\gamma]/(1 - \gamma)^p$. Let $F_j M \subseteq M$ be the sub-Mackey functor generated by $(1 - \gamma)^j M^e \subseteq M^e$. This is a finite filtration with associated graded pieces given by Mackey functors with trivial underlying action. So, since $\mathcal{E}$ is a thick subcategory, we are reduced to the case when $M$ is a discrete $\mathbb{F}_p$-module with trivial underlying action.
For the next reduction we recall some notation. If $N$ is any Mackey functor, denote by $N_{C_p}$ the Mackey functor $N \otimes {C_p}_+$ and, if $A$ is an abelian group, denote by $A^{\mathrm{tr}}$ the Mackey functor whose transfer map is the identity on $A$ and whose restriction map is multiplication by $p$. We also recall that the transfer extends to a map of Mackey functors $\mathrm{tr} : N_{C_p} \to N$. Now consider the two evident exact sequences. If $N$ is any Mackey functor with $N^e = 0$, then $N \in \mathcal{E}$, since then $N = N^{\Phi C_p}$ is bounded above and hence $N^{\Phi C_p}[b^{-1}] = 0$. Thus, from the exact sequences above, we are reduced to the case where $M$ is of the form $V^{\mathrm{tr}}$ for an $\mathbb{F}_p$-vector space $V$ (with trivial action). Now recall that $(\mathbb{F}_p)^{\mathrm{tr}} = \Sigma^{2-\lambda} \mathbb{F}_p$ and hence $V^{\mathrm{tr}} = \Sigma^{2-\lambda} V$. So we are reduced to showing that the constant Mackey functor $V$ lies in $\mathcal{E}$, where $V$ is an $\mathbb{F}_p$-vector space with trivial action. This certainly holds for $V = \mathbb{F}_p$, and in general we have the evident splitting.
Proof of the main theorem
We are now ready to prove the main theorem. Recall that we have constructed classes $t_i \in \pi_{2p^{i-1}\rho_{C_p} - \lambda}(\mathbb{Z}_{(p)} \otimes \mathbb{Z}_{(p)})$, and shown that $\theta t_i = 0$ and $pN(t_i) = 0$. With notation as in the introduction, let $X$ be the spectrum appearing in Theorem A. Then, choosing nullhomotopies which witness $\theta t_i = 0$, we get a map $f$. The main theorem is then the statement: Theorem 6.1. The map $f$ is an equivalence.
Proof. Combine Proposition 5.1 with the two lemmas below.
Lemma 6.2. The map $f^e$ is an underlying equivalence.
Proof. First observe that, by our construction in the proof of Lemma 4.4, the map $\widehat{N(t_i)}$ restricts to the map $t_i^{p-1}\bar{t}_i$, since the nullhomotopy witnessing $pN(t_i) = 0$ was chosen to restrict to the nullhomotopy chosen for $pt_i^p$ that came from the already chosen nullhomotopy of $pt_i$. The upshot is that the map restricts on underlying spectra to the map obtained just from the relation $pt_i = 0$ and extended via the multiplicative structure.
In particular, on mod $p$ homology $f^e$ induces a ring map. We know that $t_i$ maps to $\xi_i$ and that $\beta x_i = t_i$, so that $\beta(f^e(x_i)) = \xi_i$. Modulo decomposables, $\tau_i$ is the only element whose Bockstein is $\xi_i$. So $x_i$ must map to $\tau_i$, mod decomposables. It follows that $f^e$ is a mod $p$ equivalence, and hence an equivalence.
Lemma 6.3. $(\mathbb{Z} \otimes X)^{\Phi C_p}$ and $(\mathbb{Z} \otimes \mathbb{Z})^{\Phi C_p}$ are free $\mathbb{F}_p[b]$-modules, finite-dimensional in each degree, and isomorphic as graded vector spaces over $\mathbb{F}_p$.
Proof. If $Y$ is any $C_p$-spectrum, then $(\mathbb{Z} \otimes Y)^{\Phi C_p}$ is a free $\mathbb{F}_p[b]$-module. Applying this in the cases $Y = X$ and $Y = \mathbb{Z}$, we see that each is a free $\mathbb{F}_p[b]$-module, evidently finite-dimensional in each degree. So it suffices to prove that the two agree as graded vector spaces. Notice that we can write the geometric fixed points, as graded vector spaces, in terms of classes with $|\sigma_{i-1}| = 2p^{i-1} - 1$ and $|d_{(i-1)}| = 2p^{i-1}$. Indeed, $\bar{t}_i$, on geometric fixed points, gives rise to two classes; one we are calling $d_{(i-1)}$ and the other we are calling $\sigma_{i-1}$. Similarly, $\widehat{N(t_i)}$, on geometric fixed points, gives rise to two classes: one we are calling $\xi_i$ and the other $\tau_i$, in their usual degrees. The relations are the ones needed to ensure that the monomials not arising from geometric fixed points of elements in $X_i$ are omitted.
It follows that we have an isomorphism of graded vector spaces $\mathbb{F}_p \otimes X^{\Phi C_p} \cong \mathbb{F}_p[\xi_n : n \ge 1] \otimes_{\mathbb{F}_p} \mathbb{F}_p[d_{(i)} : i \ge 0] \otimes_{\mathbb{F}_p} \Lambda(\sigma_j, \tau_k : j \ge 0, k \ge 1)\,/\,(d_{(i)}^p,\ d_{(i-1)}\tau_i,\ d_{(i)}^{p-1}\sigma_i,\ \sigma_{i-1}\tau_i)$.
We are trying to show that this is isomorphic, as a graded vector space, to $(\mathbb{F}_p \otimes \mathbb{F}_p)[b] \cong \mathbb{F}_p[\xi_n : n \ge 1] \otimes_{\mathbb{F}_p} \Lambda(\tau_i : i \ge 0) \otimes_{\mathbb{F}_p} \mathbb{F}_p[b]$.
We may regard each vector space as a module over $\mathbb{F}_p[\xi_n : n \ge 1]$ in the evident way, and hence reduce to showing that the two vector spaces $V = \Lambda(\tau_i : i \ge 0) \otimes_{\mathbb{F}_p} \mathbb{F}_p[b]$ and $W = \mathbb{F}_p[d_{(i)} : i \ge 0] \otimes_{\mathbb{F}_p} \Lambda(\sigma_j, \tau_k : j \ge 0, k \ge 1)/(d_{(i)}^p,\ d_{(i-1)}\tau_i,\ d_{(i)}^{p-1}\sigma_i,\ \sigma_{i-1}\tau_i)$ are isomorphic. (Here recall that $|\sigma_i| = |\tau_i| = 2p^i - 1$, $|b| = 2$, and $|d_{(i)}| = 2p^i$.)
• J¨K " I¨K " p0, 0, ...q. That is: I and K have disjoint support and J and K have disjoint support.
Then $V$ has a basis of monomials $M_{I,J,K}$, and comparing with the corresponding monomial basis of $W$ via the shift $K'[1] = (0, \kappa'_0, \kappa'_1, \ldots)$, these have the same number of basis elements in each dimension, so $V \cong W$.
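As an independent sanity check on this dimension count (not part of the paper), here is a brute-force enumeration of the graded dimensions of $V$ and $W$ for $p = 3$ in degrees up to 40; the degree cutoff and the number of generator indices retained are illustrative choices, and all omitted generators have degree above the cutoff.

```python
# Compare graded dimensions of V = Lambda(tau_i : i>=0) (x) F_p[b] and of
# W = F_p[d_(i)] (x) Lambda(sigma_j : j>=0, tau_k : k>=1) modulo the monomial
# relations (d_(i)^p, d_(i-1) tau_i, d_(i)^(p-1) sigma_i, sigma_(i-1) tau_i),
# with |tau_i| = |sigma_i| = 2p^i - 1, |b| = 2, |d_(i)| = 2p^i.
from itertools import product

p, N, n = 3, 40, 4  # prime, top degree, number of generator indices kept

def v_dims():
    dims = [0] * (N + 1)
    taus = [2 * p**i - 1 for i in range(n)]
    for mask in product((0, 1), repeat=n):
        d = sum(m * t for m, t in zip(mask, taus))
        if d > N:
            continue
        for m in range((N - d) // 2 + 1):  # powers of b, |b| = 2
            dims[d + 2 * m] += 1
    return dims

def w_dims():
    dims = [0] * (N + 1)
    dv = [2 * p**i for i in range(n)]      # |d_(i)|
    sv = [2 * p**i - 1 for i in range(n)]  # |sigma_i| = |tau_i|
    for a in product(range(p), repeat=n):        # d-exponents, capped by d^p = 0
        for e in product((0, 1), repeat=n):      # sigma-exponents
            for t in product((0, 1), repeat=n):  # tau-exponents; tau_0 absent
                if t[0]:
                    continue
                if any(t[i] and a[i - 1] for i in range(1, n)):    # d_(i-1) tau_i = 0
                    continue
                if any(e[i] and a[i] == p - 1 for i in range(n)):  # d_(i)^(p-1) sigma_i = 0
                    continue
                if any(t[i] and e[i - 1] for i in range(1, n)):    # sigma_(i-1) tau_i = 0
                    continue
                d = sum(a[i] * dv[i] + (e[i] + t[i]) * sv[i] for i in range(n))
                if d <= N:
                    dims[d] += 1
    return dims

print(v_dims() == w_dims())  # expected: True in this range
```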
Time-optimal universal quantum gates on superconducting circuits
Decoherence is inevitable when manipulating quantum systems. It decreases the quality of quantum manipulations and thus is one of the main obstacles for large-scale quantum computation, where high-fidelity quantum gates are needed. Generally, the longer a gate operation is, the more decoherence-induced gate infidelity will be. Therefore, how to shorten the gate time becomes an urgent problem to be solved. To this end, time-optimal control based on solving the quantum brachistochrone equation is a straightforward solution. Here, based on time-optimal control, we propose a scheme to realize universal quantum gates on superconducting qubits in a two-dimensional square lattice configuration, and the two-qubit gate fidelity approaches 99.9\%. Meanwhile, we can further accelerate the Z-axis gate considerably by adjusting the detuning of the external driving. Finally, in order to reduce the influence of the dephasing error, decoherence-free subspace encoding is also incorporated in our physical implementation. Therefore, we present a fast quantum scheme which is promising for large-scale quantum computation.
I. INTRODUCTION
Due to its intrinsic superposition nature, quantum computation can deal with problems that are hard for classical computers. Recently, quantum computation has been proposed to be implemented on various quantum platforms [1][2][3][4], among which the superconducting quantum circuit system is one of the most promising candidates [5][6][7][8]. However, besides the existence of operational errors, a quantum system will inevitably couple to its surrounding environment, leading to increasing distortion of quantum states and operations. Therefore, how to achieve high-fidelity quantum gates on quantum systems is an urgent problem to be solved.
In the presence of noise, high-quality quantum control can be realized by the fastest possible evolution. Therefore, finding a shorter gate evolution path to shorten the gate time has become an effective means to achieve high-fidelity quantum gates [9]. Time-optimal control (TOC) based on solving the quantum brachistochrone equation (QBE) [10] is an effective method to shorten the evolution time [11]. Recently, TOC-based schemes for unitary operations have been proposed [11][12][13][14][15][16][17][18] and experimentally demonstrated [19][20][21][22][23][24], where the time needed for specific quantum gates has been reduced significantly. However, universal quantum control with an analytical solution is only possible for specific cases [12].
Here, based on TOC, we propose a scheme to realize universal quantum gates on superconducting transmon qubits, arranged in a two-dimensional (2D) square lattice configuration, which can support large-scale universal quantum computation. In our scheme, by time-dependent modulation of a superconducting qubit, we can achieve tunable coupling between two transmon qubits [25][26][27], which can be readily used to induce target quantum gates in only one step. Meanwhile, we can further shorten the time for Z-axis rotations by adjusting the time-independent driving detuning. Note that, in the previous work [15], Z-axis rotation gates can only be achieved in two sequential steps, i.e., by a combination of X- and Y-axis rotations, thus leading to longer gate times. Furthermore, to eliminate the effect of collective dephasing, which is an important factor affecting the gate fidelity, and differently from the previous work [15], decoherence-free subspace (DFS) encoding [28][29][30] has been incorporated, and the robustness of our gates with respect to decoherence is verified. Therefore, our work realizes high-fidelity universal quantum gates on superconducting circuits, which is a promising scheme for future large-scale quantum computation.
II. QUANTUM GATES VIA TOC
As is well known, quantum computation can be implemented based on two-level quantum systems, i.e., using qubit systems. Therefore, we begin by presenting a general method of constructing quantum gates via TOC, based on a qubit system, denoted by {|0⟩ = (1, 0)†, |1⟩ = (0, 1)†}. Assuming ℏ = 1, the general light-matter interaction can be written as in Eq. (1), where Ω(t) and ϕ(t) are the amplitude and the phase of the driving field, respectively, and δ(t) is the detuning between the qubit frequency and the driving field frequency. Choosing two mutually orthogonal evolution states, |Ψ±(t)⟩, that satisfy the time-dependent Schrödinger equation of the Hamiltonian in Eq. (1), the evolution operator can be written in time-ordered form, where T is the time-ordering operator.
Considering experimental implementations, the interaction term in Eq. (1), H_c(t) = Ω(t)[cos ϕ(t) σ_x + sin ϕ(t) σ_y]/2, with σ_x,y,z being the Pauli matrices, needs to satisfy the following two conditions. First, the coupling strength Ω(t) is adjusted within a restricted range, so there is an upper bound on Ω(t), and the normalization constraint f_1(H_c(t)) = [Tr(H_c(t)^2) − Ω(t)^2/2]/2 = 0 needs to be satisfied. Second, the form of the interaction term H_c(t) is usually not arbitrary and an independent σ_z operator cannot be achieved, so it is necessary to satisfy f_2(H_c(t)) = Tr(H_c(t)σ_z) = 0. Considering these two conditions, based on the QBE, where the Lagrange multipliers λ_j are defined as λ_1 = 1/Ω(t) and λ_2 = −c/2, with c a constant, we can obtain Eq. (7). For a specific gate, ϕ_− is set, and the gate time τ is determined by the equation ∫_0^τ φ(t)dt = 2ϕ_−. To get the shortest operation time τ, φ(t) should be a constant. Then, by solving Eq. (7), Ω(t) is a constant too, which is set to its maximum Ω(t) = [Ω(t)]_max = Ω according to the specific system. That is, to implement a fast quantum gate, the coupling strength, detuning, and phase need to satisfy the conditions Ω̇(t) = 0, δ̇(t) = 0, and φ(t) = η, respectively, where η is a constant.
Therefore, based on TOC, the evolution operator in Eq. (8) can be rewritten as in Eqs. (11)-(13), and U_g can be used to obtain arbitrary single-qubit gates. Note that the operation in Eq. (12) can be implemented in a single-step way, instead of being constructed by sequential gates as in conventional cases. Besides, in the above example, an arbitrary single-qubit gate U_g can be obtained by simple square pulses. However, we want to emphasize that the parameters of the driving field are only required to meet the condition in Eq. (7); i.e., Ω(t) can be time-dependent in general, and thus further pulse shaping is also allowed in Eqs. (12) and (13). We here take the universal gate set {H, S, T} as a typical example. For the S and T gates, setting γ′ = π, ϕ_S^− = −3π/4, and ϕ_T^− = −7π/8, we can obtain the S gate and the T gate, respectively. The gate times follow accordingly, where δ can be adjusted to further accelerate the gates, due to the extra freedom on χ.
For the H gate, χ_H = π/4, so that η_H − δ_H = Ω_H. As γ′_H = π/2, from Eq. (13), we get τ_H = π/(√2 Ω_H). When ϕ_0 = (2n + 1)π, with n being an arbitrary integer, the evolution operator in Eq. (11) reduces to a simple form. When we set ητ = 2π, we obtain an H gate, and τ_H is only determined by Ω = η − δ; changing one of them will lead to a change of the other, and thus no acceleration of the gate time can be obtained. Note that the implementation of our proposal with superconducting qubits is straightforward, as the Hamiltonian in Eq. (1) is readily realizable experimentally [26] by directly applying a microwave drive to a qubit. Besides, the tunable coupling of two qubits [25][26][27] can be used to construct two-qubit gates. However, this direct implementation is limited by the weak anharmonicity of transmon qubits and their crosstalk-induced Z error.
III. PHYSICAL IMPLEMENTATION WITH ENCODING
Here, to further decrease the Z errors induced by qubit crosstalk and the dephasing effect of physical qubits, we incorporate DFS encoding in our scheme. We consider the implementation of the TOC scheme with encoding based on a 2D square lattice consisting of superconducting transmon qubits, as shown in Fig. 1(a), where a transmon serves as a physical qubit. Labeling two adjacent transmon qubits as T_1 and T_2, as shown in Fig. 1(a), the logical qubits can be encoded in their single-excitation subspace, i.e., S_1 = Span{|0⟩_L = |10⟩_12, |1⟩_L = |01⟩_12}. By this encoding, the logical qubit can resist the collective Z error of the physical qubits. Besides, this single-excitation subspace encoding can effectively suppress the leakage error present in the single-physical-qubit case, due to the weak anharmonicity of transmon qubits, as transitions between different excitation subspaces are energetically suppressed.
A. Single-logical-qubit gates via TOC
As the coupling strength between two adjacent transmons is usually fixed, to control single-logical-qubit units and two-logical-qubit units independently and construct the targeted quantum gates exactly, tunable interactions between any two transmon qubits should be achieved. For two adjacent transmon qubits T_i and T_j, the interaction Hamiltonian takes the standard capacitively coupled form, where ω_i,j and α_i,j are the frequency and the anharmonicity of the ith and jth transmon qubits T_i and T_j, respectively. To achieve tunable coupling between T_i and T_j, we add a frequency modulation in the form of ω_j = ω_j0 + ϵ_j cos[ν_j t + ϕ_j(t)] for qubit T_j, with the driving frequency and the phase being ν_j and ϕ_j(t), respectively. Meanwhile, the frequency of T_i is fixed, written as ω_i = ω_i0, in the same layout as ω_j. Moving into the interaction picture, with b_i,j = (|0⟩_i,j⟨1| + √2 |1⟩_i,j⟨2|) and Γ_j = ϵ_j/[ν_j + ϕ̇_j(t)], and then using the Jacobi-Anger identity, exp(−iΓ sin θ) = Σ_n J_n(Γ) exp(−inθ), where J_n is the nth Bessel function, the transformed Hamiltonian can be written as Eq. (20), where ∆_ij = −∆_ji = ω_i0 − ω_j0 is the frequency difference between T_i and T_j. The energy spectrum is shown in Fig. 1(b), and adjacent levels within a certain excitation subspace can be used to realize different quantum gates. In addition, Γ_j can be tuned to achieve adjustable coupling between qubits T_i and T_j, and thus we can select appropriate parameters of the modulation field to construct target quantum gates.
In order to obtain a two-level Hamiltonian in the form of Eq. (1) for constructing universal quantum gates, it is natural to go into a rotating frame, in which the transformed Hamiltonian of Eq. (20) can be rewritten. Choosing the modulating frequency to meet ν_2 = ∆_12 − δ in Eq. (22), and after the rotating-wave approximation, we obtain the Hamiltonian in the logical basis S_1, where Ω = 2g_12 J_1(Γ_2). By adjusting the pulse parameters ϵ_2, ν_2, and ϕ_2, we can find a path that accords with the TOC-based scheme. Therefore, according to the general theory in the last section, we can use the TOC-based scheme to construct arbitrary single-logical-qubit quantum gates. We set different physical-qubit parameters for the H, S, and T gates. The H gate corresponds to γ′_H = π/2, ϕ_H(0) = π, ϕ_H^− = π, and χ_H = π/4. The S and T gates correspond to γ′_S = γ′_T = π, ϕ_S^− = −3π/4, ϕ_T^− = −7π/8, and ϕ_S(0) = ϕ_T(0) = 0. Based on TOC and solving Eq. (10), ϕ(t) here takes the form of a linear function and χ is a constant, whose path on the Bloch sphere is illustrated in Fig. 1(c).
Next, we use the master equation, Eq. (24), with A(b) = 2bρb† − b†bρ − ρb†b and b_k^z = b_k†b_k, to simulate the performance of our scheme for the single-logical-qubit gates, where ρ is the density operator of the quantum system, and r_k^− and r_k^z [8] are the decay and dephasing rates of the two transmon qubits T_1 and T_2, corresponding to τ_− = 1/r_− ≈ 40 µs and τ_z = 1/(2r_z) ≈ 20 µs, respectively. As shown in Fig. 2, taking g_12 and ∆_12 as variables, we numerically obtain the fidelity of the H, S, and T gates, defined as F = Tr(U†U′)/Tr(U†U), where U′ represents the evolution matrix under decoherence. For typical examples, we consider the parameters of the physical qubits as follows. The qubit frequency difference is ∆_12 = 2π × 520 MHz, and the capacitive coupling strength is g_12 = 2π × 14.5 MHz; the detunings of the H, S, and T gates are modulated to δ_H = 2π × 29.58 MHz, δ_S = 2π × 25 MHz, and δ_T = 2π × 15 MHz; Γ_2 is set to 1.5; and Ω = 2π × 16.18 MHz. With these settings, the gate operation times of the H, S, and T gates are 21.9, 9.5, and 7.8 ns, respectively, and the fidelities of the H, S, and T gates can reach F_H = 99.89%, F_S = 99.96%, and F_T = 99.97%, respectively.
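The quoted numbers can be cross-checked, and the master-equation simulation sketched, in a few lines of Python. The following is a minimal sketch (not the authors' code): the effective coupling Ω = 2g_12 J_1(Γ_2) and the TOC gate time τ_H = π/(√2 Ω) are evaluated from the quoted parameters, and a QuTiP Lindblad simulation of an effective two-level H-gate Hamiltonian is set up. The placement of the detuning term, the linear phase winding ϕ(t) = ϕ(0) + ηt, and the sign conventions are assumptions consistent with the text, not a reproduction of the full model.

```python
# Cross-check of quoted parameters and a minimal Lindblad sketch of the H gate.
import numpy as np
from scipy.special import j1  # Bessel function J_1
import qutip as qt

g12 = 2 * np.pi * 14.5e6             # capacitive coupling (rad/s), from the text
Gamma2 = 1.5                         # modulation index, from the text
Omega = 2 * g12 * j1(Gamma2)         # effective coupling 2 g12 J1(Gamma2)
print(f"Omega/2pi = {Omega / (2 * np.pi) / 1e6:.2f} MHz")  # ~16.18 MHz

tau_H = np.pi / (np.sqrt(2) * Omega)                       # TOC H-gate time
print(f"tau_H = {tau_H * 1e9:.1f} ns")                     # ~21.9 ns

delta_H = 2 * np.pi * 29.58e6        # quoted H-gate detuning (rad/s)
eta = Omega + delta_H                # from eta_H - delta_H = Omega_H
phi0 = np.pi                         # quoted phi_H(0)

# Assumed form: H(t) = delta/2 sz + Omega/2 [cos(phi) sx + sin(phi) sy],
# with phi(t) = phi0 + eta t.
H = [[qt.sigmaz(), lambda t, args=None: delta_H / 2],
     [qt.sigmax(), lambda t, args=None: Omega / 2 * np.cos(phi0 + eta * t)],
     [qt.sigmay(), lambda t, args=None: Omega / 2 * np.sin(phi0 + eta * t)]]

r = 2 * np.pi * 4e3                  # decay = dephasing rate, 2pi x 4 kHz
c_ops = [np.sqrt(r) * qt.sigmam(), np.sqrt(r) * qt.sigmaz()]

rho0 = qt.ket2dm(qt.basis(2, 0))
tlist = np.linspace(0.0, tau_H, 201)
result = qt.mesolve(H, rho0, tlist, c_ops=c_ops)
print("final populations:", np.real(result.states[-1].diag()))
```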
Next, to test the gate robustness of our scheme, we consider the crosstalk-induced qubit-frequency drift error of the two transmon qubits T_1 and T_2, which is the main error source in the superconducting qubit lattice and takes the form ω_1,β = ω_1 + βΩ and ω_2,β = ω_2 − βΩ. In the interaction picture, the interaction Hamiltonian with error can be expressed accordingly. As shown in Fig. 3(a), we find that, under the effect of qubit frequency drift, our scheme exhibits better resistance than the single-loop (S-L) scheme [34].
B. Two-logical-qubit gates via TOC
We next consider the implementation of the controlled-phase (CP) gate, which is an important element of the universal quantum gate set. As shown in Fig. 1(a), we consider a two-logical-qubit unit with two pairs of transmon qubits, T_1 and T_2, and T_3 and T_4. In addition, an auxiliary state |a⟩ = |0200⟩ is needed to assist the implementation of the CP gate. We consider the interaction between the two adjacent physical qubits T_2 and T_4. Similar to the single-logical-qubit case, the frequency of the T_2 qubit needs to be modulated as ω_2 = ω_20 + ϵ_2 cos(ν_2 t + ϕ_2) to achieve tunable coupling between qubits T_2 and T_4.
In order to properly evaluate the performance of the CP gate, with the initial state |ψ_in⟩ = (|10⟩_L + |11⟩_L)/√2, the effect of the frequency difference ∆_24 and the coupling strength g_24 on the gate fidelity is shown in Fig. 3(c). When the parameters are set as ∆_24 = 2π × 600 MHz, g_24 = 2π × 7 MHz, α_2 = 2π × 210 MHz, and α_4 = 2π × 230 MHz, the fidelity of the CP gate can reach 99.88%, approaching 0.1% gate infidelity. Actually, the leakage associated with the two adjacent qubits T_1 and T_3 should be considered as well. When we set ∆_12 = ∆_34 = 2π × 900 MHz, α_1 = 2π × 200 MHz, and α_3 = 2π × 220 MHz, the fidelity of the CP gate can reach 99.72%. The state evolution process is shown in Fig. 3(d). In this case, N = 4 is set in the master equation, Eq. (24), and the rates of decay and dephasing for each transmon qubit are set as r = r_k^− = r_k^z = 2π × 4 kHz.
IV. CONCLUSION
In conclusion, we propose a protocol for constructing universal quantum gates in a single step via TOC combined with DFS encoding, and we suggest an implementation on superconducting circuits consisting of transmon qubits. For the S, T, and CP gates, by adjusting the detuning, the gate operations can be completed in an extremely short time, which leads to universal quantum gates approaching 0.1% gate infidelity. Thus, our scheme provides a promising route towards the practical realization of fast quantum gates.
FIG. 1. Illustration of our scheme. (a) A scalable 2D square lattice consisting of transmon qubits, where adjacent qubits are capacitively coupled; two physical qubits of the same color are encoded as a DFS logical qubit. (b) The energy levels of two adjacent coupled qubits, Ti and Tj, where different excitation subspaces can be used to implement different quantum gates. (c) Illustration of the evolution path of the TOC-based scheme (red line) on the Bloch sphere, where χ is the angle between the direction of the auxiliary basis vector and the vertical axis, and ξ(τ) − ξ(0) is the horizontal angle shift of the auxiliary basis vector at a specific time τ.
FIG. 2. The gate fidelity as a function of the qubit frequency difference ∆12 and the coupling strength g12. The numerical results for the H, S, and T gates are shown in panels (a), (c), and (e), respectively. The dynamics of the state populations and the fidelities of the H, S, and T gates are shown in panels (b), (d), and (f), respectively. FG is the gate fidelity; P0 and P1 are the populations of the logical states |0⟩L and |1⟩L, respectively.
FIG. 3. (a) Comparative results for gate robustness against frequency drift error for TOC-based (solid lines) and S-L-based gates (dashed lines). (b) The operation time τ2 in units of 1/Ω with respect to the rotation angle γ(τ2) and δ2/Ω. (c) State fidelity as a function of the qubit frequency difference ∆24 and the capacitive coupling strength g24. (d) Considering the adjacent interactions from T1 and T3, state population and fidelity dynamics of the CP-gate process with the parameters presented in the main text, where FS is the state fidelity with the initial state (|10⟩L + |11⟩L)/√2, and P00, P01, P10, P11, and Pa are the populations of |00⟩L, |01⟩L, |10⟩L, |11⟩L, and |a⟩, respectively.
Comparative Study on Effect of Microbial Cultures on Soil Nutrient Status and Growth of Spinach Beet in Polluted and Unpolluted Soils
In the present study, a poly bag experiment was conducted following a completely randomized block design with 12 treatments and three replications: polluted soil with supply of fresh water, unpolluted soil with supply of fresh water, and unpolluted soil with supply of polluted water. The results of the pot culture revealed that nitrogen availability was highest in T3 (140.65 kg ha-1) and lowest in T8 (116.79 kg ha-1) at the harvesting stage. The highest phosphorus uptake was found in treatment T3 (43.34 kg ha-1), and the increase in soil phosphorus content due to the application of inorganic fertilizers in polluted soils increased the nutrient availability in the soil. The highest potassium uptake was observed in T7 (241.26 kg ha-1) in unpolluted soil with application of fresh water. Application of microbial cultures had a significant effect on nitrogen, phosphorus, and potassium uptake in spinach beet in the different pot culture treatments. The treatment T8 (70.03 g plant-1), comprising RDF + FYM + VAM and Pseudomonas, showed the highest values at 30 DAS and 60 DAS in unpolluted soils over the other treatments. Among all the treatments, T8, comprising RDF, FYM, VAM, and Pseudomonas, showed the highest dry weight of leaf per plant at 30 DAS and 60 DAS in unpolluted soils.
Introduction
Soil contamination due to the disposal of industrial and urban wastes generated by human activities has become a major problem and an environmental concern. Controlled and uncontrolled disposal of wastes to agricultural soil is responsible for the migration of contaminants into non-contaminated sites. Because of industrialization and urbanization, not much land is available for urban farming in and around Mumbai. Wherever small plots are available, as open spaces, unused lands, barren lands, etc., they are contaminated by heavy metals which come through industrial waste disposal.
Microorganisms play a unique role in the soil ecosystem because of their contributions to soil fertility. They are responsible for mineralization of nutrients, decomposition, and degradation or transformation of toxic compounds. Biological agents, i.e., yeast, fungi, or bacteria, are used to remove toxic waste from the environment (Vessey, 2003). Hence, microbial bioremediation is the most effective tool to manage the polluted environment and recover contaminated soil.
Vegetables are an important part of the human diet. In addition to being a potential source of important nutrients, vegetables constitute important functional food components by contributing protein, vitamins, iron, and calcium, which have marked health effects. Amongst all the vegetables, the leafy vegetables have a very high protective food value. They are rich in minerals and hence can be called 'mines of minerals'.
Vitamins A and C are present in abundant quantities. Spinach beet is a widely grown leafy vegetable and a rich and cheap source of vitamin A, iron, essential amino acids, ascorbic acid, etc. Besides this, its soft fibrous matter is especially useful in providing necessary roughage in the diet. Vegetables, especially leafy vegetables grown in heavy-metal-contaminated soils, accumulate higher amounts of metals than those grown in uncontaminated soils because they absorb these metals through their leaves. The majority of the land resources were found to be uncultivable, as they were heavily contaminated with heavy metals. If microbial bioremediation is proved to be effective, then the land resources can be preserved with good fertility, so that farmers can benefit by using these remediated soils for cultivation.
The crop-benefiting microbial inoculants, generally called bioinoculants, help in augmenting crop productivity through effective mobilization of major plant nutrients like N, P, and K and other minor nutrients needed by the crop. These beneficial microorganisms are also known to secrete plant growth promoting substances like IAA, GA, cytokinins, and vitamins for the improvement of crop growth, yield, and quality of produce (Ajay Kumar et al., 2014). Arbuscular mycorrhizal fungi (AMF) are widespread throughout the world and found in the majority of terrestrial ecosystems (Smith and Read, 2008). AMF can be integrated into soil management to achieve low-cost sustainable agricultural systems. AMF can reduce soil erosion by bringing together microaggregates of soil particles to form macroaggregates (Miller and Jastrow, 1994). They are obligate symbionts that can improve plant growth by taking up P and helping to absorb N, K, Ca, S, Cu, and Zn (Jiang et al., 2013); produce glomalin (Guo et al., 2012); and increase resistance to pests and soil-borne diseases.
Soil samples and soil characteristics
Soil samples of polluted and unpolluted soils were collected before sowing and analysed for physical (pH, EC, and particle size) and chemical (N, P, K, and OC) parameters and microbiological properties by adopting standard procedures at the Department of Agricultural Microbiology and Bio-energy and the Department of Soil Science and Agricultural Chemistry, College of Agriculture, Rajendranagar, PJTSAU, Hyderabad. Water samples were also analyzed before sowing of the crop in polluted and unpolluted soils (Table 1).
Crop details
The pot culture experiment was conducted at the Department of Agricultural Microbiology and Bioenergy during 2012-13. For this investigation, the leafy vegetable crop spinach beet, variety Pusa Jyothi, was sown in pot experiments following a completely randomized block design with four treatments and three replications.
Microbial cultures (Pseudomonas, VAM) were collected from our laboratory.
Experiment details

Treatments
Twelve treatments, each with three replications, were designed for the poly bag experiment. All three replications were used to record observations on yield and quality parameters of spinach at around 30 and 60 days after sowing.
Preparation of the poly bag mixture
The cleaned poly bags were filled with 8 kg of soil, and this soil was mixed with chemical fertilizer (0.14:0.24:0.37 g poly bag-1 NPK), farmyard manure (78.75 g poly bag-1), and Vesicular Arbuscular Mycorrhizae (100 to 150 g of infected propagules poly bag-1) according to the treatments; the bags were neatly arranged in the net house.
Chemical fertilizers
Phosphorus and potassium @ 0.24 g poly bag-1 P2O5 and 0.37 g poly bag-1 K2O were applied through di-ammonium phosphate (DAP) and muriate of potash (MOP), respectively, as basal application. Nitrogen was applied in the form of urea @ 0.24 g poly bag-1 after germination and at 30 and 60 days after sowing. Farmyard manure was applied @ 78.75 g poly bag-1, mixed with soil according to the treatment requirements. The EC and pH of the FYM were 0.95 dS m-1 and 7.59, respectively, and the Ni, Co, and Cd contents in the FYM were 0.91, 0.20, and 0.01-0.02, respectively.
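As a rough back-of-envelope cross-check of these doses (not from the paper), and assuming the standard fertilizer grades DAP 18-46-0, MOP 0-0-60, and urea 46-0-0, the material weights per poly bag can be back-calculated as follows; the grades are textbook values, not reported by the authors, so the back-calculated urea mass need not match the reported application rate.

```python
# Back-calculate fertilizer material per poly bag from the nutrient targets,
# assuming standard grades: DAP = 18% N / 46% P2O5, MOP = 60% K2O, urea = 46% N.
p2o5_target, k2o_target, n_target = 0.24, 0.37, 0.14  # g per poly bag (from text)

dap = p2o5_target / 0.46          # g DAP needed to supply the P2O5 dose
n_from_dap = dap * 0.18           # N incidentally supplied by that DAP
mop = k2o_target / 0.60           # g MOP needed to supply the K2O dose
urea = max(n_target - n_from_dap, 0) / 0.46  # g urea for the remaining N

print(f"DAP: {dap:.2f} g, MOP: {mop:.2f} g, urea: {urea:.2f} g per poly bag")
```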
Seed sowing and maintenance
The poly bags were sown with the Pusa Jyothi variety of spinach beet at the rate of 20 seeds per poly bag. After germination, thinning was done, and routine care was taken to protect the plants from pests and diseases.
N, P, K content in soil
Available nitrogen (kg ha-1)

Application of microbial cultures had a significant effect on nitrogen uptake, as presented in Table 2.
Nitrogen availability was highest in T3 (140.65 kg ha-1) and lowest in T8 (116.79 kg ha-1) at the harvesting stage, and these values were significantly different from each other. Among the polluted-soil treatments with supply of fresh water, the significantly highest nitrogen content was observed in treatments to which 100% RDF was added.
Available phosphorus (kg ha-1)
Application of microbial cultures had a significant effect on phosphorus uptake, as presented in Table 2.
The increase in soil phosphorus content due to the application of inorganic fertilizers in polluted soil increased the nutrient availability in the soil. The higher nitrogen and phosphorus in polluted soil could be attributed to the contribution of industrial pollutants towards N and P only, and not to K.
Available potassium (kg ha-1)
Application of microbial cultures had a significant effect on potassium uptake, as presented in Table 3. Among all the treatments, the lowest potassium uptake was observed in T4 (195.40 kg ha-1) in polluted soil with application of fresh water, and the highest potassium uptake was observed in T7 (241.26 kg ha-1) in unpolluted soil with application of fresh water.
The treatments applied with 100% RDF (T1, T5, T11) through inorganic fertilizers recorded the significantly highest soil potassium at the harvest stage of the spinach crop. Treatment T3 (231.57 kg ha-1) showed the highest nitrogen and potassium values in polluted soils with application of fresh water, and treatment T7 (241.26 kg ha-1) showed the highest potassium values in unpolluted soils with application of polluted water.
Leaf fresh weight (g plant-1)
The data revealed that leaf fresh weight was significantly affected by the different treatments with RDF and combinations of inorganic and organic manures (FYM and biofertilizer) at 30 DAS and 60 DAS (Table 3).
The highest leaf fresh weight per plant was recorded in treatment T8 (41.63 g plant-1), higher than the rest of the treatments at 30 DAS in unpolluted soils. The lowest leaf fresh weight per plant was seen in T3 (23.02 g plant-1) at 30 DAS in polluted soils. The highest leaf fresh weight was observed in T8 (70.03 g plant-1) and the lowest in T9 (38.12 g plant-1) at 60 DAS in unpolluted soil. It was observed that treatment T8 (70.03 g plant-1), comprising RDF + FYM + VAM and Pseudomonas, showed the highest values at 30 DAS and 60 DAS in unpolluted soils over the other treatments.
Leaf dry weight (g plant-1)
The data revealed that leaf dry weight was significantly influenced by the different treatments. The highest leaf dry weight per plant was observed in T8 (6.62 g plant-1) and the lowest value in T3 (3.16 g plant-1) at 30 DAS (Table 4). The highest leaf dry weight was observed in T8 (4.17 g plant-1) and the lowest in T3 (2.22 g plant-1) at 60 DAS.
Among all the treatments, T8, comprising RDF, FYM, VAM, and Pseudomonas, showed the highest dry weight of leaf per plant at 30 DAS and 60 DAS in unpolluted soils. In the same way, the lowest dry weight of leaf was found in T3 at 30 and 60 DAS in polluted soils. Similar results were reported by Madhvi et al. (2014), who found that increased leaf area and leaf dry weight in spinach were due to the application of chemical fertilizers along with organic manures and biofertilizers.
In conclusion, increased leaf area and leaf dry weight in spinach were attributable to the application of chemical fertilizers along with organic manures and biofertilizers. Recycling of wastes returns elements to the soil; microorganisms abound in the soil and are critical to decomposing organic residues and recycling soil nutrients. Finally, the results showed that unpolluted soil with the supply of fresh water and microbial cultures gave good results compared with polluted soil with supply of fresh water and unpolluted soil with supply of polluted water.
Cell Recognition Using BP Neural Network Edge Computing
This work addresses the efficiency and accuracy of cell recognition in biological experiments. Neural network technology is applied to the study of cell image recognition, and the cell image recognition problem is solved by constructing an image recognition algorithm. First, with an in-depth understanding of computer capabilities, the artificial neural network (ANN), as a basic intelligent algorithm, is widely used to solve the problem of image recognition. Recently, the backpropagation neural network (BPNN) algorithm has developed into a powerful pattern recognition tool and has been widely used in image edge detection. Then, the structural model of the BPNN is introduced in detail. Given the complexity of cell image recognition, an algorithm based on the ANN and BPNN is used to solve this problem. The BPNN algorithm has multiple advantages, such as a simple structure, easy hardware implementation, and good learning performance. Next, an image recognition algorithm based on the BPNN is designed, and the image recognition process is optimized in combination with edge computing technology to improve recognition efficiency. The experimental results show that, compared with traditional image pattern recognition algorithms, the recognition accuracy of the designed algorithm for cell images is higher than 93.12%, so it is better suited to processing cell images. The results show that BPNN edge computing can improve the scientific accuracy of cell recognition results, suggesting that edge computing based on the BPNN has significant practical value for the research and application of cell recognition.
Introduction
Recently, cell recognition has become a research hotspot in the field of image processing and pattern recognition and has extensive application prospects in neurology and biology [1]. The artificial neural network (ANN) plays a very effective role in digital image recognition, mainly because neural networks have good self-learning behavior, good fault tolerance, and efficient classification [2]. Among them, the backpropagation neural network (BPNN) is an algorithmic tool which can acquire knowledge through learning and give the network model the ability to solve problems; it is the most widely used algorithm. Recently, edge computing based on the BPNN has also been favored by multiple researchers. Edge computing is an open platform based on networking, computing, and storage technology deployed close to the data source, providing the nearest possible service to data users. The response end of edge computing is close to the service initiator, so it can provide faster network response services to meet the needs of real-time business processing and the security and privacy of the supply chain. The BPNN is used to study the edge detection of digital images. Based on the analysis of target samples and training samples, its powerful self-learning and self-organization abilities are used to accurately locate image edges [3]. However, the BPNN algorithm still has many shortcomings, and the development of edge computing is still in its infancy. Edge computing based on the BPNN needs to be further improved to achieve the goals of high recognition accuracy, strong antinoise ability, and short running time.
The BPNN algorithm can be divided into forward propagation and error backpropagation. In forward propagation, the sample data are input, pass through the input layer to the hidden layer, and finally reach the output layer. If there is an error between the actual output and the expected output, the algorithm turns to the error backpropagation process. The neurons in each layer continuously adjust their connection weights and thresholds according to the gradient descent method to minimize the error value and make the actual output approach the expected output [4]. The BPNN has the advantages of large-scale parallelism, distributed processing, self-organization, and self-learning. It is widely used in multiple fields and has achieved many outstanding results. Compared with the traditional BPNN algorithm, cell recognition research based on BPNN edge computing is proposed here; that is, edge computing is used in the learning and training process of the BPNN algorithm, and only the Sobel and Laplacian operators are selected to segment the cell image. This improves the detection accuracy of cell image segmentation, plays a good auxiliary role in cell recognition, and makes the final detection more efficient and accurate.
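To make the two-phase update concrete, here is a minimal sketch (not the paper's implementation) of one BPNN training step with a single hidden layer: a forward pass, then error backpropagation with gradient-descent adjustment of the weights and thresholds. The layer sizes, learning rate, and toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative shapes: 64 input features (e.g., an 8x8 patch of a cell image),
# 16 hidden units, 2 output classes (cell / background).
W1, b1 = rng.normal(0, 0.1, (16, 64)), np.zeros(16)
W2, b2 = rng.normal(0, 0.1, (2, 16)), np.zeros(2)
lr = 0.1  # gradient-descent learning rate

def train_step(x, target):
    """One forward pass plus one error-backpropagation update."""
    global W1, b1, W2, b2
    # Forward propagation: input layer -> hidden layer -> output layer
    h = sigmoid(W1 @ x + b1)
    y = sigmoid(W2 @ h + b2)
    # Error backpropagation (squared-error loss; sigmoid' = y(1-y))
    delta_out = (y - target) * y * (1 - y)
    delta_hid = (W2.T @ delta_out) * h * (1 - h)
    # Adjust connection weights and thresholds by gradient descent
    W2 -= lr * np.outer(delta_out, h); b2 -= lr * delta_out
    W1 -= lr * np.outer(delta_hid, x); b1 -= lr * delta_hid
    return 0.5 * np.sum((y - target) ** 2)

x, t = rng.random(64), np.array([1.0, 0.0])
for _ in range(200):
    loss = train_step(x, t)
print(f"loss after 200 steps: {loss:.4f}")
```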
Literature Review
Relevant researchers worldwide have conducted extensive research on the application of neural network technology and edge computing technology. Firdaus and Rhee studied an edge computing method to deal with the increasing system complexity and data storage problems in intelligent transportation systems. Blockchain and smart contracts were proposed to ensure a reliable environment for secure data storage and sharing in the system. Experiments show that the system can defend against system failures with or without symptoms and reach agreement among consensus participants. Moreover, the use of an incentive mechanism can promote the system's continuous operation [5]. For the production and export of aquaculture, Zhang proposed a yield prediction model based on an optimized BPNN. The results show that the root mean squared error of the model is less than that of the traditional BPNN, and its learning efficiency is higher. It shows excellent performance in processing rich historical data and can shorten the modeling time while obtaining good prediction results [6]. Hu et al. studied the application of edge computing technology to the analysis of the organic agricultural supply chain. A trust framework was constructed using the immutability of the blockchain and the edge computing paradigm to reduce cost and improve the operational efficiency of the organic agricultural supply chain. A new consensus mechanism was proposed to manage information flow by classifying stakeholders. The evaluation results show that the designed method can significantly improve performance and reduce cost [7].
As for edge detection, it is widely agreed that traditional edge detection has several disadvantages, such as a small amount of calculation, simple operation, and high noise sensitivity. However, with the rapid development of artificial intelligence and its wide application in digital image processing, research on edge detection based on neural networks has emerged, and here too the BPNN is the most widely used. In this work, the BPNN is optimized by using edge computing technology to improve the operating efficiency of the algorithm. Then, the image feature vectors are trained by the BPNN algorithm. Finally, the trained samples are used for image segmentation and edge detection. The research shows that the resulting image edges are accurate and the segmentation effect is good.
Application of ANN in Image Recognition
The ANN, also known as the connectionist model, is a system network based on modern neuroscience, biology, and psychology. Specifically, the ANN system is constructed by imitating the structure and function of human brain nerve cells; its development is therefore closely related to the understanding of human brain structure. Moreover, ANN research is based on the simulation and simplification of the biological nervous system, reflecting the basic characteristics of the biological nervous system to a certain extent [8,9]. The learning and classification process of the ANN is realized by constantly adjusting the connection weights. The research purpose of neural networks is to explore the mechanism by which the human brain processes, stores, and retrieves information, and then to explore the possibility of applying this principle to various kinds of signal processing. Besides, most neurons in a neural network have mapping relationships and are connected by weight coefficients. Because of this large-scale parallel structure, the neural network is completely different from traditional computers and has a high computing speed. Pattern recognition with neural networks is mainly divided into a learning process and a classification process. In the learning process, massive training samples complete the training of the network; according to specific learning rules, the connection weights are constantly adjusted so that the output tends toward the result of manual recognition. The network can then be considered to have learned the internal rules of the data. The ANN is composed of multiple interconnected neurons, and its basic processing unit is the artificial neuron, which simulates only the basic structure and function of biological neurons. Figure 1 shows the specific structural model [10]. Based on this structure, the algorithm can be applied to image feature recognition.
Application of the BPNN Algorithm in Image Recognition.
The BPNN was first proposed by the American scholar Rumelhart. It is a typical feedforward neural network that combines forward propagation with an error backpropagation algorithm. It is the most widely used neural network because of its many advantages [11]. Neural network algorithms can learn from fewer samples and express complex functions with fewer parameters, which reduces the difficulty of setting and adjusting model parameters.
Therefore, the sample features that can be learned are richer and the simulation is better. Figure 2 shows the structural model of the BPNN, which is composed of an input layer, a hidden layer, and an output layer. Each neuron in the input layer is mainly responsible for receiving external signals and transmitting them to the hidden layer. The hidden layer is the main information processing layer in the neural network and is responsible for information transformation.
According to the information demand, the hidden layer can have a single-layer or a multilayer structure. After the information is processed by the hidden layer, it is transmitted to the output layer. Finally, the output layer outputs the results, completing the forward propagation through the network [12]. When the actual output is inconsistent with the expected output, the network enters the error backpropagation stage. After the error passes through the output layer, the connection weights of each layer are modified according to the descent of the error gradient, and the feedback is propagated back through the hidden layer to the input layer, step by step. This feedback mechanism is the source of the BPNN's name.
A BPNN run is thus a repeated process of forward information transmission and backward error transmission, in which the connection weights at all levels are adjusted in an orderly way to realize the learning and training of the neural network. The weights of the neurons in any layer of the BPNN can be updated from partial derivatives of the error and the learning rate. When using the BPNN for calculation, the above steps are repeated until the error between the output value and the expected value reaches the allowable range or the maximum number of iterations of the neural network is reached. This process continues until the error between the network output and the expected output is minimal or infinitely close to it, and the number of learning iterations can be set in advance as needed. The output of the last training pass is the final result [13,14]. Therefore, applying the BPNN to image recognition can effectively learn the characteristics of the image and better identify the target in the image.
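To make the training loop concrete, the following is a minimal sketch of a three-layer BPNN of the kind described above (sigmoid hidden layer, linear output, gradient descent weight updates). It is an illustrative implementation under these assumptions, not the authors' code; the layer sizes and learning rate are placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BPNN:
    """Minimal three-layer BPNN: sigmoid hidden layer, linear (Purelin) output."""
    def __init__(self, n_in, n_hidden, n_out, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # Small random initial weights keep the sigmoid in its sensitive region.
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.W1 + self.b1)   # hidden-layer activation
        return self.h @ self.W2 + self.b2         # linear output layer

    def train_step(self, x, t):
        y = self.forward(x)
        err_out = y - t                            # output-layer error
        # Backpropagate: delta_hidden = f'(h) * (err_out @ W2^T)
        err_hid = self.h * (1 - self.h) * (err_out @ self.W2.T)
        # Gradient descent updates of weights and thresholds.
        self.W2 -= self.lr * np.outer(self.h, err_out)
        self.b2 -= self.lr * err_out
        self.W1 -= self.lr * np.outer(x, err_hid)
        self.b1 -= self.lr * err_hid
        return 0.5 * np.sum(err_out ** 2)          # squared-error loss
```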
Optimized BPNN Algorithm Based on Edge Computing.
In the Internet era, data are generated all the time. Data storage, encryption, transmission, and other processes may be subject to malicious attacks. Besides, there are huge security risks in databases, edge devices, and cloud storage systems, which are very vulnerable to attacks by hackers and Trojan-horse software. Once the data are destroyed or polluted, serious information loss results [15]. Compared with traditional cloud computing, edge computing has obvious advantages: (1) Low delay and high real-time performance: generally, edge devices are far away from the central server, at the boundary of the whole data system. They are therefore close to the data source and can process the data immediately upon receiving them, before transmitting them to the central server. This reduces the load on the central server, speeds up data transmission, and improves the operational efficiency of the central server. (2) Improved fault tolerance of the system: the data no longer need to be uploaded to the cloud server for centralized processing, so part of the storage space can be released and the running speed of the system improved. When dealing with complex problems, this frees up more space, avoids system locking, improves fault tolerance, and solves the security problem of data storage [16].
One of the basic features used in image processing is the image edge, which mainly exists between objects or between objects and their background. With edges, specific images can be segmented, so edge detection aims at image segmentation. The designed method recognizes two edge types in cell image recognition: step edges and roof edges. The difference between the two is that the gray levels of the pixels on the two sides of a step edge differ significantly, while the gray level across a roof edge first increases and then decreases. Figure 3 shows the comparison.
In addition, edge detection methods use derivatives of the image gray-value function; the most representative are the Sobel operator, based on the first derivative, and the Laplace operator, based on the second derivative [17,18]. These operators approximate the numerical derivative near a pixel by a weighted average of the gray values in a neighborhood of that pixel. When the change of the first derivative cannot express the change of the corresponding gray value in the process of image processing, the second derivative is used; obviously, the second derivative plays a very important role in image processing. The analysis shows that an extreme point of the first derivative, that is, a zero of the second derivative, is a step edge, where the gray values of the pixels on the two sides differ significantly [19]. Therefore, edge detection can be realized by calculating the first and second derivatives of the image pixels. For a two-dimensional image f(x, y), with i and j the basic (unit) vectors, the gradient and the second derivative of the common differential operators are

∇f(x, y) = (∂f/∂x) i + (∂f/∂y) j, (1)

∇²f(x, y) = ∂²f/∂x² + ∂²f/∂y². (2)

According to the gradient calculation equations (1) and (2), the gradient of the two-dimensional image f(x, y) at each pixel can be obtained; the pixel can then be judged to be an edge point according to the obtained pixel gradient [20]. The typical BPNN model includes three levels: input layer, hidden layer, and output layer. If i, j and k index the neurons of the input, hidden and output layers respectively, θ_j is the threshold of hidden neuron j, and w_ij is the weight from input neuron i to hidden neuron j, the calculation equation of the hidden layer is

u_j = f(Σ_i w_ij x_i − θ_j), (3)

and the calculation equation of the output layer is

y_k = f(Σ_j w_jk u_j − θ_k). (4)

The error equation of the BPNN algorithm is

E = (1/2) Σ_k (t_k − y_k)², (5)

with t_k the expected output, and the iterative equation between the input layer and the hidden layer is

w_ij(s + 1) = w_ij(s) + η δ_j x_i, (6)

where s is the number of iterations, η the learning rate, and δ_j = f′(u_j) Σ_{k=1}^{m} δ_k w_jk. The above description shows that the sigmoid function is used as the activation function of the hidden layer, where the learning process is mainly concentrated [21,22]. The main reason for choosing this function for the hidden layer is to avoid its saturation. Since the output layer outputs the edge or edge pixel values, obtained by modifying the connection weights along the gradient, the output result is generally a maximum or minimum value; therefore, to improve the learning efficiency of the network, the Purelin linear function is selected as the activation function of the output layer. The edge computation in the hidden layer of the BPNN is equivalent to modifying the template coefficients of each operator, and the result of the output layer is equivalent to adjusting the weight of each operator; the purpose is that the final output result comes infinitely close to the expected value [23]. The cell types involved mainly include red blood cells, white blood cells, and epithelial cells. In the collected cell image dataset, there are 138 white blood cell images, 169 red blood cell images, and 241 epithelial cell images. The known characteristic signals are then used to train the BPNN algorithm. Table 1 shows the settings of the BPNN parameters used.
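As a concrete illustration of equations (1) and (2), the following sketch applies the standard 3×3 Sobel and Laplacian kernels to a grayscale image by plane convolution. This is a generic reference implementation under the usual kernel definitions, not the authors' code; the thresholds are placeholder values.

```python
import numpy as np
from scipy.ndimage import convolve

# Standard 3x3 Sobel kernels (horizontal and vertical templates).
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T
# Standard 4-neighbour Laplacian kernel (discrete second derivative).
LAPLACE = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def sobel_edges(img, threshold=50.0):
    """Gradient magnitude per equation (1); pixels above threshold are edges."""
    gx = convolve(img.astype(float), SOBEL_X)
    gy = convolve(img.astype(float), SOBEL_Y)
    return np.hypot(gx, gy) > threshold

def laplace_edges(img, threshold=10.0):
    """Zero crossings of the second derivative, equation (2), mark step edges."""
    lap = convolve(img.astype(float), LAPLACE)
    # A sign change between horizontal neighbours is a zero crossing
    # (the result is one column narrower than the input).
    zero_cross = np.sign(lap[:, :-1]) != np.sign(lap[:, 1:])
    return zero_cross & (np.abs(lap[:, :-1]) > threshold)
```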
Effect Analysis of Edge Computing
The Sobel operator contains two groups of 3 × 3 matrices: a horizontal and a vertical template. By plane convolution with the image, approximate values of the horizontal and vertical brightness differences can be obtained, respectively. The Laplace operator is a second-order differential operator, defined as the divergence of the gradient ∇f, which computes the second derivative of the image. Since the image is two-dimensional, there is no need to calculate the horizontal and vertical derivatives separately and then add them. The Sobel and Laplace operators are selected to segment the cell image, enhance the regions of abrupt gray-level change, and suppress the slowly varying gray regions. Both algorithms are used to process epithelial cells; Figure 4 shows the specific effect. Figure 4 shows that each operator has its own characteristics in image segmentation. The Laplace operator has high accuracy in image edge detection; however, when there is more noise in the image, its detection accuracy decreases significantly. Besides, an image edge carries not only position information but also direction information, whereas the Laplace operator can only recover one of the two, the position. The Sobel operator can smooth and suppress noise, but the actual edge is also smoothed, so the detection accuracy declines. On the whole, the Sobel operator is more suitable for oblique edge detection, and the Laplace operator is more suitable for horizontal and vertical edge detection.
Analysis of Network Training Results.
In implementing the BPNN, the training process is the most critical part. The key to cell recognition is to use the extracted cell features, combined with network training, to carry out the final recognition and classification and obtain the recognition results. In order to improve the learning efficiency of the BPNN, edge computing based on the BPNN is used in the hidden-layer training process. The main features are extracted from the collected cell images; Table 2 shows the results. Based on this information, the number of input neurons of the BPNN is 6 and the number of output neurons is 3. The BPNN algorithm is then trained.
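Continuing the sketch above, training on the extracted features could look as follows. The feature matrix and one-hot labels are placeholders standing in for the six features of Table 2 and the three cell classes; the stopping criteria mirror the error-tolerance/maximum-iteration rule described earlier.

```python
# Hypothetical training loop for the 6-feature, 3-class setup (S1 = 15 hidden
# neurons), reusing the BPNN class sketched above.
net = BPNN(n_in=6, n_hidden=15, n_out=3, lr=0.05)

features = np.random.rand(100, 6)                  # placeholder for Table 2 feature vectors
labels = np.eye(3)[np.random.randint(0, 3, 100)]   # placeholder one-hot cell classes

for epoch in range(10000):                         # maximum number of iterations
    loss = sum(net.train_step(x, t) for x, t in zip(features, labels))
    if loss / len(features) < 1e-3:                # allowable error range reached
        break

predicted = np.argmax([net.forward(x) for x in features], axis=1)
```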
In the BPNN, after the signal from the input layer is received, it reaches the output layer through the hidden layer, and the final result is output, in a process of continuous error reduction. After training the BPNN (with S1 = 15 hidden-layer neurons), the output is as shown in Figure 5. Figure 5 shows that, after continuous adjustment, the error of the BPNN output is very close to the expected output; the smaller the error between the actual output and the expected result, the longer the training takes.
Analysis of Cell Recognition Results Based on BPNN Combined with Edge Computing.
The final recognition result of the BPNN is compared with the actual number of cells to verify the ability of the designed algorithm to deal with the problem; Table 3 shows the results. Table 3 shows that the BPNN algorithm can recognize specific cell types and counts. The experimental results show that the actual number of white blood cells is 122 while the network recognizes 120, a difference of only two cells and an accuracy of 98.36%. The actual number of red blood cells is 146 and the network recognizes 138, an accuracy of 94.52%. The actual number of epithelial cells is 189 and the network recognizes 176, an accuracy of 93.12%. In BPNN recognition, therefore, white blood cells are recognized best and epithelial cells worst.
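The reported accuracies follow directly from the counts in Table 3, as this small check illustrates:

```python
# Recognition accuracy = recognized count / actual count, per Table 3.
counts = {"white blood cells": (120, 122),
          "red blood cells": (138, 146),
          "epithelial cells": (176, 189)}
for cell, (recognized, actual) in counts.items():
    print(f"{cell}: {100 * recognized / actual:.2f}%")
# -> 98.36%, 94.52%, 93.12%
```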
Conclusion
Cell recognition is conducted based on BPNN edge computing. This exploration discusses the BPNN structural model and the edge computing process, and finally the practical application of BPNN edge computing in cell recognition, with the following results. In BPNN recognition, the influence of parameter settings cannot be underestimated. The results show that choosing small random initial weights is a good approach, because it keeps the nonlinear activation function in its most sensitive region. In the training process of the BPNN, the nonlinear mapping between the input layer and output layer effectively adjusts and corrects the weights between the different levels. When the optimal connection weights between layers are obtained through the continued iteration of network training, the error between the actual output and the expected output is minimized to obtain the final output value. The Laplace operator has high accuracy in image edge detection, but its noise suppression is relatively poor.
The Sobel operator suppresses noise stably, but its detection accuracy is lower. Besides, the cell recognition results based on BPNN edge computing differ little from the actual cell types and counts, and the recognition results are accurate, indicating that BPNN edge computing works very well. However, the edge computing process based on the BPNN is still insufficient in current research. Further improvements can be made in sample selection or learning strategies; many key and difficult problems in neural-network-based edge computing remain for researchers to solve.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Pulse Excitation of a System Containing a Textile Layer
Abstract In this paper the problem of the vibration of a mass supported on a textile layer and subjected to pulse excitation is analysed. A mathematical model of a system containing an elastic spring and electromagnet is formulated. The numerical simulation shows that the electromagnet may ensure the maintenance of a compressive force acting on the textile layer provided that the period of natural oscillations is shorter than the time duration of the pulse.
Introduction
A broad discussion of the vibration of a mechanical system subjected to pulse excitation can be found in book [1]. The compression characteristics of fibre masses were presented in paper [2]. The relationship between the force and the magnitude of the layer compression was defined in paper [3], in which the reaction of the layer to an impacting body was analysed. The vibration of an elastic system that contains textile layers was studied in work [4], where the problem of the transmission of vibration through a textile layer was examined. The purpose of this paper is to study the response of the textile layer to a pulse excitation. Examples of such loadings of textiles are feed mechanisms in sewing machines [5] and grippers for textiles [6].
Equations of motion
The system considered is shown in Figure 1 (a model of a vibrating mass supported on the textile layer, excited to vibrate by the electromagnet). It consists of a steel bar of mass m, a textile layer k, springs of stiffness s and an electromagnet of inductance L. e is the distance from the centre of the core to the centre of the coil at rest; w denotes the core displacement, x the position of the core centre, and y the displacement of the textile layer support if it is moving.
Summing up all forces acting on the mass, one obtains the equation of motion, Equation (1), where t denotes time and g is the gravity acceleration.
The relationship between the force F_kc acting on the fibrous layer and its deflection w can be found in work [1] in the form of Equation (2). The constants (k, L_1) denote the elastic parameters resulting from the bending of individual fibres under compression of the layer, and (c, H_1) are the damping parameters resulting from squeezing air out of the layer, defined in paper [3]. The dimensionless function sgn( ) extracts the sign of its argument to (-1, 0, +1) for a negative, zero or positive argument, respectively, and it assures that the direction of the resistance force is opposite to the air velocity.
The electromagnetic force can be found from Equation (3). Here, L [4] is the inductance of the electromagnet, approximately found from Equation (4), i the current intensity, R the resistance of the circuit, and u denotes the feed voltage. The inductance L and its derivative dL/dx can be calculated as explained in paper [7] or approximately [4] from Equation (4), in which r_0 denotes the computational radius of the coil, l half of the computational length of the coil, and L_min and L_max denote the minimum and maximum measured inductance of the coil.
Results
The layer compression w + y or w versus time t, the time derivative dw/dt versus the layer compression, and the electromagnetic force F_E and the layer compression force F_kc versus time t are shown in Figure 2 for the motion pulse and in Figure 3 for the voltage pulse.
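Since Equations (1)–(4) are only referenced here, the following is a minimal numerical sketch of how such a pulse-excited system could be integrated, assuming an illustrative nonlinear stiffness/damping law for the layer and a rectangular force pulse; all parameter values and the specific force law are placeholder assumptions, not the values of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, s, g = 0.5, 2.0e3, 9.81           # mass [kg], spring stiffness [N/m], gravity
k_el, c_dm = 5.0e4, 20.0             # illustrative layer elastic/damping constants

def F_kc(w, dw):
    """Placeholder nonlinear layer force: hardening spring + air-damping term."""
    return k_el * w * (1 + 10 * abs(w)) + c_dm * np.sign(dw) * dw**2

def F_pulse(t, amplitude=30.0, duration=0.01):
    """Rectangular force pulse standing in for the electromagnet force F_E."""
    return amplitude if t < duration else 0.0

def rhs(t, state):
    w, dw = state
    # Newton's second law for the mass on the layer, spring and pulse force.
    ddw = (m * g + F_pulse(t) - F_kc(w, dw) - s * w) / m
    return [dw, ddw]

sol = solve_ivp(rhs, (0.0, 0.1), [0.0, 0.0], max_step=1e-4)
w, dw = sol.y                         # layer compression and its time derivative
```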
Conclusions
From Figures 2 and 3 it can be concluded that:
1. The electromagnet ensures that the compressive force acting on the layer is maintained.
2. In order to maintain the pulse-type response, the frequency of natural vibration of the system should be higher than that of the excitation.
3. The rectangular pulse excitation results in a decreasing oscillatory force acting on the layer.
Black Holes in Type IIA String on Calabi-Yau Threefolds with Affine ADE Geometries and q-Deformed 2d Quiver Gauge Theories
Motivated by studies on 4d black holes and q-deformed 2d Yang Mills theory, and borrowing ideas from compact geometry of the blowing up of affine ADE singularities, we build a class of local Calabi-Yau threefolds (CY^{3}) extending the local 2-torus model \mathcal{O}(m)\oplus \mathcal{O}(-m)\to T^{2} considered in hep-th/0406058 to test the OSV conjecture. We first study toric realizations of T^{2} and then build a toric representation of X_{3} using intersections of local Calabi-Yau threefolds \mathcal{O}(m)\oplus \mathcal{O}(-m-2)\to \mathbb{P}^{1}. We develop the 2d \mathcal{N}=2 linear \sigma-model for this class of toric CY^{3}s. Then we use these local backgrounds to study the partition function of 4d black holes in type IIA string theory and the underlying q-deformed 2d quiver gauge theories. We also make comments on 4d black holes obtained from D-branes wrapping cycles in \mathcal{O}(\mathbf{m})\oplus \mathcal{O}(\mathbf{-m-2})\to \mathcal{B}_{k} with \mathbf{m}=(m_{1},...,m_{k}) a k-dimensional integer vector and \mathcal{B}_{k} a compact complex one-dimensional base consisting of the intersection of k 2-spheres S_{i}^{2} with generic intersection matrix I_{ij}. We give as well the explicit expression of the q-deformed path integral measure of the partition function of the 2d quiver gauge theory in terms of I_{ij}.
Introduction
A few years ago, Ooguri, Strominger and Vafa (OSV) made a conjecture [1] relating the microstate counting of 4d BPS black holes in type II string theory on Calabi-Yau threefolds X_3 to the topological string partition function Z_top on the same manifold. The equivalence between the partition function Z_brane of large N D-branes and that of the associated 4d BPS black hole Z_BH leads to the correspondence Z_brane = |Z_top|^2 to all orders in the 1/N expansion. The OSV conjecture has brought important developments on this link: it provides the non-perturbative completion of the topological string theory [2]-[14] and gives a way to compute the corrections to the 4d N = 2 Bekenstein-Hawking entropy [10,15]. The OSV relation has been extended in [7,16] to open topological strings, which capture BPS state data on D-branes wrapped on Lagrangian submanifolds of the Calabi-Yau 3-folds.
Evidence for the OSV proposal has been obtained by using local Calabi-Yau threefolds and some known results on 2d U(N) Yang-Mills theory [17]. It was first tested in [2] by considering configurations of D-branes wrapping cycles of O(m) ⊕ O(−m) → T^2 for m a positive definite integer. It was then checked in [18] by using wrapped D-branes in a rank 2 vector bundle given by a non-trivial fibration over a genus g Riemann surface, O(2g + m − 2) ⊕ O(−m) → Σ_g. It has been shown in these studies that the BPS black hole partition function localizes onto field configurations which are invariant under the U(1) actions on the fibers. In this way, the 4d gauge theory reduces effectively to a q-deformed Yang-Mills theory on Σ_g. These works have recently been enlarged in [19] by considering local Calabi-Yau manifolds with torus symmetries such as local P^2 = O(−3) → P^2, local F_0 = O(−2,−2) → P^1_B × P^1_F and ALE × C. In the last example, the A_k type ALE space is given by gluing together (k + 1) copies of C^2, viewed as a real two-dimensional base fibered by a torus T^2. Other related works have been developed in [20,21,22,23,24,25,26,27,28]. M-theory and AdS_3/CFT_2 interpretations of the OSV formula have also been studied in [29].
In this paper we contribute to the program of testing the OSV conjecture for massive 4d black holes on toric Calabi-Yau threefolds, in connection with q-deformed quiver gauge theories in two dimensions. This study has been motivated by the search for 2d quiver gauge theory extensions of the q-deformed 2d Yang-Mills results and beyond. Recall that affine ADE geometries are known to be described by elliptic fibrations over C^2 and lead to N = 2 conformal quiver gauge theories in 4d space-time with gauge groups G = Π_i U(s_i M), where the positive integers s_i are the Dynkin weights of affine Kac-Moody algebras. Borrowing the method of the above quiver gauge theories [30,31,32] and using the results of [2], we engineer a new class of local Calabi-Yau threefolds that further enlarges the class of CY3s used before and that agrees with the OSV conjecture.
Our study moreover gives an explicit toric representation of O(m) ⊕ O(−m) → T^2, especially if one recalls that T^2, viewed as S^1 × S^1, does not have, to our knowledge, a simple or unique toric realization. As we know, the real skeleton base of toric diagrams representing toric manifolds requires at least one 2-sphere S^2, which is not the case for the simplest S^1 × S^1 geometry. In the analysis developed in this study, the 2-torus is realized by using special linear combinations [∆_{n+1}] of intersecting 2-spheres with the same homology as a 2-torus. The positive integer n ≥ 1 refers to the arbitrariness in the number of S^2's one can use to get the elliptic curve class T^2. Among our main results, we mention that the 2d quiver gauge theories associated with BPS black holes in type IIA string theory on the local CY^3s we have considered are classified by the "sign" of the intersection matrix I_ik of the real 2-spheres S^2_i forming the compact base, I_ik = S^2_i · S^2_k, i, k = 0, ..., n.
According to whether Σ_k I_ik u_k > 0, Σ_k I_ik u_k = 0 or Σ_k I_ik u_k < 0 for some positive integer vector (u_k), we distinguish three kinds of local models. In the second case, Σ_k I_ik u_k = 0, the u_k's are just the Dynkin weights of affine Kac-Moody algebras and the corresponding 2d quiver gauge theory is non-deformed, in agreement with the result of [2]. The above relation then corresponds to the case where the genus g Riemann surface is a 2-torus, i.e. 2g − 2 = 0 ↔ g = 1.
For the two other cases, the gauge theory is q-deformed and recovers, as a particular case, the study of [19] dealing with ALE spaces. On the other hand, this study will be done in the type IIA string theory set-up, so we shall also use our construction to complete partial results in the literature on the field theoretic realization, via the 2d N = (2,2) linear sigma model, of affine ADE geometries, with special focus on the A_n case. To our knowledge, this supersymmetric 2d field realization of local Calabi-Yau threefolds has not been considered before.
The organization of the paper is as follows. In section 2, we begin by recalling general features of toric graphs. Then we study toric realizations of local T^2 using the techniques of blowing up affine ADE geometries. We also study the field theoretic realization of the non-trivial fibrations of local T^2, which corresponds to implementing the framing property [38]. In section 3, we develop the N = 2 supersymmetric gauged linear sigma model describing the local torus geometry. This study gives an explicit field theoretic realization of geometric objects such as the surface divisors of the local 2-torus and their edge boundaries in terms of field equations of motion and vevs. In section 4, we construct the 4d BPS black holes in type IIA string theory by considering brane configurations using D0-D2-D4-branes on the non-compact 4-cycles. We show, amongst others, that the gauge theory of the D-branes, which is dual to topological strings on the Calabi-Yau threefold, localizes to a "q-deformed" 2d quiver gauge theory on the compact part of the affine ADE geometry, and we test the OSV conjecture. More precisely, we show that the usual power (2g − 2) of the weight of the deformed path integral measure for O(2g + m − 2) ⊕ O(−m) → Σ_g gets replaced, in the case of 2d quiver gauge theories, by the intersection matrix I_ij. There, we show that the property (2g − 2) = 0 for g = 1 corresponds to the identity Σ_j I_ij s_j = 0 of affine Kac-Moody algebras. Motivated by this link, we study 4d black holes based on D-branes wrapping cycles in O(m) ⊕ O(−m − 2) → B_k with m = (m_1, ..., m_k) an integer vector and B_k a complex one-dimensional base consisting of the intersection of k 2-spheres S^2_i with generic intersection matrix I_ij. In section 5, we give our conclusion and outlook.
2 Toric realization of local T^2
In this section, we build toric representations of the class of local 2-torus geometries O(m) ⊕ O(−m) → T^2 by developing the idea outlined in the introduction. There, it was observed that although, strictly speaking, T^2 = S^1 × S^1 is not a toric manifold (its base is reduced to two points), it may nevertheless be realized by gluing several 2-spheres in very special ways. Before showing how this can be implemented in the above local CY3, recall that the study of local threefold geometry is important from several points of view. It was used in [2] to test the OSV conjecture [1] and was behind the study of several generalizations. The novelty brought by this class of local CY3s stems also from the use of non-trivial fibers. These non-trivial fibers, which were motivated by implementing twisting by framing [38], turn out to play a crucial role in the study of 4d BPS black holes from type IIA string theory compactification. Generally, the class of local CY3s eq(2.2), which will be considered later (section 4), is mainly characterized by two integers, m and the genus g. Of particular interest is the complex two-dimensional divisor D_(m,g): this local complex surface has a compact curve Σ_g with a definite intersection number. In the case where Σ_g is a 2-torus (g = 1), the above two-integer series of local CY^3 reduces to the one-integer threefold series X^(m,−m,0)_3. The previous non-compact real 4-cycles are then given by [2] D, and their intersection number with the 2-torus class T^2 is [D_(m,g)] · T^2 = m.
Toric realization
Our main objectives in this subsection are the two following: (1) Build explicit toric realizations of the local 2-torus by using particular realizations of the real 2-cycle T^2. These realizations, which will be used later on, are motivated by results on the blowing up of affine ADE singularities of ALE spaces and the geometric engineering of 4d N = 2 super QFTs [30,31,32]. This gives a powerful tool for the explicit study of the special features of the local 2-torus and allows more insight into the building of new classes of local Calabi-Yau threefolds for testing the OSV conjecture.
(2) Use the result of the analysis of point (1) to complete partial results in the literature on type IIA geometry with affine ADE singularities. More precisely, we construct the 2d N = 2 supersymmetric gauged linear sigma model giving the field realization of local 2-tori. This construction, developed further in section 3, will be used for the two following purposes: (a) Work out explicitly the results of (q-deformed) 2d YM theories on T^2 and give their extensions to 2d quiver gauge theories on the elliptic curve realized as a linear combination of intersecting 2-spheres. (b) Study partition function properties of 4d BPS black holes along the lines of [29] and subsequent studies [19,25,26,27,28] to test the OSV conjecture.
Generalities on toric graphs
In building toric realizations of local CY^3s [33,34,35,36,37], one encounters a few basic objects that do almost the complete job, in striking analogy with the work done by Feynman graphs in perturbative QFTs. In particular, one has: (i) "Propagators", given by the toric graph of the real 2-sphere S^2. It corresponds to the two-point free field Green function in the language of quantum field theory (QFT). Recall that the real 2-sphere S^2 is given by the compactification of the complex line C. The latter can be realized (in polar coordinates) as the half line R^+ with fiber S^1 that shrinks at the origin. The compactification of C, i.e. P^1, the complex one-dimensional projective space, is obtained by restricting R^+ to a finite straight line (a segment), which is interpreted as a propagator in the language of QFT Feynman graphs [38]. Likewise, we distinguish here two situations, shown in the figure below. With these propagators, one can already build complex one- and two-dimensional toric manifolds, as shown in the figure below. To construct CY^3s, we need 3-vertices, which we discuss next. With these objects one can build other local CY3s. Using fat propagators and vertices we can also get a picture of their internal topology. One may also describe the even-dimensional homology cycles. Real 2-cycles C_i of local CY3s are represented by linear combinations of segments, and real 4-cycles D_i (divisors of CY^3s) by 2d polygons: a triangle for P^2, a rectangle for P^1_fiber × P^1_base, and so on. Compact 4-cycles then have finite size. The power of the toric quiver realization of threefolds comes also from its simplicity, due to the fact that the full structure of toric CY^3 quiver diagrams is basically captured by the lines (2-cycles), since boundaries of divisors (4-cycles) are given by taking cross products of pairs of straight line generators.
Note that, although P^1 is not a Calabi-Yau submanifold, since its first Chern class is c_1(P^1) = 2, this one-dimensional complex projective space, together with the vertex, are the basic objects in drawing the 2d graphs of toric manifolds. Note also that the tori S^1, T^2 and T^3 of CY^3 appear in this construction as fibers and play a fundamental role in the study of topological string theory amplitudes [38,39,40]. It has been shown in [2] that the one-integer series of spaces X^(m,−m,0)_3 = O(m) ⊕ O(−m) → T^2 is a toric local Calabi-Yau threefold used to check the OSV conjecture. From the above study of toric graphs it follows that, a priori, one should be able to draw its corresponding toric quiver diagram. However, unlike the 2-sphere, the usual 2-torus S^1 × S^1 has no simple toric graph realization. The question is then: what kind of 2-tori T^2 are involved in X^(m,−m,0)_3?
Here below, we address this problem by using the graphic method of toric geometry. In particular, we develop a way to build toric graphs representing classes of T^2. This will be done by realizing T^2 in terms of intersecting 2-spheres S^2_i, expressing T^2 as a "sum" of intersecting 2-spheres, and thinking about the corresponding threefolds and their even real homology cycles. Here it is interesting to note the two following points: (a) This construction is important since, once we have the toric quivers, we can use them for different purposes. For instance, we can use the toric graphs in the topological vertex method of [38,40] to compute explicitly the partition functions of the topological string on X^(m,−p,2)_3. Note that X^(m,−p,2)_3 is not exactly the space X^(m,−m,0)_3 considered in [2]; it is more general. These manifolds have different Kahler moduli spaces, and their U(1) × U(1) isometry groups are realized differently. To fix the idea, think about the homogeneity group of the compact geometry ∆_{n+1} of eq(2.8). To proceed, we shall deal separately with the base ∆_{n+1} and the two line fibers of X. We first look for toric quivers describing the ∆_{n+1} class and O(±k_0, ..., ±k_n) independently. Then we use the 3-vertex of C^3 and the Calabi-Yau condition to glue the various pieces. At the end, we get the right real 3-dimensional toric graph of our local CY^3s and, by fattening, the topology of X. This approach is interesting since one can completely control the engineering of the toric quivers of local Calabi-Yau threefolds. Moreover, seeing that the O(±k_0, ..., ±k_n) are line bundles, their toric graphs are mainly the toric graphs of O(±m), which are locally given by C. In what follows, we show that these toric graphs and their fattenings constitute in fact particular topologies among infinitely many possible graphs.
(a) Solving the constraint equation of 2-torus homology: As noted before, the question of drawing a toric graph for T^2 = S^1 × S^1 seems to make no sense at first sight. This is because the basic (irreducible) real 2-cycle in toric geometry is the 2-sphere, whose toric graph is a straight line segment with length given by the size of the 2-sphere (Kahler modulus r). The 2-torus has zero self-intersection and a priori has no simple or unique toric diagram. Tori T^n are, generally speaking, associated with U^n(1) phases of complex variables. For instance, on the complex line C with local coordinate z, the unit circle S^1 is given by |z| = 1 and the U(1) symmetry acts as z → e^{iθ}z. This circle and the associated U(1) symmetry are exhibited when fattening toric graphs, as shown in the previous figures.
To build a toric representation of T^2, now viewed as ∆_{n+1}, we use intersecting 2-spheres in particular combinations,

∆_{n+1} = Σ_{i=0}^{n} ǫ_i C_i,

where the positive integers ǫ_i are obtained by solving eq(2.13), the zero self-intersection condition of the 2-torus class. Denoting by I_ij the intersection matrix of the 2-cycles C_i, the condition to fulfil eq(2.13) is

Σ_j I_ij ǫ_j = 0, i = 0, ..., n.

A solution of this constraint relation is given by taking I_ij as minus the generalized Cartan matrix of an affine Kac-Moody algebra g. In this case, the positive integers ǫ_j are interpreted as the Dynkin weights, and the topology of the 2-torus is the same as that of the affine Dynkin diagrams; the simplest cases are the simply laced affine ADE diagrams reported below. Therefore, there are infinitely many toric quivers realizing T^2 in terms of intersecting 2-spheres. But to fix the ideas, we will mainly focus on the series based on the affine A_n Kac-Moody algebras. In this case the elliptic curve is

∆_{n+1} = Σ_{i=0}^{n} C_i.

It involves (n + 1) real 2-cycles with intersection matrix given by minus the affine A_n Cartan matrix: I_ii = −2 and I_ij = 1 for adjacent 2-cycles, with the labels taken modulo n + 1. The curve ∆_{n+1} has the homology of a 2-torus, realized by the intersection of (n + 1) 2-spheres with the topology of the Dynkin diagram of affine A_n. The Kahler parameter r of the curve ∆_{n+1}, defined as r = ∫_{∆_{n+1}} ω with ω the usual real Kahler 2-form, is given by the sum over the Kahler parameters r_i of the 2-cycles C_i making up ∆_{n+1}. We have r_i = ∫_{C_i} ω, and so the Kahler modulus of ∆_{n+1} is given by

r = Σ_{i=0}^{n} r_i.

In this computation we have ignored the volume of the intersection points of the 2-spheres C_i and C_j, since they are isolated points and, in any case, their volumes vanish. To fix the ideas, we shall set r ≥ r_0 ≥ r_1 ≥ ... ≥ r_n ≥ 0 (2.23), and for special computations, in particular when we study the path integral of the partition function of the quiver gauge theory on ∆_{n+1} dual to the 4d black hole (section 4), we will in general sit at the moduli space point where all the r_i are equal. In all these cases, the Kahler moduli r_i are positive. We shall also suppose that we are away from r = 0, which describes the singularity of the curve ∆_{n+1}, where the full non-abelian gauge symmetry of the quiver theory is restored.
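As a quick sanity check of the constraint Σ_j I_ij ǫ_j = 0, the following snippet builds I_ij as minus the affine A_n Cartan matrix and verifies that the Dynkin weight vector ǫ = (1, ..., 1) is annihilated by it, so that [∆_{n+1}]² = ǫᵀ I ǫ = 0, as required for a 2-torus class. This is an illustrative numerical check, not part of the paper.

```python
import numpy as np

def affine_An_intersection(n):
    """I = -(affine A_n Cartan matrix): (n+1)x(n+1), diagonal -2,
    cyclic nearest neighbours +1. Valid for n >= 2 (affine A_1 has a double bond)."""
    size = n + 1
    I = -2 * np.eye(size, dtype=int)
    for i in range(size):
        I[i, (i + 1) % size] = 1
        I[i, (i - 1) % size] = 1
    return I

n = 4
I = affine_An_intersection(n)
eps = np.ones(n + 1, dtype=int)   # Dynkin weights of affine A_n are all 1
assert np.all(I @ eps == 0)       # each row sums to zero: sum_j I_ij eps_j = 0
assert eps @ I @ eps == 0         # zero self-intersection of the class Delta_{n+1}
```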
Using the above realization, one can go ahead and build the 4-cycle and the local Calabi-Yau threefold. Viewed as a whole, the non-compact 4-cycle is [D] = O(−m) → ∆_{n+1}, and splitting m over the individual 2-cycles, it is not difficult to see that [D] can be decomposed accordingly. If we take m a positive integer, then the components of the splitting should be non-negative. Using the realization eq(3.20) and the splitting eq(2.27), the above local CY^3 reads as a gluing of the local pieces over the C_i. The fibers O(±k_0, ..., ±k_n) carry charges under the various U(1) gauge symmetries of the individual 2-spheres S^2_i; the total charge is given by eq(2.27). With these results at hand, we are now in a position to proceed and study the field theoretical representation of the above class of local CY^3s by using the method of the 2d N = 2 supersymmetric gauged linear sigma model.
Supersymmetric field model
Here we develop the study of the type IIA geometry of X^(m,−p,2)_3. To do so, we use known results on the 2d N = 2 supersymmetric gauged linear sigma model formulation and take advantage of our construction to also complete partial results on type IIA geometries based on standard affine models as well as on non-trivial fibrations.
To begin, recall that in the 2d N = 2 supersymmetric sigma model framework, the field equations of motion of the auxiliary fields D_a of the gauge supermultiplets V_a define the type IIA geometry. This method has been used in the literature to deal with K3 surfaces with ADE geometries, but here we extend it to the case of the local geometry X^(m,−p,2)_3. To that purpose, we proceed as follows: (1) Study first the field realization in two special examples. This analysis, given in the present subsection, allows us to set up the procedure; it also gives useful tools and illustrates the method.
(2) Develop, as a next step, the general field theoretical 2d N = 2 supersymmetric gauged linear sigma model of eq(2.31). This is more extensive and will be given in the next section; it is actually one of the results of the present study.
To start, note that for m = 0 this local Calabi-Yau threefold describes just the usual A_1 geometry fibered over the complex line. This variety has been studied extensively in the literature; see [41] for instance. For non-zero m the situation is, as far as we know, new, and its supersymmetric linear sigma model can be obtained by considering a U(1) gauge field V and four chiral superfields φ_i with charge vector q_i = (1, 1, m, −m − 2); the field equation of motion (2.33) of the gauge field leads to the D-term constraint

|φ_1|² + |φ_2|² + m|φ_3|² − (m + 2)|φ_4|² = r.

It has four special divisors φ_i = 0, while the base 2-sphere corresponds to |φ_1|² + |φ_2|² = r at φ_3 = φ_4 = 0. In the case where m is positive definite, this geometry can also be viewed as describing the line bundle O(−m − 2) over the weighted projective space WP^2_(1,1,m). Note that for m = 1 one gets the normal bundle of P^2, and the local Calabi-Yau threefold coincides with O(−3) → P^2 (2.39). Note also that for m = −1, one has the resolved conifold. It is interesting to note here that the resolved conifold can be realized from the normal bundle of P^2 just by sending one of the edges of P^2 to infinity. Note finally that for m = 0, −2, one has the A_1 geometry fibered over the complex line C.
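The Calabi-Yau condition for such a linear sigma model is that the U(1) charges sum to zero; the small check below verifies this for the charge vector (1, 1, m, −m−2) and annotates the special values of m mentioned in the text (an illustrative check, not code from the paper):

```python
def charges(m):
    """U(1) charges of the four chiral superfields of O(m) + O(-m-2) -> P^1."""
    return (1, 1, m, -m - 2)

for m in (-2, -1, 0, 1, 5):
    assert sum(charges(m)) == 0  # Calabi-Yau condition: vanishing sum of charges
# m = 1  -> (1, 1, 1, -3): normal bundle O(-3) -> P^2
# m = -1 -> (1, 1, -1, -1): resolved conifold
# m = 0 or -2 -> A_1 geometry fibered over C
```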
Example
Following the same method, one can also build the type IIA sigma model for the local CY^3 (with p_i = m_i + 2) based on two intersecting 2-spheres S^2_1 and S^2_2. The sigma model involves two U(1) gauge fields V_1, V_2 and five chiral superfields Φ_i. Denoting by X_1, X_2, X_3 the bosonic field components parameterizing the compact 2-cycles and by Y_1, Y_2 the complex variables parameterizing O(−m_1 − 2, −m_2 − 2) and O(m_1, m_2) respectively, the two sigma model equations are given by eqs(2.42). In these equations, one recognizes two SU(2)/U(1) relations describing the 2-spheres S^2_1 and S^2_2. Notice that eqs(2.42) involve five complex (10 real) variables, which are not all free since they are constrained by two real constraint relations (the D_1 and D_2 auxiliary field equations of motion) and the U(1) × U(1) gauge symmetry, where ϑ_i are the two gauge group parameters. At the end one is left with (10 − 2 − 2) degrees of freedom, which can be described by three independent complex variables. Notice also that the spheres S^2_1 and S^2_2 intersect at a point fixed by the gauge symmetry (toric action); up to a gauge transformation, the same is valid for X_1 = r_1 and X_3 = r_2. Indeed, parameterizing X_1 = |X_1|e^{−iϕ} and X_3 = |X_3|e^{−iψ}, one can usually set ϕ = ϑ_1 and ψ = ϑ_2 by using the U(1) × U(1) gauge invariance. Then, setting X_2 = 0 in eqs(2.43), one recovers eq(2.45). Notice finally that this construction generalizes easily to the case of an open chain: the sigma model field equations describing this complex one-dimensional curve read as eqs(2.47), involving (n + 1) complex field variables X_i constrained by n constraint equations.
which we denote also as ∆_{n+1}. In this section, we give the type IIA description of these kinds of local CY^3s. To proceed, we start from the previous open chain C_n and add an extra 2-sphere S^2_0 with the following features, described by eq(3.6), with Kahler modulus r_0; the meaning of the complex variables Z_0 and Z_1 will be specified later. Then we glue S^2_0 and C_n by implementing the constraint relations (3.5). A priori this could be achieved by setting Z_1 = X_1 and Z_0 = X_{n+1}, so that eq(3.6) becomes |X_1|² + |X_{n+1}|² = r_0 (3.7). In this way S^2_0 intersects once the 2-sphere S^2_1, defined by |X_1|² + |X_2|² = r_1, as well as S^2_n, with |X_n|² + |X_{n+1}|² = r_n. But strictly speaking there is still a problem: although the resulting geometry looks like it has the topology of a 2-torus, this construction does not exactly work. The point is that by combining eqs(2.47, 3.7) we cannot obtain the right dimension, since the (n + 1) complex variables {X_i, 1 ≤ i ≤ n + 1} are constrained by (n + 1) complex constraint relations. As mentioned before, we have (n + 1) real relations coming from the field equations of motion (2.47, 3.7) and an equal number following from the U(1)^{n+1} gauge symmetry acting on the fields X_i, i = 1, ..., n + 1, where the ϑ_a's are the U(1) group parameters, with charges q^a_i = (q^a_1, q^a_2, 0, ..., q^a_{n+1}). This dimensionality problem can be solved in different, but a priori equivalent, ways. Let us describe below the key idea behind these solutions.
A natural way to do this is to start from the complex two-dimensional ALE geometry with blown up A_n singularity and make an appropriate dimensional reduction down to one complex dimension. More precisely, one starts from the ALE defining equations (3.10) and adds an extra constraint relation (3.11) reducing the dimensionality by one. There is also an extra U(1) gauge invariance giving charges to (X_n, X_{n+1}, X_0). This picture involves (n + 2) complex variables constrained by (n + 1) relations. The compact geometry is determined from eqs(3.10-3.11) by restricting to the compact divisors X_i = 0. The second way is to use the correspondence between the roots of Lie algebras α_i, which in the present analysis are given in terms of the unit basis vectors of R^{n+1} as α_i = e_i − e_{i+1} (3.14), and the S^2_i 2-sphere homology of the ALE space with blown up singularities. To have the 2-torus, one should consider affine Kac-Moody symmetries and use the correspondence between the imaginary root δ and the elliptic class. Below, we shall develop another way to do this, relying on the following correspondence, where the e_i are as in eqs(3.14). In the case where the {e_i} are orthogonal, i.e. e_i · e_j = 0, there is no Y_ij variable and this is interpreted as just a divisor equation. In this correspondence, the roots α_i are associated with the cotangent bundle of P^1, since by computing α²_i = e²_i − 2e_i · e_{i+1} + e²_{i+1} and using the above correspondence one recovers the corresponding fibered geometry; for Y_{i,i+1} = 0, one recovers the usual 2-sphere. The link between these approaches will be developed in [42]. We start from eq(2.47) and modify it as follows: (i) add an extra complex variable X_0, so that the new system involves the complex variables {X_0, X_1, ..., X_{n+1}}, with the extra variable charged under the U_1(1) abelian gauge symmetry; (ii) add an extra U_0(1) gauge symmetry acting on (X_0, X_1, X_{n+1}) and trivially on the remaining variables; (iii) modify the first relation of eq(2.47) into eq(3.20), whose compact part X_0 = 0 is just the 2-sphere |X_1|² + |X_2|² = r_1 of eq(2.47), leaving the remaining (n − 1) relations unchanged.
(iv) Finally, add moreover |X_{n+1}|² + |X_0|² = r_0. These relations involve (n + 2) complex variables X_i subject to (n + 1) real constraint equations and (n + 1) U(1) symmetries. They describe exactly the elliptic curve ∆_{n+1}. Note that, at the level of eq(3.20), the variable X_0 parameterizes a complex space C, which, in the language of toric graphs, is represented by a half line. The relation (3.22) then describes the compactification of the non-compact complex space C, with variable X_0, to the complex one-dimensional projective space (real 2-sphere).
In sigma model language, this corresponds to having (n + 2) chiral superfields Φ_i, with leading bosonic component fields X_i, charged under (n + 1) Maxwell gauge superfields, with U^{n+1}(1) charges q^a_i = (q^a_0, q^a_1, 0, ..., q^a_{n+1}). Below, we use this construction to build our class of local Calabi-Yau threefolds, using the elliptic curve ∆_{n+1} as the compact part.
Local 2-torus
The local 2-torus geometry is then obtained by adding two fiber variables Y_1 and Y_2, which carry non-trivial charges under the U^{n+1}(1) gauge symmetries.
Brane theory and 4d black holes in type II string
In this section we consider type IIA string compactification on the class of local Calabi-Yau threefolds constructed in the previous sections. Then, we develop a field theoretical method to study 4d large black holes by using the 2d q-deformed quiver gauge theory living on ∆_{n+1}. Large black holes in four-dimensional space-time are generally obtained by using configurations of type II or M-theory branes on cycles of the internal manifolds. In the type IIA framework, an interesting configuration is given by BPS states involving, amongst others, N D4-branes wrapping non-compact divisors of the local CY^3, giving rise to a dual of the topological string. Our construction follows more or less the same method as used in [2]; the difference comes mainly from the structure of the internal manifold X^(m,−m−2,2)_3 and the engineering of the quiver gauge theory living on ∆_{n+1}. We first study the D-brane formulation of the BPS 4d black hole in the framework of type IIA string compactification on local ∆_{n+1}. Then we study the reduction of the N = 4 twisted topological theory on the 4-cycles to a 2d quiver gauge theory, represented by ADE Dynkin diagrams. Under some assumptions, BPS states based on a special D-brane configuration may be interpreted in terms of 4d space-time black holes. This configuration involves D0-D4 and D2-D4 brane bound states but no D6-branes, due to the reality of the string coupling constant g_s. The D0-particles couple to the RR type IIA 1-form A_1, while the D2- and D4-branes couple to the RR 3-form C_3. Their respective charges Q_0, Q_2a and Q^a_4 determine the macroscopic entropy S_BH of the black hole [6,19,15].
Brane theory in X^(m,−m−2,2)_3
The above 4d black hole construction can be made more precise for our present study. Here the local threefold X has generic defining equations in which Z stands either for X_0 or Y_1, as given in eqs(3.26). The dual 2- and 4-cycles determine a basis for the (n + 1) abelian vector fields B_i = B_i(t, r), obtained by integrating the RR 3-form C_3 on the 2-cycles C_i, as shown below. Under these B_i abelian gauge fields, the D2-branes in the class [C] ∈ H_2(X, Z) and the D4-branes in the class [D] ∈ H_4(X, Z) (4.9) carry respectively Q_2i (Q_2i = M_i) electric and Q^i_4 (Q^i_4 = N_i) magnetic charges. We also have the D0-brane charge Q_0 that couples to the extra U(1) vector field originating from the RR 1-form. D6-brane charges are turned off.
Following [2], the indexed degeneracy Ω(Q_0, Q_2i, Q^i_4) of BPS particles in space-time with charges Q_0, Q_2i, Q^i_4 can be computed by counting BPS states in the Yang-Mills theory on the D4-branes. This is given by the supersymmetric path integral of the four-dimensional theory on D in the topological sector of the Vafa-Witten maximally supersymmetric N = 4 theory on D [2,43,19]. Up to an appropriate gauge fixing, this relation can be written, using the chemical potentials ϕ_0 = 4π²/g_s and ϕ_1 = 2πθ/g_s for the D0- and D2-branes respectively, as a sum over charges weighted by the corresponding Boltzmann factors. This relation may be expanded in a series of e^{−g_s} due to the S-duality of the underlying N = 4 theory, which relates strong and weak coupling expansions [19]. Recall that the world-volume gauge theory on the N D4-branes is the N = 4 topological U(N) YM theory on D. Turning on chemical potentials for the D0-branes and D2-branes corresponds to introducing the observables built from Tr F ∧ F and Tr F ∧ ω, where ω is the unit volume form of ∆_{n+1}. The topological theory (4.13) is invariant under turning on a massive deformation which simplifies the theory. By using a further deformation, corresponding to a U(1) rotation of the fiber, the theory localizes onto modes which are invariant under the U(1) and effectively reduces the 4d theory to a gauge theory on ∆_{n+1}.
2d quiver gauge theories on ∆_{n+1}
Note first that, from the four-dimensional space-time view, the N D-branes wrapped on D = O(−p) → C describe a point-particle whose dynamics is governed by 4d N = 2 supergravity coupled to U(N) super-Yang-Mills. On the D4-branes lives a (4+1)-dimensional N = 4 U(N) supersymmetric gauge theory, and in its reduced topological sector one has a 4d N = 4 topological theory twisted by massive deformations.
In our present study, the 2-cycle C is represented by the closed chain ∆_{n+1} with multi-toric actions, and the line O(−p) is a non-trivial fiber capturing charges under these abelian symmetries; it will be denoted as O(−p_0, .., −p_n). As a consequence of the topology of ∆_{n+1}, which is given by (n+1) intersecting 2-spheres, the previous U(N) gauge invariance gets broken down to U(N_0) × ... × U(N_n). On the 4-cycle D of the local Calabi-Yau threefold, the theory is an N = 4 topologically twisted gauge theory; but using the result of [2], this theory can be simplified by integrating out the gauge field configurations on the fiber O(−p) and the fermionic degrees of freedom, ending with a 2d bosonic quiver gauge theory on ∆_{n+1}. This theory has ∏_i U(N_i) as gauge symmetry group and involves: (i) Gauge fields A_i, i = 0, 1, .., n, one for each gauge group factor U(N_i), with field strengths F_i, which we reduce to their diagonal (Cartan) parts. (ii) 2d adjoint scalars Φ_i. These fields are obtained by integration of the 4d gauge field strengths F_i(z, z̄, y, ȳ) on the fiber and, as usual, can be put in the Wilson-line form Φ_i ∼ ∮_{S^1_i} A, where the loop S^1_i can be thought of as a circle at infinity (|y| → ∞) of the non-compact fiber O(−p_i) ∼ C parameterized by the complex variable y. (iii) 2d matter fields Φ_ij in the bi-fundamentals of the quiver gauge group, living on the intersections of the ∆_{n+1} patches with, in general, one leg on S^2_i and the other on S^2_j. In the language of representations of the gauge symmetry U(N_i) × U(N_j), these fields belong to (N_i, N̄_j) and describe the link between the gauge theory factors living on the irreducible 2-cycles making up ∆_{n+1}.
In the language of topological string theory using caps, annuli and the topological vertex [38], these bi-fundamentals can be implemented in the topological partition function through insertions of operators involving sums over representations R_i ⊗ R_{i+1} of the gauge invariance U(N_i) × U(N_{i+1}). This construction has been studied recently in [19] for particular classes of local CY_3 such as O(−3) → P^2 and O(−2, −2) → P^1 × P^1. We will not develop this issue here.² Below we ¹ We have used the Greek letter β to refer to the roots of the gauge group U(N). Positive roots of U(N_i) are denoted by β_i and should not be confused with the simple roots α_i used in the intersection matrix (K_ij = α_i · α_j) of the 2-cycles of the base ∆_{n+1} of the local CY_3. ² As argued in [19], the matter fields localized at the intersection point P_i of the 2-spheres S^2_i and S^2_{i+1} correspond to inserting the operator V = (…), where the integral contour is a small loop around P_i.
shall, however, combine field-theoretical analysis and group-representation methods to deal with the bi-fundamentals.
Derivation of 2d quiver gauge field action
Here we construct the 2d field action S_{∆_{n+1}} describing the localization of the topological gauge theory of the BPS D4-D2-D0-brane configurations on the non-compact divisor [D] = O(−p) → ∆_{n+1} of the local CY_3. This action can be obtained by following the same method as for the case O(−p) → Σ_g. One starts from eq (4.13) describing the gauge theory on the N D4-branes wrapping D, with the D0-D4 and D2-D4 bound states captured by the observables S = (1/2g_s) ∫_D Tr F ∧ F + (θ/g_s) ∫_D Tr F ∧ ω. (4.26) In this equation, the parameters g_s and θ are related to the chemical potentials ϕ^0 and ϕ^1 for the D0- and D2-branes respectively as ϕ^0 = 4π²/g_s and ϕ^1 = 2πθ/g_s. The field F is the 4d U(N) gauge field strength F = dA + A ∧ A. It is a hermitian 2-form with gauge connection A. In the local coordinates {z, z̄, y, ȳ}, with (z, z̄) parameterizing the base ∆_{n+1} and (y, ȳ) the fiber, the 1-form gauge field A reads A = A_z dz + A_y dy + A_z̄ dz̄ + A_ȳ dȳ, with A_µ = A_µ(z, z̄, y, ȳ), µ = z, y, z̄, ȳ.
The gauge field is expanded in the Cartan-Weyl basis {H^a_i, E^{±β}},¹ which plays a crucial role in the computation of Wilson loops. In eq (4.26), ω is the 2-form on the compact cycle ∆_{n+1} on which the D2-branes live; it is normalized as ∫_{∆_{n+1}} ω = 1. On the other hand, using eqs (2.18-2.21), we can put the right-hand side of the above relation³ in the form (4.31), which shows that on ∆_{n+1} = ∑_{i=0}^n C_i the Kähler form splits as ω = ∑_{i=0}^n (…). The next step is to perform the integration over the fiber variables y and ȳ. The topological theory (4.26) localizes to modes which are invariant under the U^{n+1}(1) symmetries and effectively reduces to a gauge theory on the base ∆_{n+1}. Let us give the details by working out these steps explicitly. (i) First, we restrict F to its values in the U^N(1) abelian subalgebra of eq (4.29) in order to put Φ(z, z̄) in the Wilsonian form (…); the same thing can be done for the second term of eq (4.26). The 4d action (4.26) then reduces to the 2d action (4.37). Now using the fact that ∆_{n+1} = ∑_{i=0}^n P^1_i combined with eq (4.31), we see that, depending on the patch of ∆_{n+1} where the Wilson field Φ is sitting, we get either adjoint 2d scalars Φ_i or bi-fundamentals Φ_ij, as shown below. Note that the Φ_i fields are valued in the maximal abelian group U^{N_i}(1). They parameterize the maximal tori T^{N_i} of the Lie group U(N_i), so they should be compact and obey periodicity conditions. This means that the linear expansion should be understood as in eq (4.40), and the 2d field components Φ^a_i are constrained accordingly, leaving U_i invariant. Now substituting eq (4.38) into the relation (4.37), we obtain, after implementing the hermiticity conditions Φ = ½(Φ + Φ*) and F = ½(F + F*), the result (4.42), where we have set F_ji = F*_ij and where we have disregarded the terms Φ_ij F_ij transforming in non-trivial off-diagonal representations. These terms do not preserve the abelian subsymmetry U^{n+1}(1) of the quiver gauge group U(N_0) × ... × U(N_n). These 2d field configurations have a group-theoretical interpretation: they correspond to splitting the adjoint representation of U(N), with N = N_0 + ... + N_n, in terms of representations of U(N_0) × ... × U(N_n), eq (4.43). The terms of the first sum are associated with the Φ_i(z), while the others are associated with the Φ_ij. Obviously, since in the present case only the intersections P^1_i ∩ P^1_{i±1} are non-trivial, there are no bi-fundamentals Φ_ij for j ≠ i ± 1. For the term ∫_{∆_{n+1}} ω Tr(Φ) in (4.37), we get the corresponding expression on ∆_{n+1}. Let us first discuss these configurations separately and then give the general result. ³ We have used the formula (…).
Adjoint 2d scalars
Putting eq (3.22) back into (4.26) and focusing on the patches P^1_i by substituting Φ_diag = ⊕_i Φ_i(z), we get the diagonal part of the topological 2d quiver gauge field action, (…), where we have added the topologically invariant point-like observables TrΦ²_i at points z ∈ P^1_i. Upon integrating out the fermions and the adjoint scalars using Φ_i = −F_i − θ_i p_i and following [2], this topologically twisted theory is equivalent to the bosonic 2d Yang-Mills theory with Yang-Mills gauge couplings g_{YM_i} ≡ g_i and θ_{YM_i} ≡ θ_i given by eq (4.48). Here the r_i's are the Kähler parameters of the 2-spheres constituting ∆_{n+1}. Note that these gauge coupling constants and θ_i's are not completely independent; they are related, amongst others, as (n+1) r/g_s = ∑_{i=0}^n (…). The first equation should be compared with the standard relation 1/g²_{YM} ∼ r/g_s appearing in the geometric engineering of quiver gauge theories.
2d bi-fundamentals
To get the field action describing the contribution of the bi-fundamentals, it is convenient to proceed in steps as follows. Start from the topological field action on the 4-cycle [D_4] and think of F as a field strength valued in the maximal non-abelian gauge group U(N_0 + ... + N_n). Then expand the real field F as F = ∑_i F_i + ∑_{i≠j} F_ij, with F_ji = (F_ij)†. (4.51) The F_i's are the real field strengths of the gauge fields A_i valued in the adjoints of the U(N_i) factors with generators {T^a_i}. The F_ij's are the field strengths of the gauge fields A_ij valued in the Lie algebra associated to the coset U(N_0 + ... + N_n)/[U(N_0) × ... × U(N_n)]. (4.53) Obviously, the group U(N_0 + ... + N_n) is not a full gauge invariance of the N = 4 topological gauge theory, since the gauge field components A_ij acquire non-zero masses m_ij ∼ (r_i − r_j) after the breaking. The next step is to use the same trick as before by integrating partially over the variables of the fiber O(−p_0, .., −p_n); we get (4.54). Now using the expansion (4.51) and the property (…), we can bring eq (4.54) into the reduced form (4.57), where we have added the typical mass deformations (p_i/2g_s) TrΦ²_i and, by analogy, (p_ij/2g_s) Tr(Φ_ij Φ_ji) with some integers p_ij which a priori should be related to the degrees p_i of the line bundle. Integrating out the scalar fields, one ends with eq (4.58), where g²_i and θ_{YM_i} are as in (4.48) and where (…). Note that for the base ∆_{n+1}, realizing the elliptic curve in terms of intersecting 2-spheres, the intersection P^1_i ∩ P^1_j is given by a finite and discrete set of points P_ij of ∆_{n+1}. These points have zero volume, vol(P^1_i ∩ P^1_j) = 0. In the present case, where ∆_{n+1} is taken as ∑_i P^1_i, we have (n+1) intersection points P_{i,i+1}; the non-zero intersection numbers are between neighboring spheres P^1_i and P^1_{i±1}. Implementing this specific data, the last term of eq (4.58) then reduces to a sum of integrals over field densities which diverge as long as |F_{i,i+1}|² ≠ 0. This property is not strange and was in fact expected: it has the behavior of the Dirac delta function one generally uses for implementing insertions. To exhibit this feature, denote by P^1_i ∩ P^1_{i+1} = {P_i} the points where the 2-spheres intersect; then we have (…), where δ(P − P_i) is a Dirac delta function. Combining the above results, one ends with the following field action of the 2d bosonic quiver gauge theory describing the brane configuration on the non-compact 4-cycle [D_4] = O(−p_0, .., −p_n) → ∆_{n+1} of the local CY_3, eq (4.64). In this relation, the coupling constants g²_i and G²_i are expressed in terms of the string coupling g_s, the Kähler moduli of the 2-spheres of the base ∆_{n+1}, and the degrees of the fiber O(−p_0, .., −p_n). The F_i's are the U(N_i) gauge field strengths; the bi-fundamental insertions are needed to glue the spheres. The last term may be rewritten in different forms, for example as a sum over the (n+1) intersection points, and it depends on the P_i's.
Path integral measure in 2d q-deformed quiver gauge theory
Here we want to study the structure of the measure in the path-integral description of the partition function of the quiver gauge field action S_2d, eqs (4.57, 4.64), which we rewrite as (4.66). We will give arguments indicating that the bi-fundamentals contribute as well to the deformation of the path-integral measure, and in a very special manner. More precisely, we give evidence that adjoints and bi-fundamentals altogether deform the measure by the quantity J(Φ) = ∏_{i,j} ∏_{a,b} [Φ^a_i − Φ^b_j]_q^{I_ij}, (4.67) where the Φ^a_i's are as in eq (4.39) and I_ij is the intersection matrix of the 2-spheres of the ∆_{n+1} base; it is equal to minus the generalized Cartan matrix of affine A_n. To begin, recall that the partition function Z_YM(Σ_g) of the topological 2d q-deformed U(M) YM theory on a genus-g Riemann surface Σ_g is given by eq (4.68), where the φ^a's are the diagonal values of the U(M) unitary gauge symmetry. In this relation, ∆_H(φ^a) is the measure factor; it is invariant under the periodic changes (4.41) and can be brought to the form (4.71). Using this relation, we see that, on each 2-sphere S^2_i of ∆_{n+1}, the correction to the path-integral measure is (…), which, by setting q_i = exp(−g_s r_i/r), can be put in the equivalent form (…). The power 2 on the right-hand side of the above relation can be interpreted in terms of the entries of the intersection matrix I_ii = −K_ii of the i-th 2-sphere of ∆_{n+1}. This property is visible in eq (4.68), where the power 2 − 2g (the Euler characteristic) is just the self-intersection of the Riemann surface Σ_g. This relation is very suggestive: it indicates that this feature is a special case of a more general situation in which intersection numbers appear. More precisely, for local Calabi-Yau threefolds with some base B made of 2-cycles C_i with intersection matrix I_ij = [C_i]·[C_j], the deformation of the path-integral measure should take the form J_B(Φ) = ∏_{i,j} ∏_{a,b} [Φ^a_i − Φ^b_j]_q^{I_ij}. (4.74) In our case, the 2d manifold is given by the base ∆_{n+1} of the local Calabi-Yau threefold, whose intersection matrix of 2-cycles is I_ij = −K_ij. So the partition function of the q-deformed 2d quiver gauge theory reads in general as (4.75), where S_Quiver is the action given by eqs (4.64-4.66). Of course, here ∆_{n+1} is an elliptic curve, and so one should have J_B = 1. This condition can be turned around and used as a consistency check of the formula (4.74). Indeed, for the case of the 2-torus T² = S¹ × S¹, we know that T² · T² = 0, (4.76) and so there is no q-deformation, in agreement with eq (4.68). The same property is valid for [∆_{n+1}]. But at this level, one may ask what then is the link between the two realizations T² = S¹ × S¹ and [∆_{n+1}] = ∑_{j=0}^n S^2_j. The answer is that in the second case the role of the condition obeyed by S¹ × S¹ is played by the vanishing property ∑_{j=0}^n K_ij s_j = 0 (4.78) for affine Kac-Moody algebras (with s_j = 1 for affine A_n). Let us check that J_{∆_{n+1}} of eq (4.74) is indeed equal to unity. We will do it in two ways. First consider the simplest case, given by the superconformal model with gauge symmetry as in eq (4.20), and specify the Kähler parameters at the point of moduli space where all the 2-spheres have the same area (r_i = r/(n+1)). In this case the quantity [Φ^a_i − Φ^b_i]_{q_i} is independent of the details of ∆_{n+1}, and so the above formula reduces to (…), which equals unity (J(Φ) = 1) due to the relation ∑_{j=0}^n K_ij = 0.
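The cancellation mechanism can be checked mechanically. The following small numerical sketch (ours, not code from the paper) builds the generalized Cartan matrix of affine A_n for n ≥ 2 and verifies the row-sum identity ∑_j K_ij = 0 that forces J(Φ) = 1:

```python
import numpy as np

def affine_An_cartan(n):
    """Generalized Cartan matrix of affine A_n (valid for n >= 2):
    (n+1)x(n+1), with K_ii = 2 and K_ij = -1 for adjacent nodes of the
    cyclic Dynkin diagram. (Affine A_1 instead has off-diagonal -2.)"""
    K = 2 * np.eye(n + 1, dtype=int)
    for i in range(n + 1):
        K[i, (i + 1) % (n + 1)] = -1
        K[i, (i - 1) % (n + 1)] = -1
    return K

K = affine_An_cartan(4)
# Row sums vanish: sum_j K_ij s_j = 0 with Dynkin weights s_j = 1, so the
# q-deformation factors from adjoints and bi-fundamentals cancel pairwise.
print(K.sum(axis=1))  # -> [0 0 0 0 0]
```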
In the general case, where the gauge group factors are arbitrary and for generic points in moduli space, the identity (4.74) holds as well, for the same reason. For the instructive case n = 2, for instance, we have the explicit product (4.80). As we see, the diagonal terms of the first line of the right-hand side are compensated by the off-diagonal terms. Thus J^{QFT}_{∆_3}(Φ) reduces exactly to unity, and so the 2d quiver gauge theory is not deformed. Nevertheless, one should keep in mind that this is a special case of a general result for 4d black holes obtained from BPS D-branes in type IIA superstring theory moving on the following general local Calabi-Yau threefolds (4.81). Here m = (m_1, ..., m_k) is an integer vector and B_k is a complex one-dimensional base consisting of the intersection of k 2-spheres S^2_i with some intersection matrix I_ij. Using the Vinberg theorem [44,31,45,46], the possible matrices I_ij may be classified into three basic categories. In the language of Kac-Moody algebras, these correspond to: (i) Cartan matrices of finite-dimensional Lie algebras, satisfying ∑_j I_ij u_j > 0 (4.82) for some positive integer vector (u_j). In this case the resulting 2d quiver gauge theory is q-deformed. This theory has also been studied in [19]. (ii) Cartan matrices of affine Kac-Moody algebras, including the simply laced ADE ones, with ∑_j I_ij u_j = 0, (4.83) where now the u_j's are just the Dynkin weights. In this case, the 2d quiver gauge theory is un-deformed, due to the identity ∑_j I_ij s_j = 0, where the s_j's are the Dynkin weights. (iii) Cartan matrices of indefinite Kac-Moody algebras, where the intersection matrix satisfies ∑_j I_ij u_j < 0 (4.84) for some positive integer vector (u_j). Here the 2d quiver gauge theory is q-deformed.
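For symmetric intersection matrices, Vinberg's trichotomy can equivalently be read off from the spectrum: finite type corresponds to a positive definite matrix, affine type to a positive semi-definite matrix with a one-dimensional kernel, and indefinite type to the remaining cases. The classifier below is an illustrative sketch under this assumption, not the paper's method:

```python
import numpy as np

def vinberg_class(I, tol=1e-10):
    """Classify a symmetric generalized Cartan / intersection-type matrix
    into Vinberg's three categories via its eigenvalues."""
    w = np.linalg.eigvalsh(np.asarray(I, dtype=float))
    if np.all(w > tol):
        return "finite type: q-deformed 2d quiver gauge theory"
    if np.all(w > -tol):
        return "affine type: undeformed measure (sum_j I_ij s_j = 0)"
    return "indefinite type: q-deformed"

# Finite A_2 versus affine A_2 (minus the intersection matrix of Delta_3)
print(vinberg_class([[2, -1], [-1, 2]]))                       # finite
print(vinberg_class([[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]))  # affine
```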
Conclusion
In this paper, we have studied 4d black holes in type IIA superstring theory on a particular class of local Calabi-Yau threefolds with compact base made up of intersections of several 2-spheres. This study aims to test the OSV conjecture for the case of stacks of D-brane configurations on CY_3 cycles involving q-deformed 2d quiver gauge theories with gauge symmetry G having more than one U(N_i) gauge group factor. The class of local threefolds we have considered in detail is given by X_3^{(m,−m−2,2)} → ∆_{n+1}, where m is an (n+1)-component integer vector (m_0, ..., m_n). The m_i components capture the non-trivial fibration of the rank-2 line bundles of the local CY_3. They also define the U^{n+1}(1) charges of the corresponding chiral superfields in the supersymmetric gauged linear sigma model field realization. The compact elliptic curve ∆_{n+1} is generally given by (n+1) spheres intersecting according to affine Dynkin diagrams.
This study has been illustrated in the case of the local affine A_n model, but may a priori be extended to the other affine models, especially the simply laced D and E series, and beyond. Black holes in four dimensions are realized by using D-brane configurations in type IIA superstring theory compactified on X_3^{(m,−m−2,2)}. The topologically twisted gauge theory on the D4-branes wrapping 4-cycles in the local CY_3 is shown to reduce to a 2d quiver gauge theory on the base ∆_{n+1}, in agreement with the OSV conjecture. This agreement is ensured by the results on O(2g + m − 2) ⊕ O(−m) → Σ_g obtained in [2,18]. It is remarkable that bi-fundamentals and adjoint scalars contribute to the deformed path-integral measure with opposite powers and compensate each other in the case of affine geometries, as shown in the example (4.80).
In developing this analysis, we have taken the opportunity to complete partial results on the 2d N = 2 supersymmetric gauged linear sigma model realization of the resolution of affine singularities and of the local Calabi-Yau threefolds with non-trivial fibrations. We have also commented on other black hole models testing the OSV conjecture. They concern the class of 4d black holes with D-branes wrapping cycles in local threefolds with complex one-dimensional base manifolds B_k (4.81), classified by the Vinberg theorem [44]. The latter classifies Kac-Moody algebras into three main sets: (i) ordinary finite-dimensional, (ii) affine Kac-Moody, and (iii) indefinite.
Finally, we would like to note that the computation given here can also be done using the topological vertex method. Aspects of this approach have been discussed succinctly in the present study. More details on this powerful method, as well as other features relating 2d quiver gauge theories and topological strings, will be considered elsewhere.
A Study of the Combined Effects of Physical Activity and Air Pollution on Mortality in Elderly Urban Residents: The Danish Diet, Cancer, and Health Cohort
Background: Physical activity reduces, whereas exposure to air pollution increases, the risk of premature mortality. Physical activity amplifies respiratory uptake and deposition of air pollutants in the lung, which may augment acute harmful effects of air pollution during exercise. Objectives: We aimed to examine whether benefits of physical activity on mortality are moderated by long-term exposure to high air pollution levels in an urban setting. Methods: A total of 52,061 subjects (50-65 years of age) from the Danish Diet, Cancer, and Health cohort, living in Aarhus and Copenhagen, reported data on physical activity in 1993-1997 and were followed until 2010. High exposure to air pollution was defined as the upper 25th percentile of modeled nitrogen dioxide (NO2) levels at residential addresses. We associated participation in sports, cycling, gardening, and walking with total and cause-specific mortality by Cox regression, and introduced NO2 as an interaction term. Results: In total, 5,534 subjects died: 2,864 from cancer, 1,285 from cardiovascular disease, 354 from respiratory disease, and 122 from diabetes. Significant inverse associations of participation in sports, cycling, and gardening with total, cardiovascular, and diabetes mortality were not modified by NO2. Reductions in respiratory mortality associated with cycling and gardening were more pronounced among participants with moderate/low NO2 [hazard ratio (HR) = 0.55; 95% CI: 0.42, 0.72 and 0.55; 95% CI: 0.41, 0.73, respectively] than with high NO2 exposure (HR = 0.77; 95% CI: 0.54, 1.11 and HR = 0.81; 95% CI: 0.55, 1.18, p-interaction = 0.09 and 0.02, respectively). Conclusions: In general, exposure to high levels of traffic-related air pollution did not modify associations, indicating beneficial effects of physical activity on mortality. These novel findings require replication in other study populations. Citation: Andersen ZJ, de Nazelle A, Mendez MA, Garcia-Aymerich J, Hertel O, Tjønneland A, Overvad K, Raaschou-Nielsen O, Nieuwenhuijsen MJ. 2015. A study of the combined effects of physical activity and air pollution on mortality in elderly urban residents: the Danish Diet, Cancer, and Health cohort. Environ Health Perspect 123:557-563; http://dx.doi.org/10.1289/ehp.1408698
Introduction
Regular physical activity has many health benefits, including reduced all-cause mortality along with a reduced risk of cardiovascular disease, cancer, and diabetes (Johnsen et al. 2013; Samitz et al. 2011; Schnohr et al. 2006; Woodcock et al. 2011). Exposure to air pollution may adversely affect human health by triggering or exacerbating respiratory and cardiovascular conditions, certain cancers, and possibly diabetes, leading to premature mortality (Beelen et al. 2014a; Hoek et al. 2013; Raaschou-Nielsen et al. 2012, 2013). Declining rates of physical activity (Brownson et al. 2005) have given rise to population-level health initiatives including promotion of active transport in cities, encouraging a shift from car use to cycling and walking (de Nazelle et al. 2011). These initiatives are also highly relevant as a solution to other urban challenges such as traffic congestion, air pollution, and greenhouse-gas emissions in major cities. One of the major challenges to active transport initiatives and other efforts to promote exercise is the trade-off between the health benefits of increased physical activity and potential harms due to amplified exposure to air pollution during outdoor physical activity in urban areas (Rojas-Rueda et al. 2011, 2013; Woodcock et al. 2014). Increased respiratory uptake and deposition of air pollutants in the lung due to higher minute ventilation during physical exercise may amplify harmful effects of air pollution, even in young and healthy individuals (Giles and Koehle 2014; Strak et al. 2010). In controlled, real-life exposure studies, reduced lung function has been reported in association with walking on a busy street in London (McCreanor et al. 2007; Zhang et al. 2009), running near heavy traffic close to a major highway, cycling during rush hour on a heavy-traffic route (Strak et al. 2010), or hiking on high air pollution days (Korrick et al. 1998). Similarly, exposure to air pollution and exercise in a controlled setting was reported to alter markers of vascular impairment, arterial stiffness, and vascular reactivity, to reduce exercise performance (Cutrufello et al. 2012; Lundbäck et al. 2009; Shah et al. 2008), and to alter immune function (Chimenti et al. 2009). These studies documented evidence of acute adverse health effects of short-duration exposures to high levels of air pollution during exercise, which seem to be transient and reversible after exercise, at least in young healthy individuals.
One study examined whether exercise modified associations of acute (same-day) exposure to air pollution with mortality in Hong Kong, and reported that regular exercise may reduce premature death attributable to air pollution in elderly subjects (Wong et al. 2007). A cohort study in children reported that participating in sports was associated with development of asthma in children residing in areas with high ozone, but not in areas with low ozone levels (McConnell et al. 2002). These studies implied a potential interaction between physical activity and air pollution, yet no cohort study in adults has explored it. In a large prospective urban cohort, we studied whether reductions in mortality linked to regular outdoor leisure-time and transport-related physical activity in terms of doing sports, cycling, gardening, and walking (Johnsen et al. 2013) were modified by long-term exposure to high levels of air pollution at residence.
Materials and Methods
Design and study population. This study was based on the Danish Diet, Cancer, and Health cohort, described in detail elsewhere (Tjønneland et al. 2007). In brief, the cohort consists of 57,053 men (48%) and women (52%) born in Denmark, 50-64 years of age, living in Copenhagen or Aarhus, with no previous cancer diagnosis at the time of enrollment (1993-1997). The participants completed an extensive questionnaire on diet, smoking, alcohol consumption, education, occupation, physical activity, history of diseases and medication, and other health-related items, and provided blood samples, blood pressure, and height and weight measurements at enrollment. Relevant Danish ethical committees and data protection agencies approved the study, and written informed consent was provided by all participants.
Mortality definition. Each cohort member was followed up in the Danish Register of Causes of Death (Helweg-Larsen 2011) until 31 December 2009, using a unique personal identification number. On the basis of the underlying cause of death, we defined total mortality as all mortality from natural causes [International Classification of Diseases, 10th Revision (ICD-10) codes A00-R99], cancer mortality (C00-C97), cardiovascular mortality (I00-I99), respiratory mortality (J00-J99), and diabetes mortality (E10-E14). We extracted the date of emigration or disappearance and the addresses of all cohort members from the Central Population Registry (Pedersen 2011) using their personal identification numbers.
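For concreteness, the cause-of-death grouping defined above can be expressed as a small lookup on the 3-character ICD-10 code; the helper below is our illustration, not registry code:

```python
# Endpoint definitions from the text (underlying cause of death, ICD-10)
ENDPOINTS = {
    "total_natural": ("A00", "R99"),
    "cancer": ("C00", "C97"),
    "cardiovascular": ("I00", "I99"),
    "respiratory": ("J00", "J99"),
    "diabetes": ("E10", "E14"),
}

def cause_groups(icd10_code):
    """Return every endpoint whose range contains the 3-character code;
    lexicographic comparison is valid for these letter+digits ranges."""
    code = icd10_code[:3].upper()
    return [name for name, (lo, hi) in ENDPOINTS.items() if lo <= code <= hi]

print(cause_groups("I21.9"))  # -> ['total_natural', 'cardiovascular']
print(cause_groups("E11"))    # -> ['total_natural', 'diabetes']
```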
Physical activity. Physical activity was assessed by a self-administered, interviewer-checked questionnaire in which leisure-time and transport-related (e.g., to and from work, shopping) physical activity was reported as hours per week spent on sports, cycling, gardening, walking, housework (cleaning, shopping), and "do-it-yourself" activities (e.g., house repair). Data were collected separately for winter and summer of the previous year, and the two values were averaged, so that being active implies at least half an hour spent on a specific activity per week. The physical activity questions have been validated in two studies that found high correlations between self-reported physical activity estimates and accelerometer measurements of total metabolic equivalents in 182 subjects (Cust et al. 2008) and combined heart rate and movement sensing measurements in 1,941 subjects (InterAct Consortium et al. 2012). We focused in this study on sports, cycling, and gardening, which were previously associated with lower mortality in the same cohort (Johnsen et al. 2013), and additionally on walking at least half an hour per week, which is relevant as an outdoor physical activity pertinent to exposure to air pollution. A previous analysis of data from the cohort indicated that accounting for the amount of physical activity did not substantially alter associations with mortality when activity was dichotomized as any participation versus none (Johnsen et al. 2013). Therefore, our main analyses focused on the estimated effect of participation (yes/no) in sports, cycling, gardening, and walking on mortality, whereas associations with the amount of cycling (categorized as does not cycle, 0.5-4 hr/week, or > 4 hr/week) were estimated in sensitivity analyses.
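To make the participation and amount definitions concrete, here is a minimal pandas sketch; the winter/summer column names are hypothetical, not the questionnaire's:

```python
import pandas as pd

df = pd.DataFrame({
    "cycling_winter_h": [0.0, 1.0, 3.0],
    "cycling_summer_h": [0.0, 0.0, 6.0],
})

# Average of winter and summer hours; "active" means >= 0.5 hr/week on average
df["cycling_h"] = df[["cycling_winter_h", "cycling_summer_h"]].mean(axis=1)
df["cycling"] = (df["cycling_h"] >= 0.5).astype(int)

# Amount categories used in the sensitivity analyses
df["cycling_cat"] = pd.cut(
    df["cycling_h"], bins=[-0.001, 0.499, 4.0, float("inf")],
    labels=["does not cycle", "0.5-4 hr/week", "> 4 hr/week"])
print(df)
```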
Air pollution exposure. The outdoor concentration of nitrogen dioxide (NO2) was calculated at the residential addresses of each cohort member with the Danish AirGIS dispersion modeling system (http://www.dmu.dk/en/air/models/airgis/). AirGIS is based on a geographical information system (GIS) and provides estimates of traffic-related air pollution with high temporal (1-year averages) and spatial (address-level) resolution. AirGIS is a validated model: high correlation was found between AirGIS-estimated and measured NO2 values (Raaschou-Nielsen et al. 2000). It has been used in a number of studies (Andersen et al. 2012b, 2012c; Raaschou-Nielsen et al. 2011, 2013) and is described in more detail in the Supplemental Material, "AirGIS Human Exposure Modelling System." We used the mean of the annual concentrations of NO2 at the residential addresses of each cohort participant from 1971 until the end of follow-up as a proxy of average exposure to traffic-related air pollution during exercise. We defined an indicator variable of high versus moderate/low NO2 exposure separated by the 75th percentile of the exposure range in the cohort (≥ vs. < 19.0 μg/m³).
Statistical methods. We used Cox proportional hazards regression with age as the underlying time scale to simultaneously estimate associations between mortality and participation in sports, cycling, gardening, and walking, with separate models used to estimate associations of the four activities with total, cancer, cardiovascular, respiratory, and diabetes mortality, respectively. The follow-up started on the date of enrollment into the cohort (1993-1997) and lasted until the date of death, emigration, or 31 December 2009, whichever came first. We fit a crude model adjusted for age (underlying time scale), each of the four domains of physical activity, NO2, sex, and year of enrollment into the cohort. In addition, we fit a fully adjusted model that also included occupational physical activity (sedentary work, standing work, manual work, heavy manual work, or no occupation), smoking status (never, previous, current), lifetime smoking intensity (spline), smoking duration (years), environmental tobacco smoke (indicator of exposure to smoke in the home and/or at work for at least 4 hr/day), alcohol intake (indicator and spline for intensity, grams/day), educational level (< 8, 8-10, or > 10 years of education), fruit and vegetable intake (grams/day), fat intake (grams/day), occupational risk (indicator of a year or longer in an occupation with potential exposure to smoke, particles, fumes, or chemicals: mining, rubber industry, tannery, chemical industry, wood-processing industry, metal processing, foundry, steel-rolling mill, shipyard, glass industry, graphics industry, building industry, truck, bus, or taxi driver, manufacture of asbestos or asbestos cement, asbestos insulation, cement article industry, china and pottery industry, painter, welder, hairdresser, auto mechanic), and mean income in the municipality of residence at enrollment (spline). We checked the proportional hazards assumption for all categorical variables by testing for a non-zero slope in a generalized linear regression of the scaled Schoenfeld residuals on functions of time [estat phtest command in Stata (StataCorp, College Station, TX, USA)]. We detected a violation of the proportional hazards assumption by marital status (single, married, divorced, widow/widower) and therefore stratified the model by this variable. A significance level of 0.05 was used in all analyses. An additional model was fit in which potentially mediating variables [body mass index (BMI; continuous, kilograms per meter squared), self-reported diagnosis of or medication for hypertension and hypercholesterolemia] were added to the full model. Effect modification of associations between the four physical activities (yes/no) and mortality from different causes by exposure to NO2 (high vs. moderate/low) was evaluated by introducing an interaction term into the model, and tested using likelihood ratio tests. Additionally, for cycling, we tested whether there was an interaction between intensity of cycling (> 4 hr/week, 0.5-4 hr/week, does not cycle) and NO2 levels, to examine whether there is a dose-response relationship with different levels of NO2 [very high, ≥ 23.9 μg/m³ (90th percentile of the exposure range); moderate, 15.1-23.9 μg/m³; low, < 15.1 μg/m³ (50th percentile of the exposure range)].
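A minimal sketch of such an interaction analysis in Python with the lifelines package is shown below. All variable names (no2_mean, cycling, died, ...) and the reduced covariate list are hypothetical; the actual analysis used Stata's stcox with age as the underlying time scale, stratification by marital status, and the full adjustment set listed above:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis file: one row per subject with follow-up time,
# a death indicator, activity indicators, and long-term NO2 at residence.
df = pd.read_csv("cohort.csv")

# High exposure = upper quartile of mean NO2 (>= 19.0 ug/m3 in this cohort)
df["no2_high"] = (df["no2_mean"] >= df["no2_mean"].quantile(0.75)).astype(int)

# Product term capturing effect modification of cycling by high NO2
df["cycling_x_no2"] = df["cycling"] * df["no2_high"]

cols = ["followup_years", "died", "cycling", "no2_high", "cycling_x_no2",
        "sex", "smoking_current", "alcohol_gday"]
cph = CoxPHFitter()
cph.fit(df[cols], duration_col="followup_years", event_col="died")
cph.print_summary()  # HRs with 95% CIs; Wald test for the interaction term
```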
We also conducted sensitivity analyses using the 1-year mean of NO2 at the residential address at enrollment (1993-1997, corresponding to the time period for the self-reported physical activity data) as an alternative proxy of exposure to air pollution. Finally, we conducted two sensitivity analyses on total and respiratory mortality: a) with high exposure to air pollution defined as NO2 levels above the 90th percentile (23.9 μg/m³) of the exposure range; and b) in a cohort subset consisting of 13,948 subjects living in inner Copenhagen (municipalities of Copenhagen and Frederiksberg), the most urban part of the cohort, with the highest levels of cycling and air pollution (75th percentile of the NO2 distribution = 24 μg/m³). Results are presented as hazard ratios (HRs) with 95% confidence intervals (CIs), estimated with stcox in Stata 11.2.
Results
Of 57,053 cohort members, 571 were excluded due to cancer diagnosis before baseline, 2 due to uncertain date of cancer diagnosis, 960 for whom an address history was not available for at least 80% of the time between 1971 and recruitment date in the Central Population Registry or their address at baseline could not be geocoded, 948 due to missing air pollution exposure (due to missing traffic counts or other air pollution model input data), and 2,511 due to missing information for a potential confounder or effect modifier, leaving 52,061 cohort members for the study. Excluded subjects did not differ significantly from the rest of the cohort with respect to age, physical activity levels, and education (results not shown).
Mean age at recruitment was 56.6 years (Table 1). Most of the study subjects participated in physical activity: 54.3% participated in sports, 68.0% cycled, 73.5% gardened, and 93.0% walked. Participation in all physical activities was lower among those who died during follow-up than in the entire cohort (Table 1), with the lowest participation among those who died from respiratory disease and diabetes (Table 2). The mean concentration of NO2 at residence was 16.9 ± 5.2 μg/m³ for the cohort and 17.9 ± 5.7 μg/m³ for the subjects who died during follow-up (Table 1).
There was no statistically significant effect modification by NO2 of the inverse associations between any of the four physical activities and mortality, except for gardening and respiratory mortality (p = 0.02; see Supplemental Material, Table S1). Furthermore, sensitivity analyses showed no effect modification of associations between total and respiratory mortality and any physical activity when exposure to NO2 was dichotomized at the 90th percentile (23.9 μg/m³) (see Supplemental Material, Table S2), or in the subset of the cohort living in inner Copenhagen (see Supplemental Material, Table S3).
Discussion
Estimates suggesting that leisure-time participation in sports, cycling, and gardening was associated with lower mortality were not significantly modified by exposure to NO2 in an urban setting, for total, cancer, cardiovascular, and diabetes mortality. Estimated benefits of cycling and gardening on respiratory mortality were moderately attenuated among those with high levels of NO2 exposure compared with moderate or low exposure. Our findings of significant reductions in total natural and cause-specific mortality related to physical activity confirm existing evidence (Samitz et al. 2011; Woodcock et al. 2011), including Danish data (Johnsen et al. 2013; Schnohr et al. 2006). Estimated benefits of cycling, including cycling to and from work and shopping, were weaker than the estimated effects of participating in sports, but were significant and comparable to the limited existing evidence. Benefits of cycling estimated in the present analysis were slightly weaker than those earlier reported for cycling to work on all-cause mortality in another cohort in Denmark (relative risk = 0.72; 95% CI: 0.57, 0.91) (Andersen et al. 2000) and comparable to those of cycling to work in Chinese women on overall mortality [HR = 0.79; 95% CI: 0.61, 1.01 and HR = 0.66; 95% CI: 0.40, 1.07, for 0.1-3.4 and ≥ 3.5 MET (metabolic equivalent)-hr/day, respectively, compared with no cycling] (Matthews et al. 2007). Estimated inverse effects of gardening on mortality were noteworthy because they are similar in magnitude to those of cycling, whereas weak inverse associations were detected between walking and respiratory mortality only.
Adverse effects of chronic exposure to air pollution on total natural and cardiovascular mortality are well supported (Hoek et al. 2013), and positive associations were also evident in this Danish cohort, where air pollution levels are relatively low (Beelen et al. 2014a, 2014b; Raaschou-Nielsen et al. 2012, 2013). Long-term exposure to air pollution was also associated with diabetes mortality in this cohort (Raaschou-Nielsen et al. 2013), but not with respiratory mortality, in agreement with the recent meta-analyses of 16 European cohorts, including a subset of the cohort in the present analyses (Dimakopoulou et al. 2014). On the other hand, associations of air pollution with the incidence of chronic respiratory disease, asthma and COPD (chronic obstructive pulmonary disease), have also been found in this cohort (Andersen et al. 2012a). Our results, that long-term benefits of physical activity on all major types of mortality were not moderated by exposure to high levels of NO2, are novel. This may imply that the acute stress and damage to the cardiovascular system induced by short-term exposure to air pollution during exercise, in terms of vascular impairment, arterial stiffness, and reduced blood flow, as shown in earlier studies (Lundbäck et al. 2009; Shah et al. 2008), are transient and reversible and do not abate the long-term benefits of physical activity on mortality. Our results may furthermore be explained by the short duration of the physical activities, with a mean of 2-3 hr/week for most activities (Table 1); this implies that the extra inhaled dose of air pollution during physical activity, which is a function of increased inhalation and duration, is only a small fraction of the total inhaled dose of air pollution (Rojas-Rueda et al. 2011), and is therefore not sufficient to increase the risk of premature mortality. Our results are furthermore in line with a study finding significantly lower levels of physical activity on days with poor air quality among respiratory disease patients, but not in cardiovascular patients, who do not seem sufficiently bothered by air pollution to change their outdoor physical activity habits (Balluz et al. 2008; Wells et al. 2012). Our study thus may imply that effects of long-term exposure to NO2 and physical activity on overall and cardiovascular mortality are independent of each other, with benefits of outdoor physical activity not being reduced by exposure to NO2.
Inverse associations of cycling and gardening with respiratory mortality were closer to the null among subjects with high NO2 exposure (HR = 0.77; 95% CI: 0.54, 1.11 and HR = 0.81; 95% CI: 0.55, 1.18, respectively) than among those with moderate/low NO2 (HR = 0.55; 95% CI: 0.42, 0.72 and HR = 0.55; 95% CI: 0.41, 0.73, respectively). Only one similar study exists, in a cohort of children, which, consistent with our findings, showed asthma development only in children living in areas with high ozone concentrations, and not in those living in areas with low ozone (McConnell et al. 2002). It is plausible that amplification of lung damage due to greater inhaled doses of air pollution through physical activity in urban areas with high air pollution may moderate the benefits of physical activity, which improves some of the same physiological mechanisms. Earlier studies have shown that hikers with a history of asthma had significantly greater air pollution-related acute reductions in pulmonary function than did asthma-free hikers (Korrick et al. 1998), and that subjects with moderate asthma had greater acute lung function reductions after walking on a busy street in London than did those with mild asthma (McCreanor et al. 2007; Zhang et al. 2009). However, reduced physical activity was observed on days with poor air quality among respiratory disease patients (Balluz et al. 2008; Wells et al. 2012), but not among cardiovascular disease patients, as noted earlier. This suggests an alternative explanation of our findings: the reduced benefit from physical activity in subjects residing in areas with high air pollution may be attributable to abstaining from physical activity on days with high air pollution, and not to enhanced negative effects of greater exposure to air pollution during physical activity. Our findings were weakened by the fact that there was no dose-response relationship in reductions of respiratory mortality related to cycling by number of hours spent cycling and by increasing levels of air pollution (see Supplemental Material, Table S1). Although numbers are small in the interaction analyses, the lack of an effect of duration of cycling may imply that the cyclists themselves differ from the noncyclists, and that, in general, the effects of air pollution are minimal in healthy people. Finally, the significant interactions of cycling and gardening with air pollution observed for respiratory mortality (Table 3) could not be reproduced when considering levels of NO2 above 23.9 μg/m³ (the 90th percentile) as high exposure, or when considering the subset of subjects living in inner Copenhagen, where levels of air pollution and numbers of people cycling are the highest in Denmark (see Supplemental Material, Tables S2 and S3). However, these analyses need to be interpreted with caution because in both sensitivity analyses, exposure levels were also increased in the "low" exposure category, possibly obscuring differences in associations between the lower and higher levels of exposure.
In summary, our findings suggest that outdoor physical activity in areas with high air pollution may moderate, but not reverse, the benefits of physical activity on respiratory mortality: adverse effects of the additional pollutants inhaled over time do not outweigh the benefits of physical activity. Our results, however, need to be reproduced, because of both the small number of people dying from respiratory causes and the sensitivity of these results to the definition of high NO2 exposure and the choice of subcohort. Furthermore, because of the relatively small number of people dying from respiratory causes (6% in this cohort), and assuming that our results are true, the reductions in health benefits related to physical activity in areas with high air pollution are rather marginal.
Strengths of our study include a large prospective cohort, with well-defined and validated information on physical activity and air pollution exposure, both of which have been linked to mortality (Johnsen et al. 2013; Raaschou-Nielsen et al. 2012, 2013). We furthermore benefited from state-of-the-art information on individual exposure to NO2 with high spatial (address-specific) and temporal (annual mean) resolution, assessed over 35 years. Another strength of this cohort is the very high prevalence of cycling (68%), both leisure and utilitarian (e.g., to work, shopping); this provided the data for evaluation of an interaction of air pollution with this type of physical activity, in contrast to existing studies on cycling to work (Andersen et al. 2000; Matthews et al. 2007; Rojas-Rueda et al. 2011). Furthermore, this is the first cohort study to evaluate individual-level benefits of physical activity in an urban cohort while also considering individual exposure to air pollution. A study of short-term effects of air pollution on mortality in Hong Kong, which has several-fold higher levels of air pollution than Copenhagen, found that those who exercised regularly had reduced susceptibility to acute effects of air pollution and lower mortality than those who did not exercise (Wong et al. 2007). Our study provides a novel approach in contrast to existing health impact assessment studies, which estimated benefits versus risks of increased physical activity at the population level, typically by evaluating active travel policies targeted to shift commuters from car use to cycling, based on risk estimates derived from different studies and hypothetical scenarios (Andersen et al. 2000; de Hartog et al. 2010; Rojas-Rueda et al. 2011, 2013). A weakness of our study is the use of NO2 levels at residence as a proxy of average air pollution levels encountered during physical activity. This assumption works well for gardening, which typically occurs at the residence (the exact location of air pollution modeling), but less well for cycling and walking. Given that this cohort consists of older subjects, 50-65 years of age at baseline, many of whom retired before study recruitment or during study follow-up, it is reasonable to assume that most of their time walking and cycling had taken place in close proximity to their residence, which may be well represented by air pollution levels at residence. Higher exposure misclassification is expected for cycling than for gardening. Cycling levels were the same (68%) for subjects residing in areas with low/moderate and high air pollution levels. Gardening was more common in subjects living in areas with low/moderate air pollution (79%) than in those living in areas with high air pollution (58%), because rates of house ownership are higher in the suburbs than in the inner city, where pollution is highest. Furthermore, we did not have information on the cycling, walking, and exercising habits of cohort participants, including possible behavioral adjustment by those living in areas with high air pollution levels to avoid the most polluted routes, which may bias results. Similarly, participating in sports is a poor proxy of outdoor activity because we do not have information on the type of sports activity or whether it took place outdoors (e.g., running) or indoors (e.g., gym, badminton, swimming). Thus, the lack of findings of an interaction between air pollution and participation in sports may be attributable to exposure misclassification.
Many cohort members retired during the study follow-up, implying the possibility of misclassification of exposure over time if retirement led to considerable changes in cycling after enrollment. Most biking trips (over 65%) in Denmark are undertaken for leisure activities, shopping, and doing errands, with a minority for transport to and from work (Danish Technical University 2013). Cycling decreases with age in Denmark, but only marginally, by about 10-15% from 50-59 to 60-69 years of age, due to retirement and a decrease in the share of cycling trips to work (Vithen 2013).
Another weakness is a lack of data on particulate matter (PM), which was available for only half of the cohort participants living in Copenhagen (Beelen et al. 2014a) and only for 2010, and was therefore not used here. However, NO2 and nitrogen oxides were found to correlate strongly with PM in Denmark, with Spearman correlation coefficients of 0.70 for PM10 (PM with diameter ≤ 10 μm) and 0.93 for the ultrafine fraction of PM (Hertel et al. 2001; Ketzel et al. 2003), implying that similar results for PM would be expected. Furthermore, we have shown earlier that PM10 originates largely from long-range transport in Denmark, resulting in smaller spatial variation than ultrafine PM and NO2 (Andersen et al. 2007), which originate mainly from traffic. Traffic is the main focus of this study, reflecting air pollution exposures during exercise or commute. Still, NO2 serves as a proxy for all traffic-related pollutants, including elemental carbon, nitric oxide, carbon monoxide, ultrafine PM, noise, and possibly road dust.
Another weakness is that Diet, Cancer, and Health (DCH) cohort participants are likely healthier than the general Danish population, because they are better educated and have higher incomes than nonparticipants (Tjønneland et al. 2007). Finally, air pollution levels are relatively low in Copenhagen and Aarhus, and these findings need to be reproduced in sites with higher air pollution levels.
Our results are in agreement with a growing number of health impact assessment studies that evaluate the net effects of an increase in cycling at the population level, typically as a shift from car use, and conclude that health benefits due to increased physical activity levels generally outweigh the risks related to increased inhaled air pollution doses during cycling (de Hartog et al. 2010; Rojas-Rueda et al. 2011; Woodcock et al. 2014).
Conclusions
Physical activity plays a key role in improving the physiologic mechanisms and health outcomes that exposure to air pollution may exacerbate. This presents a challenge in understanding and balancing the beneficial effects of physical activity in the urban environment with the detrimental effects of air pollution on human health. Our findings suggest that beneficial effects of physical activity on mortality in an urban area with relatively low levels of air pollution are not moderated in subjects residing in areas with the highest levels of air pollution. Estimated benefits of cycling and gardening on respiratory mortality were marginally reduced, but not annulled, for those living in areas with high NO 2 levels, but these novel results need confirmation. Overall, the long-term benefits of physical activity in terms of reduced mortality outweigh the risk associated with enhanced exposure to air pollution during physical activity.
Second release of the CoRe database of binary neutron star merger waveforms
We present the second data release of gravitational waveforms from binary neutron star merger simulations performed by the Computational Relativity (CoRe) collaboration. The current database consists of 254 different binary neutron star configurations and a total of 590 individual numerical-relativity simulations using various grid resolutions. The released waveform data contain the strain and the Weyl curvature multipoles up to $\ell=m=4$. They span a significant portion of the mass, mass-ratio, spin, and eccentricity parameter space and include configurations targeted to the events GW170817 and GW190425. CoRe simulations are performed with 18 different equations of state, seven of which are finite-temperature models and three of which account for non-hadronic degrees of freedom. About half of the released data are computed with high-order hydrodynamics schemes for tens of orbits to merger; the other half is computed with advanced microphysics. We showcase a standard waveform error analysis and discuss the accuracy of the database in terms of faithfulness. We present ready-to-use fitting formulas for equation-of-state-insensitive relations at merger (e.g. merger frequency), the luminosity peak, and the post-merger spectrum.
The paper is organized as follows. Sec. 2 summarizes the employed simulation techniques. Sec. 3 describes the physics content of the database and the impact of the binary parameters on the waveforms. Sec. 4 presents a full merger waveform error analysis for a case study and gives an overview of the average accuracy of the data. Sec. 5 presents, as a first application of the database, ready-to-use EOS-insensitive fitting formulas for the GW frequency, amplitude, and peak luminosity that characterize the merger, as well as analogous relations for the post-merger GW spectra.
The CoRe database is hosted on the public gitlab server at https://core-gitlfs.tpi.uni-jena.de/core_database. Associated code repositories and resources can be accessed from http://www.computational-relativity.org/. In particular, we provide the python package watpy to ease the checkout of the data and perform standard waveform analyses.
Notation
NR data are computed in geometrized units c = G = 1 and solar masses M_⊙ = 1; we use these units also in this paper unless explicitly indicated. We recall that GM_⊙/c³ ≃ 4.925490947 µs and GM_⊙/c² ≃ 1.476625038 km. The binary mass is M = m_1 + m_2, where m_{1,2} are the gravitational masses of the two stars. The mass ratio is defined as q = m_1/m_2 ≥ 1, and the symmetric mass ratio is ν = m_1 m_2/M² ∈ [0, 1/4], where ν = 1/4 corresponds to the equal-mass case, whereas ν → 0 for very unequal masses. The dimensionless, mass-rescaled spin vectors are denoted by χ_i for i = 1, 2, and the spin components aligned with the orbital angular momentum L are labeled χ_i = χ_i · L/|L|. The effective spin parameter χ_eff is defined as the mass-weighted aligned spin, χ_eff = (m_1 χ_1 + m_2 χ_2)/M. Similarly, one can define the spin parameter of [69], (…). The quadrupolar tidal polarizability parameters are defined as Λ_i = (2/3) k_{2,i} C_i^{−5} for i = 1, 2 [70], where k_{2,i} and C_i are respectively the ℓ = 2 gravito-electric Love numbers and the compactnesses of the i-th neutron star (NS). The tidal coupling constant κ^T_2 [70], similarly to the reduced tidal parameter [71] Λ̃ = (16/13) [(m_1 + 12 m_2) m_1⁴ Λ_1 + (m_2 + 12 m_1) m_2⁴ Λ_2]/M⁵, parametrizes the leading-order tidal contribution to the binary interaction potential and waveform phase (note that κ^T_2 = (3/16) Λ̃ for q = 1). The radiated GW (polarizations h_+ and h_×) is decomposed in (ℓ, m) multipoles as h_+ − i h_× = (1/D_L) ∑_{ℓ≥2} ∑_{m=−ℓ}^{ℓ} h_{ℓm}(t) ₋₂Y_{ℓm}(ι, ϕ), where D_L is the luminosity distance, ₋₂Y_{ℓm} are the s = −2 spin-weighted spherical harmonics, and ι, ϕ are respectively the polar and azimuthal angles that define the orientation of the binary with respect to the observer. Each mode h_{ℓm}(t) can be decomposed in amplitude A_{ℓm}(t) and phase φ_{ℓm}(t) as h_{ℓm}(t) = A_{ℓm}(t) e^{−iφ_{ℓm}(t)}, with a related GW frequency ω_{ℓm} = dφ_{ℓm}/dt. A dimensionless frequency ω̂ = GMω relates to the frequency in Hz according to f = ω̂/(2π GM/c³). The GW strain modes are related to the Weyl Ψ_4 curvature modes ψ_{ℓm} by ψ_{ℓm} = ḧ_{ℓm}. CoRe simulations compute only ψ_{ℓm} at different extraction radii R. However, the above equation can be integrated to obtain the strain, either by using the fixed-frequency integration method [72] or directly in the time domain followed by a polynomial correction, e.g. [14,73,74]. Comparisons between analytical and NR data often use the Regge-Wheeler-Zerilli normalized multipolar waveforms Ψ_{ℓm} = h_{ℓm}/√((ℓ+2)(ℓ+1)ℓ(ℓ−1)). The radiated energy is obtained as [46] E_rad = (1/16π) ∑_{ℓm} ∫ dt |ḣ_{ℓm}|², whereas the angular momentum is computed as J_rad = (1/16π) ∑_{ℓm} m ∫ dt Im(h_{ℓm} ḣ*_{ℓm}). The released data are computed with ℓ_max = 4. The binary dynamics can be characterized by the binding energy and the orbital angular momentum; we therefore work with the binding energy per reduced mass, obtained by subtracting the GW energy loss from the initial ADM mass; see [46,75] for details. The GW luminosity peak is computed as L_peak = max_t [(1/16π) ∑_{ℓm} |ḣ_{ℓm}|²]. The moment of merger is defined as the time of the peak of A_22(t), and is referred to simply as "merger" when it cannot be confused with the coalescence/merger process. Waveforms are often shown in terms of the retarded time u = t − r_*, where r is the coordinate extraction radius in the simulations (assumed close to the isotropic Schwarzschild radius), r_* = r + 2M ln(r/2M − 1) is the associated tortoise Schwarzschild coordinate, and r_S = 2M is the Schwarzschild radius.
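The definitions above translate directly into a few helper functions. The sketch below is ours (it is not part of watpy); it encodes the symmetric mass ratio, the effective spin, the reduced tidal parameter, and the conversion of a dimensionless frequency M ω̂ to Hz using the solar-mass time quoted above:

```python
import math

MSUN_S = 4.925490947e-6  # G*Msun/c^3 in seconds, as quoted in the text

def sym_mass_ratio(m1, m2):
    """nu = m1*m2/(m1+m2)^2; equals 1/4 for equal masses."""
    return m1 * m2 / (m1 + m2) ** 2

def chi_eff(m1, m2, chi1, chi2):
    """Mass-weighted aligned spin."""
    return (m1 * chi1 + m2 * chi2) / (m1 + m2)

def lambda_tilde(m1, m2, lam1, lam2):
    """Reduced tidal parameter of Ref. [71]; note kappa^T_2 =
    (3/16)*lambda_tilde in the equal-mass case."""
    M = m1 + m2
    return (16.0 / 13.0) * ((m1 + 12.0 * m2) * m1**4 * lam1
                            + (m2 + 12.0 * m1) * m2**4 * lam2) / M**5

def freq_hz(M_omega, M_msun):
    """Convert dimensionless GW frequency GM*omega/c^3 to Hz."""
    return M_omega / (2.0 * math.pi * M_msun * MSUN_S)

print(lambda_tilde(1.35, 1.35, 400.0, 400.0))  # -> 400.0 (equal-mass check)
print(freq_hz(0.05, 2.7))                      # -> ~598 Hz
```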
Initial data
Initial data for CoRe simulations are generated solving Einstein's constraint equations in the conformal thin sandwich (CTS) formalism [76], assuming a helical Killing vector and imposing hydrodynamical equilibrium for the NS's fluid [77,78]. It is assumed that the fluid is either irrotational or in a quasi-equilibrium state with constant rotational velocity, which allows for consistent simulations of NSs with spin [79,80]. In the latter formalism, the rotational part $w^i$ of the fluid's velocity is determined by an angular velocity parameter $\boldsymbol{\omega}$ as

$$w^i = \epsilon^{ijk}\, \omega^j \left( x^k - x^k_* \right) ,$$

where $x^k_*$ are the coordinates of the NS center. Possible definitions of spin for a star in a binary are discussed in Refs. [46,81,82]. The spin parameters given in the CoRe database are defined to be those of single NSs in isolation with the same rest mass and the same $\boldsymbol{\omega}$ as the BNS components [46,81,83]. To construct initial data with arbitrary eccentricities, we use an extension of the helical symmetry condition that is based on approximate instantaneous first integrals of the Euler equations and a self-consistent iteration of the CTS equations [84]. This method also allows us to create low-eccentricity initial data in quasi-circular orbits using an iterative procedure that combines initial data and evolution codes [81] (see also [85][86][87]).
CoRe initial data are calculated using either Lorene [88][89][90] or SGRID [79-81, 83, 91, 92]. Both codes use multi-domain pseudospectral methods to solve the CTS equations and surface-fitting coordinates that minimize spurious stellar oscillations at the beginning of the evolutions and guarantee accurate determination of the initial binary global quantities. Lorene can construct irrotational binaries with either piecewise polytropic or tabulated EOS. In the latter case, they are often obtained as cold, β-equilibrated slices of finite-temperature, composition dependent EOS. SGRID can generate irrotational or spinning binaries with piecewise polytropic EOS and arbitrary (or reduced) eccentricity. In particular, SGRID can simulate BNS in which the individual stars rotate close to the breakup spin and have masses which are ∼ 98% of the maximum supported NS mass allowed by the EOS [83]. Evolutions of initial data generated with SGRID and Lorene were compared in Ref. [46], where we found them to be in good agreement.
Initial data in quasi-circular orbits are characterized by the following global quantities of the 3+1 hypersurfaces: the total baryon mass $M_b$ (a conserved quantity along the evolution); the total binary gravitational mass $M$, i.e., the sum of the two gravitational masses of the bodies in isolation; the initial orbital frequency $\Omega \simeq \omega_{22}/2$; and the corresponding ADM mass $M_{\rm ADM}$ and angular momentum $J_{\rm ADM}$.
Evolution codes
CoRe simulations evolve initial data using a free-evolution approach to the 3+1 Einstein field equations based on the hyperbolic conformal formulations BSSNOK [93][94][95] or Z4c [96][97][98]. The latter is used for all of the newly released data (Ref. [65] also uses BSSNOK). The (1+log)-lapse and gamma-driver shift conditions are used for the gauge sector. The general relativistic hydrodynamics is solved in first-order conservative form [99]. Wave extraction is typically performed on coordinate spheres at finite radius placed in the wave zone of the computational domain (typically $R \sim 500 - 1000\, M$) by calculating the Weyl pseudoscalar $\Psi_4$, see e.g. [100] for details.
Simulations are performed with two independent mesh-based codes: BAM [100, 101] and THC [102][103][104], both developed and maintained within our collaboration. These codes employ adaptive mesh refinement (AMR) techniques in which the domain consists of a hierarchy of nested Cartesian grids (refinement levels). The grid spacing of each refinement level in each direction is half the grid spacing of its surrounding coarser refinement level. Finite difference stencils are used for the spatial discretization of the metric variables (usually at fourth order accuracy), and high resolution shock-capturing methods for the hydrodynamics. The Berger-Oliger or Berger-Colella algorithm is employed for the explicit mesh evolution, which is performed with the method of lines and Runge-Kutta schemes of third or fourth order accuracy in time. The innermost levels move dynamically during the time evolution following the motion of the NS, such that the strong field region around a NS is always covered with the highest resolution. Both codes employ a hybrid OpenMP/MPI parallelization strategy and show good parallel scaling up to thousands of cores.
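The factor-of-two refinement between levels fixes the whole hierarchy once the coarsest spacing and the number of levels are chosen; a short helper makes the bookkeeping explicit (our illustration only, not code from BAM or THC):

```python
def level_spacings(coarse_dx, nlevels):
    """Grid spacing per refinement level; each level halves its parent's."""
    return [coarse_dx / 2**l for l in range(nlevels)]

# Example: a seven-level hierarchy whose finest level has dx ~ 0.06 M
# needs a coarsest spacing of 0.06 * 2**6 = 3.84 M:
print(level_spacings(3.84, 7))   # [3.84, 1.92, 0.96, 0.48, 0.24, 0.12, 0.06]
```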
BAM implements high-order finite-differencing WENO schemes [19] and, more recently, an entropy-flux-limited (EFL) scheme [65], which is better adapted to the treatment of the NS surface, to accurately simulate multiple orbits and GWs from inspiral-mergers. The typical grid configurations for these simulations consist of seven refinement levels, where the innermost level is split into two boxes covering each of the NSs. Standard grid parameters for resolution studies are chosen with $n_m \in [96, 256]$ points per direction in each inner (moving) level and $n \in [144, 512]$ for the outer levels. The minimal grid spacing in each direction is $\Delta \sim [0.059, 0.321]\, M_\odot$ and the maximal resolution reached in the released simulations is $\Delta \sim 0.059\, M_\odot$. Symmetries can be imposed to reduce the computational cost of certain problems. For example, aligned-spin BNS are often simulated in bitant symmetry ($z > 0$ volume). The simulation parameters can vary for each simulation; the relevant ones are reported in the CoRe metadata.
THC implements both high-order finite-differencing schemes [105] and Kurganov-Tadmor-type central schemes. The latter are preferentially used in simulations with microphysics. THC can make use of microphysical EOS, and implements various neutrino transport schemes [54,106,107] (see below) and a subgrid-scale treatment of turbulent mixing and dissipation (GRLES) [61,108] to accurately simulate remnants and postmerger dynamics. Most of the GRLES data in the current release employ an effective model for turbulence based on the high-resolution magnetohydrodynamics simulation of Ref. [109], where the viscosity parameter is set to $\nu_T = \ell_{\rm mix}\, c_s^2$, with $c_s$ the sound speed of the fluid. $\ell_{\rm mix}$ is typically defined to be a function of the rest-mass density, calibrated with the general-relativistic magnetohydrodynamics simulations of [109] (see [61]). THC builds on the Cactus framework [110] and the Einstein Toolkit [111,112]. THC simulations use the Carpet adaptive mesh refinement driver for Cactus [113], which implements both vertex-centered and cell-centered adaptive mesh refinement with flux correction [114,115]. The grid structure used in the THC simulations is similar to that used in BAM. The grid structure is specified by the resolution at the coarsest refinement level and at the location of the center of the neutron stars. The refinement levels on the grid hierarchy do not have to be connected, and Carpet can merge different regions to create grids with complex topology. The standard resolution setup of the THC simulations uses a resolution of $\Delta = 0.125\, M_\odot$ in every direction on the finest refinement level. The maximal resolution reached in the released simulations is $\Delta \simeq 0.08\, M_\odot$. The typical CFL factor is 0.125; however, an even lower CFL of 0.0625 is used on the coarsest grid to handle the gamma-driver source term in the shift evolution equation. All THC simulations included in the current release of the CoRe database use bitant symmetry.
EOS models
CoRe simulations currently employ 18 different EOS models for the neutron star matter. BAM data are computed using analytical EOS in the form

$$P(\rho, \epsilon) = P_{\rm pwp}(\rho) + (\gamma_{\rm th} - 1)\, \rho\, \left[ \epsilon - \epsilon_{\rm pwp}(\rho) \right] ,$$

where $P_{\rm pwp}(\rho)$ is a given piecewise polytropic EOS model [116]. It prescribes also a value $\epsilon_{\rm pwp}$ for the specific internal energy given the rest mass density $\rho$, and is augmented with a $\gamma$-law "thermal" pressure term (usually, $\gamma_{\rm th} = 1.75$ [45,117]). The specific parameters we employ for the piecewise polytropic EOS mimic well-established zero-temperature EOS models [116]; tables of these parameters are available on the CoRe website. The current release significantly extends the data computed with finite-temperature EOS over the first release. The release includes data from seven finite-temperature EOS, used in the calculations of Refs. [54-59, 63, 66]. The finite-temperature EOS include the following models: BHBΛφ [118], BLh [119,120], BLQ [63,120], DD2 [121,122], LS220 [123], SFHo [124], SLy4/SRO [125,126].
All these EOS include neutrons, protons, nuclei, electrons, positrons, and photons as relevant thermodynamics degrees of freedom. The ALF2 [127] and BLQ EOS [63,120] are hybrid models accounting for deconfined quark matter. BHBΛφ is a hadronic model that includes Λ hyperons [118,128].
Most of these EOS can be found on the CoRe website in tabulated form. In the simulations, the EOS is called during the hydrodynamics evolution in order to compute the pressure from the rest-mass density, the temperature, and the electron fraction, i.e., in the form $p = P(\rho, T, Y_e)$. Any relevant thermodynamical quantity is evaluated by multi-linear interpolation of the tabulated values in $\log\rho$, $\log T$, and $Y_e$. As common in relativistic hydrodynamics, the EOS is called during the transformation from conservative to primitive variables. The latter takes place at each timestep and grid point, and it involves a numerical root finding of the function $f(p) := p - P(\rho, \epsilon, Y_e)$, where the specific internal energy $\epsilon$ is implicitly given by the temperature $T$ [133]. Hence, each root-finding step includes another root finder for the function $g(T) = \epsilon - E(T)$ (see [134] for a discussion on computational efficiency and a non-standard approach based on neural networks).
Microphysics
Most of the THC simulations account for the loss of energy and lepton number due to the net emission of neutrinos using a leakage scheme [106,133]. Accordingly, effective neutrino leakage rates are computed as a physically motivated interpolation from the emission and diffusion rates. The latter require the knowledge of the optical depth of each computational zone in such a way as to recover the correct cooling time scale. Neutrino reabsorption is included in some simulations using the M0 scheme [106]. This scheme splits neutrinos in an optically thick component, treated with the leakage scheme, and a free-streaming component. The free-streaming neutrinos and their average energies are obtained by solving the radiative transfer equations on a set of radial rays (the so-called ray-by-ray approach) fully-implicitly in time. More recently, we have implemented an energy-integrated M1 scheme in THC [107]. The new scheme can self-consistently capture the diffusion of neutrinos from the merger remnant and its reabsorption in the ejecta. M1 simulations are not included in the current release of the database, but will be made public as soon as the associated publications have been accepted. Table 1 summarizes all neutrino reactions currently included in THC together with the references in which the rates we use are derived.

[Table 1. Weak reaction rates and references for their implementation (columns: Reaction, Reference). Notation: ν ∈ {νe, ν̄e, νx} denotes a neutrino, νx denotes any heavy-lepton neutrino, N ∈ {n, p} denotes a nucleon, and A denotes a nucleus.]
Overview
CoRe simulations are performed for various binary masses, mass ratios, NS spins and EOS, as summarized by Figure 2. They cover a significant portion of the BNS parameter space and allow one to quantitatively explore the connection between the gravitational-wave morphology and the binary parameters in some detail. Figure 3 illustrates the variety of waveforms contained in the database. In the following, we give an overview of the database content and outline the connections between physics and waveform morphology. The database contains waveforms from binaries with total masses ranging from 2.4 $M_\odot$ to around $\sim 3.4\, M_\odot$, with 45 datasets reaching mass ratios larger than $q \simeq 1.4$ and up to $q = 2.1$ [47,58,64]. EOS effects can be summarized to some extent by the quadrupolar tidal polarizability parameters $\Lambda_{1,2}$ [70], where larger (smaller) values of $\Lambda_i$ are associated with stiffer (softer) EOSs. (A NS spacetime is characterized by an infinite number of multipolar Love numbers of gravito-electric and gravito-magnetic type; $\Lambda_2$ parametrizes only the gravito-electric leading-order term in the Lagrangian.) The most compact NSs (and most massive binaries) are associated with the smallest values of $\Lambda_{1,2}$ (and $\tilde\Lambda$), see the right panel of Fig. 1. The CoRe data encompass well the mass and EOS variation for realistic BNSs. Waveforms from both irrotational and spinning (using the formalism outlined in Sec. 2.1) quasi-circular mergers are included [46,48,60]. For aligned spins, the individual dimensionless components range in $\chi^z_i \in [-0.25, 0.5)$; about 7 datasets are from simulations with precession effects [48,60]. The distribution of key parameters among the CoRe simulations is shown in Fig. 4.
Most of the CoRe waveforms are produced from quasi-circular mergers. The residual eccentricity of non-iterated quasi-circular initial data is usually $e \sim 10^{-2} - 10^{-1}$, see the bottom right panel of Fig. 4. About 13 datasets have an initial eccentricity $e \lesssim 10^{-3}$ that was reduced through an iterative procedure employing the formalism described in Sec. 2.1. A subset of waveforms refer instead to eccentric mergers with initial eccentricity values as high as $\sim 0.7$ [106,139,140]. In particular, the simulation in the bottom panels of Fig. 3 has an initial eccentricity of 0.55.
The effects of mass ratio, spin, and tides on the orbital dynamics can be studied by means of gauge-invariant energy curves $E_b(j)$, which are also publicly released. We illustrate this in Fig. 5 for a few examples. In the inspiral, the binary's angular momentum $j$ decreases due to GW emission and the system becomes more bound ($E_b$ stays negative and $|E_b|$ increases). Equal-mass ($\nu = 1/4$) non-spinning BBH systems merge with $E_b \simeq -0.12$, indicating that about 3% of the binary mass was radiated in GWs up to the moment of merger (marker in the figure). Tidal effects in BNS make the potential governing the relative dynamics more attractive. The tidal contribution to the potential at leading order is $\sim -\kappa^T_2/r^6$, i.e. it is stronger for larger tidal polarizabilities $\Lambda_{1,2}$ and it is short-ranged, thus affecting the motion mostly at high frequencies (small separations $r$) towards merger. Consequently, the inspiral of an equal-mass non-spinning BNS is faster than a binary black hole inspiral. The binding energy at the moment of merger is $|E_b| \sim 0.064$, which is smaller than in the black hole case because the BNS system is less compact. Mass-ratio effects make the potential more repulsive, but are less effective than tides at high frequencies. The $q = 2$ BNS shown in Fig. 5 merges at smaller values $|E_b| \sim 0.055$ than the equal-mass one because of tidal disruption. The remnant also has larger angular momentum $j \sim 3.6$ [59].
Spin effects are dominated by the leading-order spin-orbit interaction; their character is thus repulsive or attractive depending on the projection of the spin on the orbital angular momentum [141]. This is analogous to what happens to corotating/counter-rotating circular orbits in Kerr spacetimes, which move outwards (inwards) for antialigned (aligned) spin configurations with respect to the nonspinning case. In binary black hole simulations this effect has been named the "hang-up" effect [142]. In Fig. 5, the spinning BNS with $\hat{S} = 0.1$ is more bound than the non-spinning BNS at the moment of merger, with $E_b \sim -0.068$. Note that $j$ in this case includes the spin contribution. Moreover, the eccentric equal-mass case, contrary to the previous ones, shows brief moments of constant $E_b$ indicating the times when the NSs are apart (see inset of Fig. 5). Energy curves for BNS have been studied in detail in [46,48,60], to which we refer for more details. We stress that the properties of BNS systems at the moment of merger can be captured by EOS-insensitive (quasi-universal) relations [16]. The latter can be helpful in waveform modelling and used to estimate the properties of the remnant. We refer to Sec. 5 for further discussion. High-mass BNS produce a remnant that promptly collapses to a black hole shortly after the moment of merger and before the two cores can bounce [52,53]. Prompt collapse implies negligible shocked dynamical ejecta, because the bulk of the mass ejection comes from the first core bounce after the collision [54]. Prompt collapse can be characterized by a threshold mass $m_{\rm thr} = k_{\rm thr}\, M^{\rm TOV}_{\rm max}$, that mainly depends on the maximum mass of cold equilibria $M^{\rm TOV}_{\rm max}$ supported by the EOS [45,143]. The recent analysis of Ref. [144], based on 227 finite-temperature EOS and CoRe data (the latter not released in the database, since those waveforms are rather short and extracted at close radii), found that the prompt collapse mass threshold for equal-mass non-spinning BNS is well described by the EOS-insensitive relation

$$k_{\rm thr} = a\, C^{\rm TOV}_{\rm max} + b ,$$

where $C^{\rm TOV}_{\rm max}$ is the compactness of the maximum-mass NS, and $a = -3.36 \pm 0.20$, $b = 2.35 \pm 0.06$. A prompt collapse waveform has a rapidly damped black hole ringdown after the moment of merger, as shown in the top panels of Figure 3. Consequently, the postmerger GW signal is practically negligible for the sensitivities of both current and next-generation detectors. The lack of shocked ejecta and of a massive disc also implies that equal-mass prompt-collapse mergers have dim EM emission. However, for very asymmetric BNS with $q \gtrsim 1.4$, it is the tidal disruption of the secondary NS and its accretion onto the primary that trigger the gravitational collapse [58]. Thus, asymmetric mergers can be electromagnetically bright because they produce massive tidal dynamical ejecta and remnants with accretion discs of mass $\sim 0.1\, M_\odot$. This prompt collapse process is mainly controlled by the incompressibility parameter of nuclear matter around the TOV maximum density [145]. A robust, EOS-insensitive criterion is not known in these conditions [58,[145][146][147][148], but tidal disruption effects are subdominant to the mass effect; they produce maximal variations from the equal-mass criterion of ∼8% [144,145].
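The threshold-mass criterion quoted above is trivial to evaluate; the sketch below implements it with the fit coefficients given in the text (the TOV properties in the example are placeholders, not data from this paper):

```python
def m_threshold(M_max_TOV, C_max_TOV, a=-3.36, b=2.35):
    """Prompt-collapse threshold mass for equal-mass, non-spinning BNS:
    m_thr = k_thr * M_max^TOV with k_thr = a*C_max^TOV + b (EOS-insensitive fit)."""
    k_thr = a * C_max_TOV + b
    return k_thr * M_max_TOV

# Placeholder TOV properties of a fictitious EOS (illustrative only):
M_max, C_max = 2.1, 0.30          # M_sun, dimensionless compactness
print(m_threshold(M_max, C_max))  # ~2.8 M_sun: heavier binaries promptly collapse
```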
Without prompt collapse, the evolution of a NS remnant is driven by the GW emission of $\sim 10^{53}$ erg lasting $\sim 20$ milliseconds (GW-driven phase) [149,150]. During this phase, a remnant that collapses to a black hole is called short-lived, while a remnant that settles to an approximately axisymmetric rotating NS is called long-lived. Examples of postmerger signals from these remnants are shown in the last two panels on the right of Figure 3, for the equal-mass case. The GW-driven phase is associated with a luminous GW transient that peaks at frequencies $\sim 2-4$ kHz [27,28,[151][152][153][154]. The spectrum of this transient is rather complex but has robust and well-studied features at a few characteristic frequencies. Most of the GW power is emitted in the (2,2) mode at a nearly constant frequency $\omega_{22}(t) \approx 2\pi f_2$; the more compact and close to collapse the remnant is, the higher and more varying the $\omega_{22}(t)$ emission frequency is. The postmerger dynamics is primarily controlled by the masses of the two stars and the bulk properties of the zero-temperature EOS, in particular the maximum TOV mass and compactness [52,155]. Finite temperature and neutrinos do not produce qualitative differences, other than possibly on the time of gravitational collapse of the remnant [156]. Quantitative differences in the GW signal introduced by finite-temperature and neutrino effects are typically subdominant compared to finite-resolution uncertainties [107,157]. On the other hand, microphysics plays a crucial role in the EM counterparts and nucleosynthesis from mergers, e.g., [106,[158][159][160][161].
The remnant's signal from asymmetric binaries with mass ratio $q \gtrsim 1.4$ carries the imprint of the tidal disruption during merger [47,58,162]. An example of such a waveform is shown in the second panel (top to bottom) of Figure 3. Compared to the equal-mass long-lived case, the postmerger amplitude is significantly smaller because the asymmetric remnant does not experience the violent bounces of the symmetric remnant. For the same reason, the early-time modulations in frequency and amplitude present in the equal-mass case are significantly suppressed in the asymmetric case.
The evolution of a NS remnant beyond the GW-driven phase is uncertain at present. Explorations of the viscous phase using NR simulations have started [59,163,164], but they are still incomplete in many ways. While GW emission is expected to be significantly weaker than during merger, remnant instabilities might enhance GW emission. Current NR results suggest that BNS remnants have an excess of both gravitational mass and angular momentum after the GW-driven phase, when compared to equilibrium configurations with the corresponding baryon mass [165,166]. Possible mechanisms to shed (part of) this energy are the CFS [167,168] and one-arm instabilities [169][170][171], which would lead to potentially detectable, long GW transients at $\sim 1$ kHz. Examples of such waveforms are THC:0028, THC:0029, and THC:0036 [170]. Finally, CoRe data are available for multiple grid resolutions, as discussed in Sec. 2 and shown by Fig. 2. Most of the newly released data contain high resolution simulations with a minimum grid spacing as low as $\Delta \sim 0.06\, M_\odot$, i.e., the NSs are resolved with a uniform mesh of spacing $\sim 88.4$ meters. Notably, simulations of more than 20 orbits or up to hundreds of milliseconds postmerger and with microphysics were performed at these resolutions. Simulations at multiple resolutions are a crucial aspect for data quality that is discussed next.
Waveform accuracy
Waveform accuracy depends on several aspects of the simulations. Within the CoRe data the largest sources of uncertainty are (i) the truncation error of the numerical scheme, that is regulated by the mesh resolution employed in the simulations, and (ii) the finite extraction radius for the GW data, e.g. [19,65,103,172]. Other aspects are relevant for waveform modelling, as for example, the length of the simulation (number of orbits/GW cycles), the residual eccentricity in quasi-circular initial data, and the simulation of realistic physics (star rotation, EOS, etc.). Waveform accuracy should be studied by the user case-by-case considering amplitude and phase plots with datasets of simulations at different resolutions and extraction radii. This analysis typically requires a minimum of three simulations of the same BNS at different grid resolutions (a "convergent series") and has been performed by the authors in Refs. [9,10,19,65,[103][104][105]172]. We give below in Sec. 4.1 a complete example of error analysis of a ∼10 orbit inspiral-merger waveform.
Waveform accuracy requirements are usually phrased in terms of matched-filter overlaps, both for detection with template banks and for parameter estimation. In the former case, one is interested in quantifying the fractional loss of signal-to-noise ratio (SNR) due to the use of a sub-optimal, discrete matched filter. Since the number of GW events is proportional to the observable volume, and the distance is inversely proportional to the observed SNR, the fractional loss of potential events scales like the cube of the minimum overlap in the discrete template bank [174,175,180]. In the latter case, one is interested in quantifying the bias (or the maximum knowledge) on the GW parameters given the noise in the detector (statistical errors) [176,177,180]. In practice, one proceeds by defining the faithfulness between two waveforms

$$\mathcal{F} = \max_{t_0, \phi_0} \frac{(h_1, h_2)}{\sqrt{(h_1, h_1)\, (h_2, h_2)}} , \qquad (h_1, h_2) = 4\, \Re \int_{f_{\rm min}}^{f_{\rm max}} \frac{\tilde{h}_1(f)\, \tilde{h}_2^*(f)}{S_n(f)}\, {\rm d}f , \qquad (18)$$

where $t_0$ and $\phi_0$ are a reference initial time and phase and $S_n(f)$ is the detector noise power spectral density, and its complementary, the unfaithfulness,

$$\bar{\mathcal{F}} := 1 - \mathcal{F} . \qquad (19)$$

By demanding that, at worst, the systematic biases become of the same order as the statistical ones when the noise level is doubled, it is possible to establish the condition [180]

$$\bar{\mathcal{F}} \leq \frac{\epsilon^2}{2 \rho^2} , \qquad (20)$$

where $\rho$ is the SNR and $\epsilon^2 \lesssim 1$. This condition is necessary for unbiased parameter estimation (faithful waveforms); its violation does not imply that an analysis has biases [172,177,181,182]. The above criterion can be used to quantify the accuracy of NR data, for example by calculating the faithfulness between data at different resolutions [65,172,182]. We will use the faithfulness measure in Sec. 4.2 to discuss the average accuracy of the data of the CoRe database.
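A discrete version of Eqs. (18)-(19) is straightforward; the sketch below is a simplified implementation written for this text (not taken from watpy or bajes). It assumes two equal-length, uniformly sampled real strain series and a PSD already interpolated onto the FFT frequency grid; the maximum over time shifts is scanned with an inverse FFT and the maximum over the constant phase is taken via the complex modulus.

```python
import numpy as np

def unfaithfulness(h1, h2, dt, psd):
    """1 - F for real strains h1, h2 of equal length N, sample step dt.
    psd is S_n tabulated on np.fft.rfftfreq(N, dt); to restrict the band
    to [f_min, f_mrg], set psd = np.inf outside that band beforehand."""
    N = len(h1)
    df = 1.0 / (N * dt)
    H1 = np.fft.rfft(h1) * dt
    H2 = np.fft.rfft(h2) * dt
    norm = lambda H: np.sqrt(4.0 * df * np.sum(np.abs(H)**2 / psd))
    # One-sided weighted cross-spectrum; |ifft| scans all time shifts at once
    # and its modulus maximizes over a constant phase offset.
    prod = 4.0 * H1 * np.conj(H2) / psd
    z = np.fft.ifft(prod, n=N) * N * df
    return 1.0 - np.max(np.abs(z)) / (norm(H1) * norm(H2))
```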
Example of NR waveform analysis
In this section we present a waveform error analysis for BAM:0066 [20]. This example effectively represents data that exhibit second order convergence. Figure 6 shows the strain $Rh_{22}$ at the lowest extraction radius available for this simulation, $R = 700\, M$, its amplitude $|Rh_{22}|$ and frequency $M\omega_{22}$. Note that in this section we use $R$ instead of $D_L$.
In order to test self-convergence, we compare amplitude and phase differences of $R\psi_{22}$ between the different resolutions. For this case, we consider the simulation at resolutions $n_m = 120, 160, 240$ grid points on the highest refined AMR level; hereafter Low (L), Medium (M), and High (H). The convergence rate $p$ is found experimentally by rescaling these differences using the scaling factor [172]

$$\mathrm{SF}(p) = \frac{\Delta_L^p - \Delta_M^p}{\Delta_M^p - \Delta_H^p} ,$$

where $\Delta_x$ is the grid spacing at resolution $x$. We show the self-convergence test in Figure 7. The differences decrease with increasing resolution, as one would expect from convergent data. They also increase with increasing simulation time because truncation errors accumulate during the simulation. The optimal scaling is found for $p = 2$ with $\mathrm{SF}(2) = 1.4$, thus indicating second order convergence. In the presence of convergence, a measure of the error to be assigned to the (highest resolution) data is given simply by the difference between the two highest resolutions. This is a conservative estimate because (for convergent data) the truncation error is certainly smaller. Alternatively, the experimental convergence factor can in principle be used in a Richardson extrapolation of the data to provide an improved dataset and error estimate [19,172]. Note that in this procedure the waveforms are not shifted by a relative time and phase shift, because the simulations of the convergent series are run using the same initial data with a fixed initial phase.
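The quoted value SF(2) ≈ 1.4 can be verified directly from the three grid resolutions (for a fixed domain, Δ is proportional to 1/n_m):

```python
def scaling_factor(p, nL=120, nM=160, nH=240):
    """SF(p) for grid spacings proportional to 1/n."""
    dL, dM, dH = 1.0 / nL, 1.0 / nM, 1.0 / nH
    return (dL**p - dM**p) / (dM**p - dH**p)

print(round(scaling_factor(2), 2))  # 1.4, matching the value quoted in the text
```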
To assess the uncertainties originating from waveforms obtained at finite extraction radii $R_i$, we compare the phase differences between consecutive radii [172],

$$\Delta_* \phi_{22} := \phi_{22}(u; R_{i+1}) - \phi_{22}(u; R_i) ,$$

and similarly for the relative amplitudes, $\Delta_* A_{22}/A_{22}$. In Fig. 8 we show the differences at the extraction radii $R = 700, 750, 800, 850, 900\, M$. The phase differences decrease at progressively larger radii, thus indicating that the numerical waveforms are converging towards their true morphology at null infinity. The phase differences are larger at early times and decrease towards merger; note this behaviour has the opposite sign of that of resolution effects [19]. The relative differences in amplitude are $\sim 10^{-4}$ for all radii, indicating that robust results are obtained already with relatively close extraction spheres. The waveforms can be extrapolated to null infinity using either a polynomial in $1/R$ of order $K$ [172] or the method outlined in [183]. The two methods give comparable results; the former is more general and can be applied to the curvature multipoles $\psi_{\ell m}$, the latter is a simpler method for the strain modes. An error due to finite extraction can then be assigned to the data at finite extraction radius as the difference with the extrapolated data (or vice versa). Another method is to post-process simulations using Cauchy characteristic extraction (CCE) [184] and to compute the waveform at future null infinity. This technique was used for some of the CoRe data. The total error budget can be computed as the sum in quadrature of the truncation and finite extraction errors, and it is shown in Fig. 9 for both the curvature and strain (2,2) modes. As mentioned above, the truncation phase error is typically a factor $\sim 2$ larger than the finite extraction error (for $R \gtrsim 500\, M$) at merger and in simulations with tens of orbits.
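The polynomial-in-1/R extrapolation mentioned above amounts to an ordinary least-squares fit at each retarded time; a minimal numpy sketch of the generic procedure (our illustration, not the exact implementation of [172]):

```python
import numpy as np

def extrapolate_to_infinity(radii, values, K=2):
    """Fit values(R) = f_inf + sum_{j=1..K} c_j / R**j at fixed retarded time
    and return the R -> infinity limit f_inf.
    radii: (Nr,) extraction radii; values: (Nr,) phase or amplitude samples."""
    u = 1.0 / np.asarray(radii, dtype=float)
    coeffs = np.polyfit(u, values, K)   # polynomial in u = 1/R, highest power first
    return coeffs[-1]                   # constant term = value at u = 0

# Example with synthetic data f(R) = 1.0 + 3/R + 40/R^2:
R = np.array([700.0, 750.0, 800.0, 850.0, 900.0])
print(extrapolate_to_infinity(R, 1.0 + 3.0 / R + 40.0 / R**2))  # ~1.0
```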
Finally, we obtain the unfaithfulness $\bar{\mathcal{F}}$ of the waveforms between the different resolutions (M-H and L-M). The Wiener integral is evaluated in the frequency range $f \in [f_{\rm min}, f_{\rm mrg}]$, employing the Advanced LIGO PSD P1200087 [185] as implemented in bajes [186]. Here $f_{\rm min}$ corresponds to the initial GW frequency, and $f_{\rm mrg}$ to the frequency at the moment of merger. For the faithfulness threshold $\mathcal{F}_{\rm thr}$ in Eq. (20), we consider $\epsilon^2 = 1$ as the strict requirement, and $\epsilon^2 = 6$, corresponding to the number of intrinsic parameters of a BNS. Similarly to [65], the SNR values are chosen to be $\rho = 14, 30, 80$. Figure 10 shows the computed values. The smallest unfaithfulness (M-H, $n_m = 240, 160$) passes five out of the six accuracy tests, whereas the other one (L-M, $n_m = 160, 120$) passes only two, namely $\mathcal{F}^{14,6}_{\rm thr}$ and $\mathcal{F}^{30,6}_{\rm thr}$. However, the unfaithfulness value lies close to (or on top of) the threshold $\mathcal{F}^{14,1}_{\rm thr}$. Analyses similar to the one above are necessary to determine the quality of the NR data for GW modelling. Convergence of the data is a necessary requisite for robust error estimates. Other diagnostic quantities used to verify convergence in simulations are the constraint violation, baryon mass conservation and the stars' oscillations during the first orbits, e.g. [60,81,98,101,172]. Achieving waveform convergence in long-term evolutions of BNS is a nontrivial result and, in our experience, requires at least fifth order finite-differencing schemes or finite volume schemes with fifth order reconstructions (at the current resolutions) [19,65,104]. Second [19], approximately third [104] and clear fourth order convergence [65] have been demonstrated up to merger in some data using these finite-differencing conservative schemes. Extreme mass ratios $q \sim 2$ and NS rotation close to the breakup limit remain challenging to simulate, as does obtaining clean convergence in the GW higher (subdominant) modes like $(\ell, m) = (2,1), (3,3)$ and $(4,4)$. Work in these directions is ongoing [58,62,64,65]. For example, clear fourth order convergence in the subdominant (3,2) and (4,4) modes for $q = 1$ has been shown in [65]. Postmerger waveforms typically show slower convergence due to shock formation at merger and the complex fluid dynamics in the remnant. Nonetheless, GW spectra have remarkably robust features that can be accurately quantified with NR data, as we shall discuss in Sec. 5. We refer the reader to Refs. [33,38] for recent work on the accuracy of CoRe postmerger waveforms.
Faithfulness analysis
In an attempt to give an overview of the accuracy of the waveform database, we compute the unfaithfulness of the (2,2) mode waveforms $h_{22}$ between the highest and second highest resolutions, for the whole database. We use again the zero-detuned, high-power Advanced LIGO PSD [185]. The minimum frequency $f_{\rm min}$ employed in the integral of Eq. (18) corresponds to the initial frequency of each individual simulation. The result of this analysis is summarized in Fig. 11, where $\bar{\mathcal{F}}$ is shown as a function of the number of orbits and different colors mark the microphysics scheme employed in each simulation. The unfaithfulness values are scattered over a wide range, but about 65% of the waveforms lie below the 1% level, which is conventionally considered the accuracy threshold for detection purposes. Importantly, the dependence on the number of orbits (simulation length) is very weak and most of the simulations with ten or more orbits have $\bar{\mathcal{F}} < 0.01$. Several waveforms from multiple-orbit simulations have $\bar{\mathcal{F}} \lesssim 10^{-4}$; according to the analysis in the previous section, these data can be considered faithful (suitable for parameter estimation) up to signal SNRs of 30-80. We note that data with very few orbits (e.g. THC:0019, BAM:0029, and BAM:0082) show a remarkably low unfaithfulness. These simulations have a short inspiral and rather focus on the postmerger signal, which is not considered in this analysis. Hence, a small $\bar{\mathcal{F}}$ is not necessarily an indication that these simulations are suitable for waveform modelling.
A faithfulness analysis for postmerger signals was recently presented by some of us in [33,38]. There, we found average mismatches of ∼0.01 − 0.4. The main source of uncertainty in the postmerger waveforms is the numerical resolution (see the above Section) and the impact of the resolution on the remnant's collapse.
Quasi-universal relations
As a first application of the database, we present in this section new EOS-insensitive relations for the merger and postmerger waveforms. Previous work found that several key quantities characterizing the merger dynamics depend on the unknown EOS mainly through the tidal parameters and have a very weak dependence on other details of the matter model, e.g., [29,58,150,153,[187][188][189]. Similarly, the GW postmerger spectrum has robust features that can be captured within a few percent accuracy by tidal parameters and/or other properties of NS equilibria in an EOS-insensitive way [27,28,33,[153][154][155]. These relations have some practical use in GW astronomy because they deliver accurate estimates for the peak luminosity [53,150] and for the remnant properties [190][191][192] (see also [53] for a detailed review) and because they are the building blocks to develop NR-informed waveform models. First, we consider the mass-rescaled GW amplitude and frequency at the moment of merger, $A^{\rm mrg}_{22}/M$ and $M f^{\rm mrg}_{22}/\nu$, and update the fits developed in Refs. [33,38,188]. Following closely the fitting procedure of Ref. [38], we represent any quantity by the factorized function

$$Q(\nu, \hat{S}, \kappa^T_2) = Q_0\; Q_M(\nu)\; Q_S(\hat{S})\; Q_T(\kappa^T_2) , \qquad (23)$$

where each factor $Q_M$, $Q_S$, $Q_T$ accounts respectively for the mass ratio in terms of $X = 1 - 4\nu$, spin corrections in terms of $\hat{S}$, and tidal effects in terms of $\kappa^T_2$. The first two factors are given by the linear polynomial expressions $Q_M = 1 + a^M_1 X$ and $Q_S = 1 + p^S_1 \hat{S}$, with $p^S_1 = a^S_1 (1 + b^S_1 X)$. The last factor is instead a rational polynomial in $\kappa^T_2$,

$$Q_T(\kappa^T_2) = \frac{1 + n_1\, \kappa^T_2 + n_2\, (\kappa^T_2)^2}{1 + d_1\, \kappa^T_2 + d_2\, (\kappa^T_2)^2} .$$

The best fit parameters are shown in Tab. 2. The amplitude and frequency have $1\sigma$ errors of 2.6% and 4.6% respectively. We also obtain a $\chi^2$ of $\sim 0.126$ for the former and $\sim 0.329$ for the latter.
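The factorized form of Eq. (23) is trivial to evaluate once the coefficients of Table 2 are in hand; the helper below keeps them as explicit arguments (the numbers in the example call are placeholders, not the published best-fit values):

```python
def q_fit(nu, Shat, kappa2T, Q0, aM1, aS1, bS1, n, d):
    """Evaluate Q = Q0 * QM * QS * QT, Eq. (23).
    n, d: (n1, n2) and (d1, d2) coefficients of the rational tidal factor."""
    X = 1.0 - 4.0 * nu
    QM = 1.0 + aM1 * X
    QS = 1.0 + aS1 * (1.0 + bS1 * X) * Shat
    QT = ((1.0 + n[0] * kappa2T + n[1] * kappa2T**2)
          / (1.0 + d[0] * kappa2T + d[1] * kappa2T**2))
    return Q0 * QM * QS * QT

# Placeholder coefficients (illustrative only; use Table 2 for real work):
print(q_fit(nu=0.25, Shat=0.0, kappa2T=100.0,
            Q0=0.1, aM1=0.2, aS1=0.05, bS1=0.1, n=(1e-2, 1e-5), d=(2e-2, 1e-5)))
```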
Next, we use the public CoRe data on the emitted GW energy and extract the peak luminosity $L_{\rm peak}$ using Eq. (13). For binary black holes, this quantity does not depend on the mass scale and it is accurately described by the fits of Ref. [193]. For BNS, it has been studied in Ref. [150]. We propose the ansatz

$$\frac{L_{\rm peak}}{\nu} = \frac{L^{\rm BBH}_{\rm peak}(\nu, \hat{S})}{\nu}\; \hat{Q}_T(\kappa^T_2) ,$$

where $L^{\rm BBH}_{\rm peak}$ are the mass and spin dependent fits from Ref. [193], and $\hat{Q}_T$ is a rational polynomial in $\kappa^T_2$ with $\hat{Q}_T(0) = 1$, whose coefficients are

$$p_k(\nu, \hat{S}) = p_{k1}(\hat{S})\, \nu + p_{k2}(\hat{S})\, \nu^2 + p_{k3}(\hat{S})\, \nu^3 , \qquad p_{kj}(\hat{S}) = p_{kj0}\, \hat{S} + p_{kj1} .$$
Note the scaling factor $1/\nu$ for $L_{\rm peak}$. By construction, the fit reduces to the BBH case for $\kappa^T_2 \to 0$. The luminosity peak is calculated in geometric units; the conversion factor to CGS units is given by the Planck luminosity $L_P = c^5/G \approx 3.63 \times 10^{59}$ erg s$^{-1}$. Figure 12 shows the best fit for $L_{\rm peak}/\nu$ and the CoRe data; the best fitting coefficients are reported in Tab. 4. The average $1\sigma$ deviation is about 12% over the entire dataset, with fewer than a dozen outliers. The peak luminosities for $q \sim 2$ BNS are the least accurately modelled (4 BNS configurations). The figure shows that the largest peak luminosities are reached by BNS with $\kappa^T_2 \lesssim 80$, which correspond to high-mass binaries and prompt collapse mergers. These events can reach peak luminosities of $\sim 10^{55}$ erg s$^{-1}$, about an order of magnitude less than binary black holes (of any mass). BNS mass ratios $q \gtrsim 1.5$ can lower $L_{\rm peak}$ by about an order of magnitude, while spins of magnitude $\sim 0.1$ do not significantly affect $L_{\rm peak}$. We stress that the BNS with the largest peak luminosity do not correspond in general to the BNS that radiate the largest amount of energy, because postmerger emission can radiate further energy [149,150] (see also Fig. 5). We can set an upper limit to the total radiated GW energy from our dataset, obtaining $E^{\rm tot}_{\rm GW} \lesssim 0.676\, M_\odot c^2$. Finally, we illustrate the use of CoRe data to model postmerger GWs by discussing a fit of the postmerger spectrum's peak frequency $f_2$, e.g. [28,29,33,38]. This peak frequency is a robust feature found in all NR simulations. Direct GW inference on $f_2$ can be used to constrain NS properties [32,34,154,155,190,191,194]. The peak frequency also enters as one of the central parameters in postmerger waveform models that will be employed in the future for more sophisticated matched-filter analyses [195]. Following Ref. [38], we employ again Eq. (23) to fit the mass-rescaled $M f_2$. The best fitting coefficients are presented in Table 2 and have a $\chi^2 \sim 0.07$. Figure 13 shows $M f_2$ as a function of $\kappa^T_2$ for selected values of mass ratio and spin. The $1\sigma$ error is below 4%; this precision is in principle sufficient for informative measurements of the NS mass-radius sequence. For example, using the EOS-insensitive relation between $f_2$ and the maximum density of an equilibrium non-rotating NS put forward in [155], it would be possible to determine the maximum density of an equilibrium non-rotating NS to $\sim 15$% and the maximum mass $M^{\rm TOV}_{\rm max}$ to $\sim 12$% with a single signal at the detectability threshold.
As a further illustration, we calibrate the EOS-insensitive relations between the mass-rescaled peak frequency $M f_2$ and the NS radius [38,196,197], of the form $M f_2 = Q(R_X)$, where $R_X$ is the equilibrium radius corresponding to a NS with mass $X = 1.4, 1.8\, M_\odot$. Figure 14 shows $M f_2$ as a function of $R_{1.4}$, $R_{1.8}$ and $R_{1.4}/R_{1.8}$. Best fit parameters are given in Table 3. Other features of the postmerger spectrum can be quantified in a similar way. We release reduced postmerger data and analysis scripts on the CoRe website.
Conclusion
We presented a new set of BNS simulations for the second release of the CoRe database, expanding it to 254 different binary configurations covering a wide parameter space, including configurations compatible with the event GW190425 [66,68]. Simulations were performed with a large number of EOSs, including several microphysical models [59,63,66]. Some simulations include the effects of neutrinos, either through the leakage scheme [106,133,198], or using the M0 approach [54,106]. Turbulent viscosity is included in some models using the GRLES formalism [61,108]. Finally, we include simulations produced using a new hybrid numerical flux scheme, EFL, introduced in [65], which shows fourth order convergence and smaller phase errors than previous simulations using WENO schemes in BAM. We described in detail the methodology we use to assess the overall accuracy of the waveforms and presented results for all the configurations in the database. The CoRe database waveforms have typical unfaithfulness of less than $10^{-2}$, and some have unfaithfulness of less than $10^{-4}$, so they are suitable for precision waveform modelling applications. However, to ensure the convergence and usability of the simulations, more extensive analysis is needed. As an example, we showed a full analysis of one of our simulations, BAM:0066, which exhibits clear second order convergence and passes several accuracy tests.
Finally, as a first application of the CoRe database, we fitted phenomenological formulas for the merger amplitude, frequency, and GW luminosity. These fits are able to model the CoRe data with high accuracy (<5% for the merger amplitude and frequency, and ~17% for the peak luminosity). We also recalibrated various quasi-universal relations between the post-merger peak frequency and the binary parameters, again finding deviations from the universal relations of only a few percent. These were used in [38] to construct the first complete inspiral, merger, and post-merger waveform model for BNS.
We release the CoRe database to the community with the hope that it will enable future discoveries in GW astronomy. Potential applications include the development of new waveform models, the validation of data analysis pipelines and new numerical relativity codes, and the planning of future GW experiments. In the future, we plan to release new simulation data on a rolling basis, with data releases taking place at the publication time of the corresponding paper.
Simplified Direct Water Footprint Model to Support Urban Water Management
Water resources conservation in the face of urban growth is an increasing challenge for European policy makers. Water footprint (WF) is one of the methods to address this challenge. The objective of this study was to develop a simplified model to assess the WF of direct domestic and non-domestic water use within an urban area and to demonstrate its effectiveness in supporting new urban water management strategies and solutions. The new model was tested on three Central European urban areas with different characteristics, i.e., Wroclaw (Poland), Innsbruck (Austria), and Vicenza (Italy). The obtained WFs varied from 291 dm³/(day·capita) in Wroclaw and 551 dm³/(day·capita) in Vicenza to 714 dm³/(day·capita) in Innsbruck. In addition, the WF obtained with the proposed model for the city of Vicenza was compared with a more complex approach. The results proved the model to be robust, providing reasonable results using a small amount of data.
Introduction
Europe is one of the most urbanized continents in the world. More than two-thirds of the European population lives in urban areas and this share continues to grow [1]. Besides urbanization, climate change as well as the demand for goods and services may influence water demand. In different cities, this impact will be different. Part of the water is delivered by public water supply (public or private systems with public access). Although the share of household water demand in total water abstraction can be relatively small, it is nevertheless often the focus of public interest, as it comprises the water volumes that are directly used by the population. The way in which water is managed in cities has consequences both for city dwellers and for the wider community and hence dictates water availability (in both quantity and quality) for other users. It thus also influences the environmental, economic, and social development of regions and countries. For those reasons sustainable, efficient and equitable management of water in cities has never been as important as in today's world. Looking forward to the next few decades, it seems likely that there will be a significant expansion in urban water infrastructure. Additionally, urban development, especially the sealing of surfaces and land use change, puts pressure on urban infrastructure and the quality of water discharged to the water bodies [2]. To validate the proposed model, it is compared with the results of Manzardo et al. [11]. The authors of this paper propose to use the WF to solely investigate the direct water use in urban areas because it is the one directly managed by the local municipality. This paper is therefore a new contribution in applying WF. The discussion elaborates the secondary objective of this paper, which is the demonstration of the model's usefulness in supporting the definition of urban water management strategies and solutions. Differences between the proposed model and the one of Manzardo et al. [11] are further clarified in the discussion section.
Materials and Methods
The scope of this study focuses on the WF of direct water use, which, after Hoekstra et al. [12], refers to the freshwater consumption and pollution associated with water use within city boundaries, considering only the urban area, defined as locations with over 50% constructed surfaces [11]. Agricultural use within the city is excluded in this research. Considering that WF assessment in complex environments, such as urban areas, can be very challenging and resource consuming (time and money), a simplified method is needed. The novelty of the proposed approach is not in the method itself but in improving the applicability of the method in this specific context.
The simplified approach developed for urban areas and its application in urban water management is presented in Figure 1. The whole process starts with dividing the urban area into generic categories such as impermeable area, permeable area, and water area. These categories can be subdivided further into surfaces characterized by similar water use patterns; e.g., the impermeable area can be represented by paved area, roof surface, and transportation area, while the permeable area can consist of public and private green surfaces. The number of surfaces will depend on the local representation of the urban area used by a municipality and the objectives set up in urban water management. During the data acquisition phase, parameters characterizing all surfaces (area, evaporation coefficients), as well as water inflows and outflows (including mean annual precipitation), wastewater discharge and the concentrations of pollutants are collected. The sources of the data can be found in municipalities, local water companies, legal regulations, and publicly accessible databases.
The calculation phase requires performing a water balance for the urban area. In order to reduce the calculation effort it is recommended to use simple models [21]. The following section describes how the green, blue, and grey components of WF are calculated for the urban area. This phase is complementary to the assignment of water quantity and quality in the urban area.
The calculated WF is evaluated during the analysis-of-results phase, and finally its findings are used to support creating or modifying urban water management strategies and plans. They also allow selecting activities aimed at reducing the urban WF, which could stimulate policy development and create sustainable urban systems.
Urban Water Footprint Accounting Formulation
The green water footprint (WF_green) refers to the total rainwater evapotranspiration (from fields and plantations) plus the water incorporated into the harvested crop or wood [12]. In the urban environment, Manzardo et al. [11] proposed to limit the accounting of WF_green to green areas, such as private (gardens) or recreational land (lawns, public parks). According to this definition, WF_green depends directly on the area with permeable surface covered by private and public vegetation

WF_green = PREC × (K_pubg × A_pubg + K_privg × A_privg)    (1)

where the coefficients K_pubg and K_privg represent the fractions of precipitation PREC (mm/a) which evapotranspire from the public green area A_pubg (m²) and the private green area A_privg (m²), respectively. As the urban area does not include agricultural land, it is assumed that water used for agricultural activities, which might be present within city boundaries, is excluded from calculating the urban WF.
The blue water footprint (WF_blue) is the consumption of blue water resources, i.e., surface and groundwater withdrawn and not returned to the same water body [12]. According to this definition and its adaptation by Manzardo et al. [11], it is proposed that WF_blue in an urban area accounts for the part of rainwater that evaporates from impervious surfaces Q_imperm (such as roads and car parks) (m³/a) and from water surfaces (rivers, ponds) Q_water (m³/a), water that is lost due to heating and cooling processes Q_therm (heating plants) (m³/a), water exported outside the city boundary Q_exp (m³/a), loss of supply water during transportation Q_tl (m³/a), and water consumed by the citizens and services and stored for long-term usage Q_del (m³/a)

WF_blue = Q_imperm + Q_water + Q_therm + Q_exp + Q_tl + Q_del    (2)

If the impermeable area is further subdivided into transportation area A_transp (m²), roof area A_roof (m²), and paved area A_paved (m²), the volume of water evaporated from impervious surfaces can be calculated using the following formula

Q_imperm = PREC × (K_transp × A_transp + K_roof × A_roof + K_paved × A_paved)    (3)

where K_transp, K_roof, and K_paved (unitless) represent the fractions of precipitation PREC (mm/a) which evaporate from transportation, roof, and paved surfaces, respectively. The volume of water which evaporates from the area covered by water A_water (m²) is expressed as

Q_water = PREC × K_water × A_water    (4)

where K_water (unitless) is the fraction of precipitation which evaporates from water surfaces. The volume of water lost due to heating and cooling processes is assessed based on an input-output water balance

Q_therm = Q_cool − Q_heat    (5)

where Q_cool is the volume of water withdrawn from the water body (m³/a) by a thermal power plant and Q_heat is the volume of water which is discharged after use to the water body (m³/a). The most challenging term to assess in Equation (2) is the volume of water consumed and stored Q_del (m³/a). To avoid laborious activities in collecting data about citizens' water usage, a simple water balance of an urban catchment can be applied [22]

Q_del = (PREC × A_urban + Q_imp) − (Q_evap + Q_runoff + Q_waste)    (6)

where A_urban is the total urban area in the city, Q_imp the volume of water imported to the city (m³/a), Q_evap the total volume of water evaporated (m³/a), Q_runoff the loss of water due to surface runoff (m³/a), and Q_waste the wastewater discharge (m³/a). The grey water footprint (WF_grey) is defined as the volume of freshwater that is required to assimilate the load of pollutants discharged into a receiving water body, based on natural background concentrations and existing ambient water quality standards [12]. In the urban environment, the pollution of water can be of chemical or thermal nature.
In the case of pollution by chemicals, the WF_grey is calculated as

WF_grey,chem = (c_sewage × Q_sewage − c_act × Q_abstr) / (c_max − c_nat)    (7)

where c_sewage is the concentration of a pollutant in treated sewage discharged into the receiving water body (g/m³), Q_sewage the volume of sewage discharged into the receiving water body by the sewage treatment plant (m³/a), c_act the actual concentration of the pollutant in water abstracted for consumption (g/m³), Q_abstr the volume of abstraction by the water treatment plant (m³/a), c_max the ambient water quality standard for the pollutant (the maximum acceptable concentration) (g/m³), and c_nat the natural concentration of the pollutant in the receiving water body (g/m³). In the case of separate sewage systems, WF_grey should be calculated separately for the treated and untreated wastewater. When water is used for cooling (e.g., in thermal power plants), the processed water is discharged into the receiving water body, causing thermal pollution and producing a WF_grey which can be calculated as

WF_grey,therm = (T_heat − T_act) × Q_heat / (T_max − T_nat)    (8)

where T_heat is the temperature of the heated water discharged into the receiving water body (°C), T_act the actual temperature of water in the receiving water body (°C), T_max the maximum acceptable temperature in the receiving water body (°C), T_nat the natural temperature in the receiving water body (°C), and Q_heat the volume of water which was discharged after use (m³/a). The final value of WF_grey is the maximum of the chemical and thermal WFs

WF_grey = max(WF_grey,chem, WF_grey,therm)    (9)

Equation (9) is valid if the water used for heating and cooling processes is released to the same water body as the water contaminated by chemical pollution. If the thermal and chemical pollution loads are discharged to different water bodies, the final value of WF_grey should be the sum of WF_grey,chem and WF_grey,therm.
The total value of the urban WF is the sum of the green, blue, and grey WFs

WF_urban = WF_green + WF_blue + WF_grey    (10)
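Because Equations (1)-(10) are simple algebra, the whole accounting scheme fits in a short script. The sketch below is our illustration of the model (variable names mirror the equations; all values a caller would supply are placeholders, not data for any of the three cities studied here). Note that a consistent unit conversion (1 mm over 1 m² equals 10⁻³ m³) is made explicit in the code even though the paper leaves it implicit.

```python
# Simplified urban WF accounting, Eqs. (1)-(10). Areas in m^2, PREC in mm/a,
# volumes in m^3/a, concentrations in g/m^3, temperatures in degrees C.
MM = 1e-3  # mm -> m, so PREC*MM*A gives m^3/a

def wf_green(PREC, K_pubg, A_pubg, K_privg, A_privg):
    return PREC * MM * (K_pubg * A_pubg + K_privg * A_privg)               # Eq. (1)

def wf_blue(PREC, K, A, Q_cool, Q_heat, Q_exp, Q_tl, Q_imp,
            Q_evap, Q_runoff, Q_waste, A_urban):
    Q_imperm = PREC * MM * (K["transp"] * A["transp"]
                            + K["roof"] * A["roof"]
                            + K["paved"] * A["paved"])                     # Eq. (3)
    Q_water = PREC * MM * K["water"] * A["water"]                          # Eq. (4)
    Q_therm = Q_cool - Q_heat                                              # Eq. (5)
    Q_del = (PREC * MM * A_urban + Q_imp) - (Q_evap + Q_runoff + Q_waste)  # Eq. (6)
    return Q_imperm + Q_water + Q_therm + Q_exp + Q_tl + Q_del             # Eq. (2)

def wf_grey(c_sewage, Q_sewage, c_act, Q_abstr, c_max, c_nat,
            T_heat, T_act, T_max, T_nat, Q_heat, same_water_body=True):
    chem = (c_sewage * Q_sewage - c_act * Q_abstr) / (c_max - c_nat)       # Eq. (7)
    therm = (T_heat - T_act) * Q_heat / (T_max - T_nat)                    # Eq. (8)
    return max(chem, therm) if same_water_body else chem + therm           # Eq. (9)

# Total urban WF, Eq. (10):
# WF_urban = wf_green(...) + wf_blue(...) + wf_grey(...)
```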
Study Area Description
The assessment of urban water footprint was applied to three central European cities: Wroclaw (Poland), Vicenza (Italy), and Innsbruck (Austria). The cities assessed represent a diversity of geographical, climatic, and infrastructural aspects, as presented in Table 1. The data on demographics, area, hydrology, infrastructure, and water usage were collected from the municipal authorities, sewage and water companies, legal regulations, publicly accessible databases, and literature (for details see the footer of Table 1). Wroclaw is situated in the southwestern part of Poland on the Lower Silesian Lowlands. The city has two main water treatment plants in which surface and infiltration water, originating from the Sudetes mountains, is treated. The water supply system in Wroclaw connects 99% of inhabitants and is characterized by great variance in age and material. The waste water is transported through the sewage system to one main mechanical-biological treatment plant. The sewage system in Wroclaw collects sewage from 98% of the population and comprises two system types: combined and separate (sanitary and storm water) systems. The urbanized area in Wroclaw (54%), especially in the city center, and large parts of the industrial area are mostly impermeable, hence most of the rainwater enters the sewage system. Related to the increase in sealed surfaces is the lack of natural water retention for drier periods. Another important factor influencing the operation of water companies is water loss within the network, which amounts to over 10%.
Vicenza is located in the northeastern part of Italy, on the Veneto Plain. Water from 18 artesian wells is treated in five plants, while one-third of the water consumed in the city is withdrawn from around 700 private wells. Currently, 97% of the population is connected to the water system. The waste water is treated in three plants. Around 92% of the population is connected to the sewerage system, which consists of combined and separate systems; the latter is characteristic of newer housing areas [29]. Based on data from the last two decades, the annual rainfall in Vicenza is decreasing, especially in the winter season, which is characteristic of the whole Veneto region [26]. The yearly mean temperature is increasing, which also causes an increase in evaporation, leading to a reduction in water reserves. At the same time, one of the main environmental issues in Vicenza is flooding, which has happened a few times in recent years as a consequence of intensive rainfalls in autumn. Another reason for flooding is the overbuilt area and thus reduced ground permeability, limiting water absorption. Even though the old water pipes are renovated systematically, the water losses reach up to 25%.
Innsbruck is located in Western Austria, surrounded by mountain ranges in the north and south. Only 32% (the southern part) of the city is available for permanent settlement. Due to the alpine orography of the region, rainfall varies heavily in space, even within the municipality. The flow regime is influenced by snow and glacier melt in upstream regions and high precipitation during summer. The variations throughout the year and over the years according to the meteorological conditions are significant. Additionally, there is an influence of hydropower reservoirs [30]. As water flows rapidly through Innsbruck, the groundwater interaction is minimal [31]. Water intake to the distribution network relies mainly on a single spring in the mountains north of the city. All buildings are connected to the water supply (100%) and combined sewerage (99%) systems. The major constraint influencing population density is topography, with mountain ranges north and south of the city. Both heavy rainfall and increasing temperatures cause accelerated glacier melting, leading to a higher risk of flooding [32]. The water losses in the network are relatively small (below 10% is assumed), which might be due to the fact that about 1% of the network is rehabilitated each year.
Results
The data presented in Table 1 have been used to calculate the three WF components, WF_blue, WF_green, and WF_grey, for three cities in central Europe. Calculations for all cities were made on the basis of data from the year 2014, except for precipitation, for which a ten-year average annual value was used. The total WF_urban is obviously proportional to the urban area and the number of inhabitants. In order to compare cities of different size, it is proposed to express the WF per unit of area and per capita. Therefore, three different units (Mm³/year, m³/(year·ha), dm³/(day·capita)) were used to analyze the obtained results, as illustrated in Figure 2.
Relating the WF to the areas of Wroclaw, Innsbruck, and Vicenza, which are 293 km², 105 km², and 80 km², respectively, it appears that the largest total WF of 6665 m³/(year·ha) is reached for Innsbruck (Figure 2b). A very close value of 6339 m³/(year·ha) was obtained for Vicenza, and a relatively small value of 4240 m³/(year·ha) for Wroclaw. These results imply that the total urban WF is inversely proportional to the population density. In the cases of Vicenza and Wroclaw, WF blue is the major component of WF urban , which is the result of a high share of impermeable area in the urbanized area, at 61% and 69%, respectively. In Innsbruck, WF green dominates over WF blue , which correlates with the ratio of permeable (green) area, 82%, to impermeable area, 16%. However, the grey WF also strongly influences WF urban , which can be explained by the very high dilution factor of 0.865 reported for Innsbruck, while Wroclaw and Vicenza have 0.51 and 0.32, respectively. In general, a high WF green should be beneficial for a city, as it reflects a large percentage of permeable area and hence a high capacity for rainwater retention.
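To make the unit handling concrete, the conversion from a total annual WF to the per-hectare and per-capita figures used in Figure 2 can be sketched as follows. This is a minimal Python sketch; the function names are ours, and the worked example uses an illustrative total of 24 Mm³/year for Vicenza (the sum of the component values quoted above), not a value taken from the paper's tables.

```python
def wf_per_hectare(total_wf_Mm3_per_year: float, area_km2: float) -> float:
    """Convert a total WF in Mm^3/year to m^3/(year.ha)."""
    m3_per_year = total_wf_Mm3_per_year * 1e6  # Mm^3 -> m^3
    hectares = area_km2 * 100                  # km^2 -> ha
    return m3_per_year / hectares

def wf_per_capita(total_wf_Mm3_per_year: float, population: int) -> float:
    """Convert a total WF in Mm^3/year to dm^3/(day.capita)."""
    dm3_per_year = total_wf_Mm3_per_year * 1e9  # Mm^3 -> dm^3 (litres)
    return dm3_per_year / population / 365

# Illustrative example with the area and population of Vicenza from the text:
print(wf_per_hectare(24.0, 80.0))    # 3000.0 m^3/(year.ha)
print(wf_per_capita(24.0, 115_000))  # ~571.7 dm^3/(day.capita)
```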
The comparison of WF urban expressed per day and per capita (Figure 2c) is especially relevant for the blue and grey WFs, which are determined to a large extent by the number of inhabitants, who drive the volumes of water used and contaminated. The results show that even though the number of citizens is greatest in Wroclaw (632,000), the WF blue per capita is the smallest (158 dm³/d·ca); the second value (188 dm³/d·ca) is reached in Innsbruck, which is five times less populated (125,000), and the greatest value is reached in Vicenza (233 dm³/d·ca), with only 115,000 citizens. The significantly high value for Vicenza results from high groundwater withdrawal from private wells and high water losses in the public water distribution system. Looking at WF green , the highest value was calculated for Innsbruck (223 dm³/d·ca), which reflects the highest percentage of permeable green area in the city (ca. 82%) and the smallest population density (1190 inhabitants/km²). The value for Vicenza is about half that of Innsbruck (113 dm³/d·ca), and the value for Wroclaw is approximately seven times smaller (34 dm³/d·ca). This is due to the smallest share of green area in the urbanized area (ca. 25%) and the highest population density (2157 inhabitants/km²). A similar relationship among the cities is observed for WF grey , which is also highest in Innsbruck (303 dm³/d·ca), while the value for Wroclaw (99 dm³/d·ca) is approximately three times lower. Given the volumes of produced sewage and the numbers of inhabitants, the WF grey values for Vicenza and Innsbruck would be expected to be comparable. In practice, the value for Vicenza (205 dm³/d·ca) is about one-third lower than that for Innsbruck. For a better understanding of this phenomenon, we need to take a close look at Equation (7) for the WF grey calculation. The dilution factors, which multiply the volumes of produced sewage, are 0.865, 0.507, and 0.377 for Innsbruck, Wroclaw, and Vicenza, respectively. The highest dilution factor, for Innsbruck, determines the highest value of WF grey per capita. Even though the dilution factor for Vicenza is almost 26% smaller than that for Wroclaw, the WF grey per capita is more than twice as high. This can be explained by the fact that the number of inhabitants is five times higher in Wroclaw than in Vicenza, while the volume of waste water produced in Wroclaw is only higher by a factor of 2.5. It is also worth mentioning that, in Vicenza, the nitrogen concentration in the treated effluent is three times lower than the legal limit (30 mg/L), while in Wroclaw (and Innsbruck) the nitrogen concentration is only 10% below the legal limit.
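Equation (7) itself is not reproduced in this excerpt, but the text states that the grey WF is obtained by multiplying the volume of produced sewage by the city-specific dilution factor. Under that simplified reading, the per-capita comparison above can be sketched as follows; the sewage volumes are back-calculated from the per-capita figures quoted in the text and are illustrative only.

```python
def wf_grey_per_capita(sewage_Mm3_per_year: float, dilution_factor: float,
                       population: int) -> float:
    """Per-capita grey WF in dm^3/(day.capita), assuming the simplified
    reading of Equation (7): WF_grey = dilution factor x sewage volume."""
    wf_grey_dm3 = sewage_Mm3_per_year * 1e9 * dilution_factor  # Mm^3 -> dm^3
    return wf_grey_dm3 / population / 365

cities = {
    # name: (sewage volume [Mm^3/year], dilution factor, population)
    "Innsbruck": (16.0, 0.865, 125_000),
    "Wroclaw":   (45.0, 0.507, 632_000),
    "Vicenza":   (22.8, 0.377, 115_000),
}
for name, (sewage, df, pop) in cities.items():
    print(f"{name}: {wf_grey_per_capita(sewage, df, pop):.0f} dm^3/(day.capita)")
# -> roughly 303, 99 and 205, matching the values discussed above
```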
To see what contributes to the specific WF urban values in each city, the individual components are shown in Figure 3. The value of WF blue in Wroclaw is determined mostly by water usage (16.7% of total WF urban ), losses in the distribution system (10.1%), and evaporation from the paved area (9.3%). Water loss for heat production and cooling, as well as evaporation from the transportation area, also contribute significantly (6.9% and 6.4%, respectively). In Innsbruck, WF blue is mostly associated with water usage (21.3% of total WF urban ), with the other components being insignificant. In Vicenza, water loss from the water distribution system (18.5%) dominates the WF blue value, with evaporation from the paved area and water usage giving similar shares (9.2% and 9.0%, respectively). The high share of public and private green areas in Innsbruck leads to a high value of water evaporated from the permeable area of the city, which accounts for 31.2% of total WF urban . It is almost three times higher than in Wroclaw and 1.5 times higher than in Vicenza. The sewage discharged into the receiving body of the sewage treatment plant results in a significant share of WF grey in WF urban in all three cities, of which Innsbruck has the highest value (42.5%). The shares in Vicenza and Wroclaw are somewhat lower, at 37.2% and 34.1%, respectively. It has to be noted that climatic conditions (e.g., precipitation, average yearly/monthly temperature) influence the WF results. This is of course particularly relevant for warmer climates such as that of Vicenza.
Figure 3. The WF urban data for the three cities with particular components specified: (1) evaporation from transportation area; (2) evaporation from roof surface; (3) evaporation from paved area; (4) water losses at transport; (5) water exported to another basin; (6) water used and stored; (7) water loss for heat production and cooling; (8) evapotranspiration from public green area; (9) evapotranspiration from private green area; (10) treated sewage.
The simplified approach described in this paper has been compared with the more complex approach introduced by Manzardo et al. [11]. That approach assumes that the urban area is divided into basic modules with consistent characteristics, which consist of building blocks with similar functions, needs, and behavior. In the accounting phase, a representative sample of building blocks for each module is identified, relevant quantitative and qualitative water data are collected, and the average blue, green, and grey WFs are calculated for each module; these are multiplied by the number of building blocks, providing the total WF. The flaw of this methodology is that it relies on building blocks, for each of which many parameters need to be provided to formulate the water mass balance. This has been overcome in the simplified approach by using surfaces to represent the urban area. This requires less data, as the water mass balance is performed for the whole city represented by homogeneous surfaces, and the necessary data are easily available from the municipality and the water and sewage companies. The two approaches have been applied to the city of Vicenza, and the results of the WF accounting are presented in Table 2. It is worth noticing that the simplified approach yields very close results for the blue and grey WFs, which are overestimated by a few percent compared to the modular approach. The highest difference, of 27.6%, was obtained for WF green , which might be the result of treating private green area differently: in the modular approach, private green area is included in the building blocks, whereas in the simplified approach it is a separate surface. Because the green WF was underestimated, the total WF urban differs by only 2.8%. These results indicate that the new simplified approach is robust and provides reliable results.
Discussion
Looking at the results, the question arises: which city does a good job in water management? Choosing the one with the lowest water footprint might seem an unequivocal answer. Of the three cities analyzed, Vicenza has the lowest WF urban expressed as total volume of water per year. If we relate the total WF urban to the number of inhabitants or to the urban area, it turns out that Wroclaw has the lowest WF per year per capita or per hectare. The answer becomes even more difficult if we consider the three components of the WF: green, blue, and grey. This is the merit of the WF indicator, as it enables different aspects of water management to be analyzed. In practice, urban water footprint results may be useful for decision-makers who influence the investments and policies associated with water consumption, usage, and treatment. It turns out that an improvement in the efficiency of water use of 40% or more is possible by implementing available technological solutions [33]. Therefore, it is important to raise the awareness of decision-makers about water scarcity and to motivate them to choose environmentally friendly and sustainable solutions. In this context, the water footprint indicator can be used as a measure to improve communication.
This paper shows that each urban area is very specific regarding climatic and hydrogeological conditions, and that each city has the potential to improve its water and sewage management. In the cases of Vicenza and Wroclaw, WF blue is the major component of WF urban . This may lead to a potential water scarcity issue in the future. Occasional local droughts have been noticed in Vicenza and Wroclaw, leading to the withering of plants and to water shortages during hot summers. Climate observations and prognoses indicate that water resources might be threatened at some point in the future due to the temperature increase of recent decades, the lengthening of antecedent dry-weather periods, and the increased frequency and intensity of heavy rainfall events, both in Wroclaw [34] and in Vicenza [26].
The efficiency of water distribution system management is also measured by the loss of water and the associated failures of the water system. High and rising water losses increase WF blue and indicate inefficient water supply management, inadequate strategic planning, or poor technical condition of the network. The results show that in Vicenza, losses of supply water during transport (18.5%) dominate the WF blue value. Losses are also relatively high in Wroclaw (10.1% of WF blue ). Investing in improvement of the water supply system, e.g., by general rehabilitation of aging water infrastructure, replacement of inefficient components such as valves, pumps, pipes, and meters, and monitoring of domestic water use and leakage so that leaks can be repaired rapidly, can reduce direct urban water use, which in turn will reduce WF blue .
The green water footprint (WF green ) is a good measure for assessing the natural retention capacity of an urban area. In Wroclaw and Vicenza, the share of permeable area is relatively small (24% and 35%, respectively), unlike in Innsbruck, where it is 82%. Based on the obtained results, it is recommended, especially for Wroclaw and Vicenza, to incorporate more permeable and green spaces into the urban landscape. This can be done by building houses with green roofs, by giving car parks and pavements (especially walkways and squares) permeable surfaces, and by installing rainwater harvesting facilities, as described in Manzardo et al. [35]. Constructed wetlands, which are artificially created wetland ecosystems used to treat, e.g., collected rainwater or wastewater, similarly to ponds and creeks, are also a possible solution for enhancing ecological and aesthetic value while enabling water retention for reuse in irrigation. The idea of linking water bodies and other open green spaces in a "blue-green infrastructure" is now recognized as part of city planning strategy [36,37]. Local spatial management plans determine the indicators, forms, and functions of development, primarily the details of land use (including areas excluded from construction) and the required percentage of biologically active surfaces, providing opportunities to influence water management and mitigate the effects of flooding. Based on the results of WF green calculations for urban areas, local governance can modify land use patterns and thus affect water quantity and quality. Current trends in urban planning should highlight the need to shape compact and user-friendly cities while emphasizing the wise use of natural resources. This is evidenced by the increasingly frequent implementation of concepts based on ecological trends such as sustainable urban drainage systems, water-sensitive urban design, and low-impact development. Rainwater harvesting and retention are especially needed during heavy rainfall and snowmelt, when the sewage system is overloaded. This would help to minimize the flooding problems noticed in Wroclaw and Vicenza, and the inundation of streets and building basements, especially after heavy rainfall. Such changes require promotion and might also be stimulated by incentives and appropriate local regulations.
From an environmental point of view, it would be very helpful if not only quantitative but also qualitative requirements were considered. The highest total WF grey was in Wroclaw, then in Innsbruck, and finally in Vicenza. However, when converted to values per unit of area and per capita, these relationships change: the highest WF grey per capita was found for Innsbruck, Vicenza is at about two-thirds of the Innsbruck value, and Wroclaw shows an approximately three-times smaller value than Innsbruck. Water quality changes can be significantly affected by local governance structures, since local authorities largely influence the behavior of inhabitants, private agents including developers, businesses, and many other stakeholders. In urban areas with a larger WF grey value, the water and sewage companies should concentrate on process changes and investments that improve contaminant removal from sewage (e.g., a change of the operational scheme at the treatment stage). Reduction of the rainwater entering the sewage system will also reduce the volume of treated sewage and thus WF grey . The reduction in the treated effluent will limit the human influence on the receiving water body and keep the river condition closer to natural. Communities further downstream may benefit especially, as will the ecosystem in general. Raising awareness, by improving people's knowledge of water use in order to reduce wastewater generation and to facilitate the return of water unaffected by our use to the environment, is a further step toward improving the grey water footprint in urban areas.
From a methodological perspective, this paper presents a direct water footprint accounting method at the urban level. As such, it includes water balances at the local level to support water management, without addressing the consequences of water use in a more comprehensive water footprint sustainability assessment [12]. To better support informed decisions, recent scientific developments recommend adopting additional assessments, such as water scarcity or availability assessment [2,12,38-40]. For example, Bayart et al. [38] presented the water impact index, which allows the integration of consumptive and degradative water use of a process unit; the results are then characterized using a water scarcity index such as that of Pfister et al. [41]. Moreover, Berger et al. [39] presented the WAVE model, considering atmospheric evaporation recycling and the risk of freshwater depletion. Recently, Boulay et al. [40] presented the AWARE method, resulting from a consensus process led by the UNEP-SETAC Life Cycle Initiative. The outcomes of the accounting method presented in this paper can support the application of such methods by providing and organizing urban inventory data in a simplified manner compared to previous experiences at the urban level [13-17].
With reference to the design of the WF accounting indicator, the assumptions of the proposed method are based on the work of Manzardo et al. [11]. In the specific case of blue water, it is important to note that the treatment of rainwater evaporation is actively debated in the literature [39]. Therefore, the formulation of the blue WF according to Equations (1) and (2) could be revised once consensus on this issue is found.
Conclusions
In this paper, a simplified model for water footprint accounting of direct water use in urban areas was presented to support the definition of urban water management strategies and solutions. It was applied to three central European urban areas, i.e., Wroclaw (Poland), Innsbruck (Austria), and Vicenza (Italy). The three cities under study represent a diversity of geographical, climatic, and infrastructural aspects. This is directly reflected in the three WF components: WF blue , WF green , and WF grey . In addition, the proposed model was compared with the modular approach applied to the city of Vicenza [11] and proved to be robust in providing reasonable results. The results obtained for the three cities could form the basis for drawing up water management plans or strategies. For example, to assess the efficiency of water use, one should look at the blue WF per capita. Here, Vicenza shows the highest value, which is a consequence of uncontrolled water intake from private wells and a large share of impermeable area. The green WF is a good measure of rainwater consumption, and a low value indicates the vulnerability of an urban area to floods, as is the case in Wroclaw, which has the smallest value per hectare. WF grey can help to assess the impact of cities on the water environment. The highest total value, observed in Wroclaw, is mainly due to the largest city area and population. Even though the value is thus justified, it still results in the highest contamination of the receiving water body by the discharged treated wastewater in comparison with the other cities.
Though the WF depends directly on location and time, the results obtained suggest that Vicenza and Wroclaw need the most modifications in the area of water management and infrastructure, which should lead to the restoration of the natural water cycle and the formation of water reserves in the cities. Potential measures identified to improve local water management in the analyzed cities include reduction of leakage from the drinking water network, introduction of water-saving technologies, local rainwater management, education of citizens on water saving, and reduction of soil sealing in the cities.
The experience of the presented cities shows that each urban area is very specific regarding climatic and hydrogeological conditions (which cannot be changed) and that each city has great potential to improve its water and waste water management. The WF tool, developed and adapted to specific city needs, could be useful for evaluating the current state of water management of a city, a city area, or even a single building. On the other hand, the tool could be used to compare, favor, and possibly also subsidize the best solutions proposed by city planners, developers, and other stakeholders responsible for water management in the city. The success of using the WF in water management will depend on its widespread application. The proposed simplified approach is a small contribution towards this goal.
Considering the outcomes of this study, future research can be planned as follows: (1) the development of a simplified water footprint sustainability assessment method that also takes into consideration local water scarcity and availability, as well as social and economic aspects [12,38]; (2) the application and possible adaptation of the proposed method at different levels, such as the regional one [42].
Recent Findings on Platelet Activation, vWF Multimers and Other Thrombotic Biomarkers Associated with Critical COVID-19
Mortality in patients with COVID-19 increases in those admitted to the ICU. Activation of the coagulation system is associated with worse disease outcomes. The aim of this study was to evaluate platelet activation and thrombotic biomarkers in hospitalized patients with COVID-19 during the second and third infection waves of the pandemic in 2021, following a previous report that included patients from the first wave. Sixty-five patients were recruited and classified according to disease outcome; 10 healthy donors were included as a control group. Among prothrombotic biomarkers, t-PA (p < .0001), PAI-1 (p = .0032) and D dimer (p = .0011) concentrations were higher in patients who developed critical COVID-19. We also found platelet activation via αIIbβIII expression (p < .0001) and a higher presence of vWF-HMWM in severe COVID-19 (p < .0001). Several prothrombotic biomarkers are increased from hospital admission onwards in patients who later present a worse disease outcome (ICU admission/death); among these, platelet activation, increased vWF plasma concentration and the presence of HMWM seem to be of special interest. New studies on the predictive value of thrombotic biomarkers are needed as SARS-CoV-2 variants continue to emerge.
Introduction
Due to its rapid spread around the world, coronavirus disease 2019 (COVID-19) was declared a pandemic in March 2020 by the World Health Organization (WHO). 1 It is an emerging systemic disease caused by the SARS-CoV-2 virus, the newest member of the Coronaviridae family. 2,3 In the last year it has been shown that the integrity of the endothelium is severely affected during COVID-19, and the condition has therefore now been recognized as a vascular disease. While most infected patients present only mild respiratory symptoms, about 19% of COVID-19 patients develop pneumonic symptoms with rapid evolution to Acute Respiratory Distress Syndrome (ARDS) 4 and present a coagulopathy that frequently results in thrombotic events leading to multiorgan failure, septic shock and death. 5 In most cases, the evolution of this infection towards severity is closely linked to a cytokine-mediated inflammatory storm that results in the activation of coagulation mechanisms, leading to the formation of microthrombi in the pulmonary vasculature, frequently found in autopsies of deceased COVID-19 patients. 6 The hypercoagulable state in COVID-19 appears to be a consequence of platelet activation and endothelial cell damage caused by this cytokine burst, thereby producing a strong alteration of the hemostatic system. 7 Since the onset of the pandemic in 2020, thrombotic alterations have been described, ranging from thrombotic microangiopathy to pulmonary thromboembolism. 8 Although some prothrombotic biomarkers were recognized early in the pandemic, many new biomarkers of thrombotic risk have since emerged.
Several studies on prothrombotic biomarkers were performed at the beginning of the pandemic, during 2020 and early 2021. As new variants of the SARS-CoV-2 virus emerge and the pandemic evolves, the question arises whether the new variants continue to behave in the same way from a prothrombotic point of view. The aim of this study was to evaluate platelet activation and thrombotic biomarkers in hospitalized patients with COVID-19 during the second and third infection waves in 2021 and to correlate the results with the severity of the disease.
Patients
A total of 65 patients with a diagnosis of COVID-19, admitted to the General Hospital "Dr Miguel Silva" in Morelia, Mexico, between March and August 2021, during the second and third waves of infection, were included in this prospective observational study. SARS-CoV-2 infection was confirmed by reverse transcription polymerase chain reaction (RT-PCR) by the State Public Health Laboratory; all patients were ≥18 years of age. Samples from ten healthy volunteers were included as a control group. These samples had been obtained from August to December 2019, before the COVID-19 pandemic, as the control group for another study and stored in our biobank, so SARS-CoV-2 infection can be excluded in these donors.
Informed consent was obtained from each patient, and the study was performed according to the ethical principles of medical research and local guidelines; it was approved by the local Ethics and Research Committee of the "Miguel Silva" General Hospital under registration number 530/01/20.
Samples
For the purpose of this study, all venous blood samples were obtained at the time of admission or within the first 48 h after admission to the hospital by clean venipuncture; only samples from confirmed SARS-CoV-2 infections were included in the study. Patients' blood samples were classified into two groups according to the evolution or final outcome of the patients: the non-severe group included 37 patients who eventually developed severe COVID-19 but did not require admission to the ICU; the severe group included 28 patients who presented a severe disease with torpid evolution and required invasive mechanical ventilation (intubation) and therefore admission to the ICU, and/or patients who eventually died. The following criteria were used by the intensive care staff to consider patients for invasive ventilation (intubation): clinical respiratory failure, low PaO2, PaO2/FiO2 ratio (Kirby index) <200, blood gas analysis results, severe acidosis, failure to respond to pronation and failure to respond to high-flow ventilation.
Blood samples from all patients were obtained within the first 48 h of admission to the hospital, and there was no significant difference in clinical severity between the two groups at the time of admission/sampling. Patients admitted to the hospital had been treated at home with standard medication such as paracetamol but were not anticoagulated at the time of sampling. Blood samples were collected in vacutainer tubes with sodium citrate solution at a final concentration of 3.2% (Becton Dickinson, Franklin Lakes, NJ, USA). The first tube (2 mL) of blood was discarded to avoid platelet activation due to venipuncture. For platelet activation assays, all samples were processed within 4 h of collection. Platelet-rich plasma (PRP) was obtained by slow centrifugation at 157.81 × g for 10 min at room temperature (RT); subsequently, the PRP was incubated for 30 min in the dark. Samples were processed with minimal handling, and Tyrode's buffer (5 mM HEPES, 137 mM NaCl, 2.7 mM NaHCO3, 0.36 mM NaH2PO4, 2 mM CaCl2, 5 mM glucose, 0.2% BSA, pH 7.4) was used to obtain a final concentration of 1 × 10⁷ cells per milliliter. For prothrombotic biomarker studies, plasma was obtained at 1933.20 × g for 15 min and preserved at −70°C until studied.
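Adjusting the PRP to the working concentration is a standard C1·V1 = C2·V2 dilution. A minimal sketch follows; the measured PRP platelet count and starting volume in the example are hypothetical inputs, not values from the study.

```python
def tyrodes_volume_to_add(prp_count_per_ml: float, prp_volume_ml: float,
                          target_count_per_ml: float = 1e7) -> float:
    """Volume of Tyrode's buffer (mL) to add so that the platelet
    concentration falls to the target value (C1*V1 = C2*V2)."""
    if prp_count_per_ml <= target_count_per_ml:
        raise ValueError("PRP is already at or below the target concentration.")
    final_volume_ml = prp_count_per_ml * prp_volume_ml / target_count_per_ml
    return final_volume_ml - prp_volume_ml

# e.g. 0.5 mL of PRP at 2.5e8 platelets/mL -> add 12.0 mL of buffer
print(tyrodes_volume_to_add(2.5e8, 0.5))
```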
Prothrombotic Biomarkers
Plasma concentrations of the prothrombotic biomarkers D-dimer, PAI-1, tPA, TF and F-IX were assessed by flow cytometry using a LEGENDplex™ Human Thrombosis Panel (10-plex) kit from BioLegend®. This is a multiplex immunoassay based on antibody-sensitized beads. Briefly, plasma was incubated for 2 h with analyte-specific antibody-conjugated beads, which are differentiated by size and internal fluorescence intensity. This allows them to act as capture beads, so that each analyte binds to its corresponding bead. After a wash, a cocktail of biotinylated detection antibodies is added, which bind to their specific analytes on the capture beads, forming a capture bead-analyte-detection antibody sandwich. Subsequently, streptavidin-phycoerythrin (SA-PE) is added and, after 30 min of incubation, the samples were read on a CytoFLEX BECKMAN COULTER® flow cytometer for analysis.
Von Willebrand Factor
The IMUBIND® vWF ELISA Kit from BioMEDICA Diagnostics was used to assess the vWF plasma concentration, following the instructions provided by the manufacturer. The ELISA plate was read at 450 nm in a Thermo Scientific Multiskan FC reader.
VWF Multimeric Analysis
Von Willebrand factor multimeric structure analysis was performed by western blot using discontinuous vertical electrophoresis in sodium dodecyl sulfate 1% to 2% agarose mini-gels, at 29 V for 10-12 h at a constant temperature of 4°C, in a vertical Mini-PROTEAN cell (BioRad, USA). A constant amount of vWF protein was loaded per lane for each sample, based on the previously quantified vWF plasma concentration of each patient. Subsequently, proteins were transferred to a polyvinylidene fluoride (PVDF) membrane at 18 V for 1 h using semidry transfer conditions (Trans-Blot SD Semi-Dry Transfer Cell; BioRad). Membranes were blocked with albumin and then incubated with a polyclonal rabbit anti-human vWF antibody (Dako, Glostrup, Denmark) as the primary antibody and polyclonal swine anti-rabbit horseradish peroxidase-labeled immunoglobulins (Dako) as the secondary antibody, both from DAKO® (Cat. A0082 and Cat. P0217).
WB luminol reagent (SCBT® sc-2048) was used for development on a ChemiDoc MP Imaging System (BioRad) gel documenter. The obtained patterns were scanned and subjected to densitometric analysis using ImageJ and Image Lab. Multimers were classified as low molecular weight (LMW-vWFM; corresponding to bands 1-5 in the vWFM analysis), intermediate molecular weight (IMW-vWFM; bands 6-10) and high molecular weight (HMW-vWFM; bands ≥11). 9 For quantitative analyses, we calculated the densitometric area of HMW-vWFM only for each study group.

For the platelet activation assays, platelets were stained with antibodies against the surface activation markers αIIbβIII and P-selectin, and matched isotype controls (Cat. No. 400111) were used. Subsequently, platelets were fixed with 4% paraformaldehyde. Dark conditions and minimal handling were used during the assay to avoid external activation of the platelets. As positive controls of platelet activation, we used the known agonists ADP, collagen and epinephrine, and acquisition was performed by flow cytometry as reported before by our group. 10 Results were analyzed using FlowJo v 10.8.0.
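Returning to the multimer analysis above: the band-classification rule (bands 1-5 LMW, 6-10 IMW, ≥11 HMW) and the per-lane HMW densitometric fraction can be sketched as follows. The band areas in the example are hypothetical densitometry outputs, e.g., as exported from ImageJ, and the function names are ours.

```python
def classify_band(band_number: int) -> str:
    """Classify a vWF multimer band by the rule used above:
    bands 1-5 LMW, 6-10 IMW, >=11 HMW."""
    if band_number <= 5:
        return "LMW"
    if band_number <= 10:
        return "IMW"
    return "HMW"

def hmw_fraction(band_areas: dict[int, float]) -> float:
    """Percentage of the total densitometric area contributed by HMW multimers."""
    total = sum(band_areas.values())
    hmw = sum(area for band, area in band_areas.items()
              if classify_band(band) == "HMW")
    return 100.0 * hmw / total

# Hypothetical lane: band number -> densitometric area (arbitrary units)
lane = {1: 40, 2: 35, 3: 30, 4: 25, 5: 20, 6: 18, 7: 15,
        8: 12, 9: 10, 10: 8, 11: 6, 12: 4, 13: 2}
print(f"HMW-vWF: {hmw_fraction(lane):.1f}% of total area")
```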
Statistical Analysis
Categorical variables were expressed as frequencies and percentages. Continuous variables that followed a normal distribution were reported as mean and standard deviation; those that did not meet the assumption of normality were expressed as median and interquartile range. Normality was assessed using the Kolmogorov-Smirnov test and the Anderson-Darling test, as appropriate. Homogeneity of variances was evaluated with Levene's test. Comparisons between groups for variables with normal distribution and homogeneous variances were performed by analysis of variance (ANOVA) with Tukey's post hoc test; variables that did not meet the homogeneity assumption were evaluated with Welch's ANOVA and the Games-Howell post hoc test. Some variables were log-transformed in order to obtain a normal distribution; after the analysis, the antilogarithm was applied to interpret the results. Non-parametric variables were analyzed with the Kruskal-Wallis ANOVA and Dunn's post hoc test. For clinical data, the Mann-Whitney U test was used to compare non-parametric data. For parametric data, an unpaired Student's t test was performed, and data were reported as mean and standard deviation.
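A minimal sketch of this test-selection logic using SciPy is shown below. The function names are ours, and SciPy has no built-in Welch's ANOVA, so this dependency-light sketch falls back to Kruskal-Wallis where the paper used Welch's ANOVA with Games-Howell.

```python
import numpy as np
from scipy import stats

def _is_normal(x, level_index: int = 2) -> bool:
    """Anderson-Darling normality test; index 2 is the 5% significance level."""
    res = stats.anderson(np.asarray(x))
    return res.statistic < res.critical_values[level_index]

def compare_groups(*groups, alpha: float = 0.05):
    """Choose an omnibus test along the lines described above."""
    if not all(_is_normal(g) for g in groups):
        return "Kruskal-Wallis", stats.kruskal(*groups).pvalue
    if stats.levene(*groups).pvalue <= alpha:
        # Unequal variances: the paper used Welch's ANOVA at this branch;
        # we substitute the non-parametric test in this sketch.
        return "Kruskal-Wallis (Welch's ANOVA in the paper)", stats.kruskal(*groups).pvalue
    return "one-way ANOVA", stats.f_oneway(*groups).pvalue

rng = np.random.default_rng(0)
a, b, c = (rng.normal(m, 1.0, 30) for m in (0.0, 0.5, 1.0))
print(compare_groups(a, b, c))
```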
Ethics Aspects
The research studies were conducted in accordance with the Declaration of Helsinki under protocols approved by the ethics committees of the respective institutions. Informed consent was obtained from all donors or their legal representatives.
Study Population
Between March and August 2021, a total of 65 adult COVID-19 patients were included in this study. Patients were classified into two groups according to evolution or outcome: the non-severe group consisted of 37 (56.9%) patients who subsequently developed severe COVID-19 but did not require admission to the intensive care unit; the severe disease group included 28 (43.1%) patients who presented a torpid evolution and required mechanical ventilation and admission to the ICU, and/or died. Demographic and clinical characteristics of both study groups are presented in Table 1. The mean age was similar between study groups (p = .0580); however, there was a significantly higher prevalence of male patients (p = .0340). The most frequent comorbidities found were diabetes, hypertension and obesity. Diabetes was significantly more prevalent in the severe disease group (p = .0170). In clinical laboratory tests, leukocytosis (p = .0002), neutrophilia (p < .0001) and lymphopenia (p < .0001) were significant in severely ill patients, as were a higher erythrocyte sedimentation rate (p = .0244) and higher fibrinogen levels (p = .0124). In addition, the mean platelet volume was elevated in both groups of patients (p = .0006).
tPA and PAI-1

Significant differences were found in the concentrations of tPA in both study groups, as can be seen in Figure 1. Concentrations were 5.44503 ± 1.72 ng/ml for the non-severe disease group (p = .0009) and 6.93426 ± 1.56 ng/ml for the severe illness group (p < .0001), respectively; the concentration in the control group was 2.77332 ± 1.49 ng/ml. PAI-1 concentrations were significantly higher only in the severe illness group (48.86523 ± 1.85 ng/ml) compared to healthy subjects (23.65919 ± 1.60 ng/ml) (p = .0032). These results are shown in Figure 2.
D Dimer, F-IX and TF
The D dimer concentrations were also higher in both study groups (p = .0011), as shown in Figure 3; the severe group (17,179.08 ± 4.21 pg/ml) showed higher concentrations than the non-severe group (8356.03 ± 2.76 pg/ml) (p = .0492) and the control group (3443.45 ± 1.78 pg/ml) (p = .0010). No differences were found in F-IX or TF concentrations for any study group. These results are shown in Figures 4 and 5.
VWF and Structural Analysis
The circulating vWF plasma concentration was quantified and its multimeric structure was then analyzed. Higher concentrations of vWF were found in both groups of patients, non-severe (147.6 ± 28.77 IU/ml) and severe (146.7 ± 29.16 IU/ml), when compared with healthy donors (90.17 ± 31.43 IU/ml), as shown in Figure 6A. Structural and densitometric analysis identified HMW-vWF, and their densitometric area was quantified; results from the western blot and densitometric analyses are shown in Figure 6B. Significant differences were found between the patient groups and the healthy control group (4.28 ± 1.25%) (p < .0001). There was also a significant difference between the severe (13.58 ± 1.48%) and non-severe (7.72 ± 1.49%) groups (p = .0008), with higher HMW-vWF in patients with a worse disease course, as shown in Figure 6C. A ROC curve analysis showed that the best cut-off point for the densitometric area was 9.51%, with a sensitivity of 80.0% and a specificity of 77.8%, as depicted in Figure 7.
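The optimal cut-off in such a ROC analysis is typically the point maximizing Youden's J (sensitivity + specificity − 1). A minimal sketch with scikit-learn follows, using synthetic group data loosely centred on the means reported above; the real patient-level densitometry values are not reproduced here.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
# Synthetic HMW-vWF densitometric areas (%): non-severe vs severe groups,
# loosely centred on the group means reported above (7.72% and 13.58%).
non_severe = rng.normal(7.72, 3.0, 37)
severe = rng.normal(13.58, 3.5, 28)

y_true = np.r_[np.zeros(non_severe.size), np.ones(severe.size)]
scores = np.r_[non_severe, severe]

fpr, tpr, thresholds = roc_curve(y_true, scores)
best = np.argmax(tpr - fpr)  # index maximizing Youden's J statistic
print(f"AUC = {roc_auc_score(y_true, scores):.2f}")
print(f"cut-off = {thresholds[best]:.2f}%, "
      f"sensitivity = {tpr[best]:.1%}, specificity = {1 - fpr[best]:.1%}")
```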
Platelet Activation Analysis
Platelet activation was assessed via glycoprotein αIIbβIII and P-selectin surface expression. Thirty-five patients were included in this assay, 21 from the non-severe group and 14 from the severe group. In both groups platelet activation was higher than normal: the geometric mean of αIIbβIII expression was higher in patients with a worse clinical course (3696 ± 1197) than in non-severe patients (2670 ± 609) (p = .0029) and healthy volunteers (2276 ± 616) (p = .0009). Moreover, P-selectin was significantly higher in both patient groups, non-severe (8239 ± 3332) (p = .0005) and severe (10,453 ± 5063) (p < .0001), compared to the control group (1801 ± 545). These results are shown in Figure 8.
Discussion

COVID-19 is a life-threatening disease in some individuals.
Since the beginning of the pandemic, it has been described that patients with severe disease develop a pathological inflammatory response that results in severe tissue damage, endothelial dysfunction and a coagulopathy that favors the development of thrombotic complications and increases mortality. 11 The aim of this study was to evaluate whether certain biomarkers of a hypercoagulable state were altered in patients before the development of a critical condition, and therefore whether this was associated with a worse prognosis of the disease. In the present study we provide evidence that several prothrombotic biomarkers, such as tPA, PAI-1, D-dimer, vWF and its multimeric structure, and platelet activation, could predict a higher risk of developing severe disease independently of pre-existing risk factors. The hypercoagulable state observed in severe COVID-19 has been attributed to an exacerbated innate immune response driven by proinflammatory cytokines and neutrophil extracellular traps. 12 In the present study, elevated leukocyte concentrations with a high percentage of neutrophils were detected in patients who required admission to the ICU. This inflammatory environment, together with pathogen-associated molecular patterns (PAMPs) and reactive oxygen species (ROS), stimulates the activation of endothelial cells and thus the production and release of procoagulant factors such as fibrinogen, tPA, PAI-1 and vWF. 13,14 Our results show an increase in tPA in patients hospitalized for COVID-19, which may be attributed to the interaction between proinflammatory cytokines and the endothelium, stimulation of the latter leading to the release of tPA. 15 On the other hand, concentrations of PAI-1 were higher in the severe disease group. These findings were also reported by Henry et al., implying that an abnormality in the fibrinolytic process is taking place; this can contribute to a greater hypercoagulable state that may lead to ARDS development and worsening of the patient's status due to microthrombus formation and fibrin accumulation in the alveolar niche. 15,16 Similar to tPA, D dimer, which is a product of fibrin degradation during fibrinolysis, is considered an independent marker of impaired fibrinolysis and a strong predictor of death in COVID-19. 17 In the present study, increased D dimer and fibrinogen concentrations were found in both study groups, indicating that the fibrinolytic system is active despite elevated PAI-1 concentrations, probably because tPA is also elevated; however, tPA probably decreases at some point in the course of the disease, or its concentration is exceeded to a greater extent by PAI-1, resulting in hypofibrinolysis that contributes to a worse patient outcome. 18 Han and Pandey have reported that SARS-CoV-2-triggered endothelial dysfunction leads to robust release of PAI-1 in vitro. 19 This finding, together with the growing clinical evidence that older patients and those with preexisting cardiometabolic or chronic inflammatory diseases, who are more likely to have higher baseline PAI-1 levels, are at an increased risk of severe disease, suggests the plausibility of targeted PAI-1 inhibition as a potential treatment for patients with COVID-19.
No differences were observed in F-IX or TF concentrations. This is consistent with the findings of Martín-Rojas, who found no consumption of coagulation factors in prothrombotic coagulopathy due to COVID-19. 20 Likewise, another study found no differences in plasma TF concentration between patients with COVID-19 and a control group, but did find elevated concentrations of TF in bronchoalveolar lavage fluid. 21 Another protein that is considered a marker of endothelial activity, and therefore associated with prothrombotic activity, is vWF. This multimeric glycoprotein plays a fundamental role in hemostatic processes. Under physiological conditions, this protein is stored in the Weibel-Palade bodies (WPB) of endothelial cells and in the alpha granules of platelets. When stored, vWF is found as HMW-vWF, whereas in circulation it is present predominantly as IMW-vWF and LMW-vWF because of its cleavage by the restriction protease ADAMTS-13. 22,23 Our study shows increased levels of vWF in both study groups. These results are consistent with our previous report from 2021, in which we reported significantly higher plasma concentrations of vWF in severe COVID-19 patients from the first wave compared to non-severe patients and controls. They also agree with those reported by Ward et al., who compared the concentrations of vWF in COVID-19 patients against healthy controls. 24 Flora Peyvandi's group also reported increased vWF levels in severely ill patients, 25 with an elevated vWF to ADAMTS-13 activity ratio that was strongly associated with disease severity.
In the multimeric and densitometric analysis, HMW-vWF was higher in the group of patients that developed the more critical disease. This might be evidence of active degranulation of platelets and of endothelial cell Weibel-Palade bodies contributing to the pathophysiology of severe COVID-19. 26 HMWM have the capacity to promote platelet adhesion and activation. vWF also carries coagulation factor VIII, which is activated upon detachment. Altogether, the presence of vWF HMWM indicates that significant endothelial damage is occurring, which favors a pathological deviation of hemostasis and, with it, the formation of microthrombi in the pulmonary vasculature. 27 Within the immunothrombotic triad described in COVID-19, platelets play an important role, since they are classically associated with the clotting process; furthermore, platelets have recently been described as immune cells mediating the inflammatory process. The numerous receptors present on their surface allow them to be activated by several stimuli. 28 Once activated, platelets express ligands that favor adhesion to other cells and aggregation; these cells also release many factors involved in both thrombotic and inflammatory mechanisms. In this study, patients with COVID-19 who developed a worse illness course showed increased MPV and platelet hyperactivation, determined by an elevated expression of P-selectin and integrin αIIbβIII. In addition, glycoprotein αIIbβIII showed a significant difference between the two study groups, being higher in patients with the worse outcome. Similar to our results, Leopold et al. reported increased expression of P-selectin and the αIIbβIII complex in COVID-19 patients. Moreover, they reported a high concentration of sCD40L and PF4 released from the alpha granules. 29 These results show that platelets play a key role in COVID-19 coagulopathy and in disease outcome.
As this pandemic has continued, different SARS-CoV-2 variants have emerged around the world. As the virus has evolved, the clinical presentation of the disease has also changed. In a previous study published by our group, 30 we reported higher concentrations of prothrombotic and inflammatory biomarkers in patients recruited from July to September 2020, during the first COVID-19 wave in Mexico. In this new study, samples were obtained from March through August 2021, during the second and third waves of COVID-19. Although we did not have the possibility to sequence the specific SARS-CoV-2 variant in our patients, different variants of SARS-CoV-2 were circulating in the world and in our country at the time of the study. Our results show that the thrombophilic state of hospitalized COVID-19 patients persists even though new variants of the virus emerged during this period.
Conclusions
In conclusion, the prothrombotic state in patients hospitalized for COVID-19 seems to be caused by significant endothelial tissue damage, which is reflected by increased markers of coagulation and endothelial dysfunction. In addition, increased PAI-1, platelet hyperactivation with an abundant αIIbβIII integrin expression phenotype, and the presence of large amounts of HMW-vWF promote a hypercoagulable and hypofibrinolytic microenvironment that leads to the formation of pulmonary and/or systemic microthrombosis, organ failure and death. Because of the important role of these biomarkers in the patients' evolution, they could be used as predictors of the risk of critical illness and/or death, and as potential targets for treatment development. Furthermore, when measured on admission, before patients are critically ill, vWF parameters could be useful to identify patients who may evolve to severity/death and who could benefit from early anticoagulation/oxygenation therapies.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Consejo Nacional de Ciencia y Tecnología, (grant number 320085).
Critical appraisal of systematic reviews of intervention studies in periodontology using AMSTAR 2 and ROBIS tools
Background Systematic reviews of intervention studies are used to support treatment recommendations. The aim of this study was to assess the methodological quality and risk of bias of systematic reviews of intervention studies in the field of periodontology using AMSTAR 2 and ROBIS. Material and Methods Systematic reviews of randomized and non-randomized clinical trials, published between 2019 and 2020, were searched for in MedLine, Embase, Web of Science, Scopus, the Cochrane Library and LILACS, with no language restrictions, between October 2019 and October 2020. Additionally, grey literature and hand searches were performed. Paired independent reviewers screened studies, extracted data and assessed the methodological quality and risk of bias with the AMSTAR 2 and ROBIS tools. Results One hundred twenty-seven reviews were included. According to AMSTAR 2, the methodological quality was mainly critically low (64.6%) and low (24.4%), followed by moderate (0.8%) and high (10.2%). According to ROBIS, 90.6% were at high risk of bias, followed by 7.1% at low and 2.4% at unclear risk of bias. The risk of bias decreased as the impact factor of the journal increased. Conclusions Current systematic reviews of intervention studies in periodontics were classified as of low or critically low methodological quality and at high risk of bias. Both tools led to similar conclusions. Better adherence to established reporting guidelines and stricter research practices when conducting systematic reviews are needed. Key words: Bias, evidence-based dentistry, methods, periodontics, systematic review.
Introduction
Systematic reviews (SRs) of intervention studies are considered a high level of scientific evidence and are used to generate evidence that can support treatment recommendations and public health strategies (1). As with other study designs, SRs are subject to biases that can compromise their validity and the quality of their evidence (2). Several tools have been developed to assess the methodological quality and risk of bias of SRs, such as AMSTAR 2 (A Measurement Tool to Assess Systematic Reviews 2) (3), an updated version of AMSTAR (4), and ROBIS (Risk Of Bias In Systematic reviews) (5), the Cochrane Collaboration tool for the risk of bias of SRs. Some overviews in the periodontal field have assessed the methodological quality of SRs through AMSTAR, showing inconsistent quality (6-9). One overview assessed the methodological quality of SRs using AMSTAR 2 and the risk of bias through ROBIS, and demonstrated very low overall quality (10). Among 23 SRs, only 3 SRs on peri-implantitis therapy had high quality according to AMSTAR 2, and only one was judged to be at low risk of bias according to ROBIS (10). This low overall quality raises questions about the general quality of the available evidence from SRs in periodontology. Hence, this overview aimed to: 1) describe the characteristics of SRs in periodontology; 2) assess whether the certainty of the evidence is reported in these reviews; 3) assess the methodological quality using AMSTAR 2; 4) assess the risk of bias using ROBIS.
Material and Methods
This methodological survey was designed and performed following the recommendations of the Cochrane Handbook for Systematic Reviews of Interventions (11) and was reported in accordance with the PRISMA checklist (12).
-Research question
What is the methodological quality and risk of bias of the SRs of intervention studies in periodontology published in 2019-2020?
-Eligibility criteria
Inclusion criteria were SRs of intervention studies, randomized (RCTs) and non-randomized clinical trials (nRCTs), with or without meta-analysis, in the field of periodontology, indexed between October 1st, 2019 and October 1st, 2020. SRs in which the authors classified the included studies as having a prospective design were included as nRCTs. According to the Risk of Bias in Non-randomized Studies of Interventions (ROBINS-I) tool, nRCTs are cohort studies in which intervention groups were allocated during the usual course of treatment instead of by randomization (13). To be consistent, all non-randomized studies, denominated by the authors as clinical trials, controlled clinical trials, prospective controlled trials, non-randomized prospective studies, prospective clinical studies, prospective controlled clinical studies or retrospective cohort studies, were classified as nRCTs. Exclusion criteria were: (a) SRs not related to the field of periodontology; (b) narrative or scoping reviews, clinical guidelines, editorials or expert opinion papers, SRs of case-control and cross-sectional studies with a PECO question, case reports and case series, and pilot, in vitro and/or animal studies.
-Search in databases
An expert in SRs (CCM) designed and verified the search strategies, and one reviewer (AGP) searched the following databases: MedLine (Pubmed), Embase (Elsevier), Web of Science, Scopus, the Cochrane Library and LILACS for articles indexed between October 1st, 2019 and October 1st, 2020, with no language restrictions. This time span is sufficient to represent the current status of the quality of evidence in periodontology, as the average time between the last search for a SR and its publication varies between 8 (14) and 15 months (15), and the mean time between a protocol's publication and the SR's publication is about 16 months (16). Grey literature was searched in OpenGrey, GreyLit and Google Scholar. A hand search was performed in the reference lists of selected articles and in the main periodontology journals found in the Journal Citation Reports (JCR) category "Dentistry, Oral Surgery and Medicine": Journal of Clinical Periodontology, Journal of Periodontology, Journal of Periodontal Research, International Journal of Periodontics & Restorative Dentistry, Journal of Periodontal and Implant Science, and Periodontology 2000. Additional information on the search strategies, including search terms, is detailed in the supplementary material (Supplement 1) (http://www.medicinaoral.com/medoralfree01/aop/jced_60197_s01.pdf).
-Studies selection
Two pairs of independent reviewers screened studies based on titles and abstracts, and then on full text (AGP and SFF; JRC and LCMC). The reviewers were trained with a set of 10% of the studies in each phase. In cases of less than 80% agreement, additional rounds of training were carried out until the necessary standard was reached for each step. After the reviewers achieved at least 80% agreement, they proceeded with the screening of the remaining studies. The Rayyan platform (17) was used for study screening. In cases of disagreement, an expert reviewer was consulted (CCM).
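The 80% threshold used in these training rounds is simple proportion agreement between paired reviewers over the same set of records; a minimal sketch follows, with hypothetical decision labels.

```python
def percent_agreement(reviewer_a: list[str], reviewer_b: list[str]) -> float:
    """Percentage of records on which two reviewers made the same
    include/exclude decision."""
    if len(reviewer_a) != len(reviewer_b):
        raise ValueError("Both reviewers must screen the same records.")
    matches = sum(a == b for a, b in zip(reviewer_a, reviewer_b))
    return 100.0 * matches / len(reviewer_a)

a = ["include", "exclude", "exclude", "include", "exclude"]
b = ["include", "exclude", "include", "include", "exclude"]
print(percent_agreement(a, b))  # 80.0 -> meets the training threshold
```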
-Data extraction and assessment of methodological quality
Data extraction and assessment of methodological quality and risk of bias were performed through the AMSTAR 2 (3) and ROBIS (5) tools by four pairs of independent reviewers (AGP and SFF; AGP and LCMC; JRC and SQN; CCM and TPP), using an Excel spreadsheet. Reviewers were trained by two reviewers (AGP and CCM), the second with broad experience in systematic review methodology. Again, the reviewers underwent as many rounds of training as necessary until reaching 80% agreement. All disagreements were solved by discussion and consensus; if consensus was not achieved, the principal investigator made the final decision. General data were extracted from the articles, and the list of the extracted data is available in the supplementary material (Supplement 2) (http://www.medicinaoral.com/medoralfree01/aop/jced_60197_s02.pdf). We retrieved the SR protocols from the registration platforms to compare them with the published reviews, and extracted the JCR impact factor and the h-5 index of the journals from the JCR and Google Scholar Metrics, respectively. Disagreements during this step were resolved within the pair of reviewers; if disagreement persisted, the principal investigator was responsible for reaching a final consensus. Two reviews in Mandarin were translated using a translation tool.
-Statistical analysis
Data were entered in IBM SPSS Statistics for Windows version 25 (Armonk, NY: IBM Corp.) for descriptive analyses. We calculated relative and absolute frequencies for categorical variables, and mean, standard deviation and minimum/maximum values for continuous variables. Analyses were performed considering all SRs and stratified by: SRs with RCTs and nRCTs, SRs with RCTs only, and the impact factor of the journal (<3, ≥3 <6, ≥6).
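The stratified descriptive analysis can be sketched with pandas instead of SPSS as follows; the records and column names are illustrative, not the study data.

```python
import pandas as pd

# Illustrative records: one row per systematic review.
df = pd.DataFrame({
    "amstar2_overall": ["critically low", "low", "high", "critically low", "low"],
    "robis_overall":   ["high", "high", "low", "high", "unclear"],
    "impact_factor":   [1.8, 3.4, 7.1, 2.2, 5.0],
})

# Stratify by the impact-factor bands used in the paper: <3, >=3 <6, >=6.
df["if_band"] = pd.cut(df["impact_factor"], bins=[0, 3, 6, float("inf")],
                       labels=["<3", ">=3 <6", ">=6"], right=False)

# Relative frequencies of overall ratings within each band.
print(pd.crosstab(df["if_band"], df["amstar2_overall"], normalize="index"))
print(pd.crosstab(df["if_band"], df["robis_overall"], normalize="index"))
```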
Results
-Protocol, register and PRISMA
More than a third of the SRs (n=48; 37.8%) did not mention a study protocol. Among the 79 SRs that reported a protocol, 71 (89.9%) had registered and 8 (10.1%) had non-registered protocols. The most common registration platform was the International Prospective Register of Systematic Reviews (PROSPERO) (n=61; 85.9%).
-AMSTAR 2
The overall AMSTAR 2 methodological quality of the SRs was classified as critically low (n=82, 64.6%), low (n=31, 24.4%), moderate (n=1, 0.8%) and high (n=13, 10.2%) (Table 1, 1 cont.). Items 1 (components of PICO), 5 and 6 (study selection and data extraction in duplicate), 9 (satisfactory assessment of risk of bias), 11 (appropriate methods for meta-analysis), 14 (satisfactory discussion of heterogeneity) and 16 (report of sources of conflict of interest) received positive answers in more than 70% of the SRs. The items with the highest percentages of negative responses were: 3 (reason for selection of certain study designs; 87.4%), 10 (report of funding sources of the included studies; 67.7%) and 4 (careful literature search; 65.4%). Five items considered critical according to AMSTAR 2 had large percentages of negative assessments: 2 (presence of a protocol and justification for its modifications; 37.8%), 4 (careful literature search; 65.4%), 7 (list of excluded articles with justifications; 41.7%), 13 (consideration of the risk of bias in individual studies; 40.9%) and 15 (investigation and discussion of the impact of publication bias; 40.2%).
It is important to note that, when analyses were performed considering the impact factor of the journal, the overall methodological quality was classified as high in ~30% of SRs in journals with an impact factor ≥6. A high percentage of positive answers was also observed in the higher impact factor journals (Table 2-2 cont.-1). No expressive differences were observed when evaluating SRs according to the design of the included studies (Table 1, 1 cont.).
-ROBIS
The overall ROBIS evaluations considered 113 (90.6%) SRs to be at high risk of bias, 11 (7.1%) at low risk and 3 (2.4%) at unclear risk of bias (Table 3). No expressive differences were observed when SRs were evaluated according to the design of the included studies (Table 3). However, the risk of bias decreased with the increase of the impact factor of the journal (Table 4). Detailed ROBIS assessments, such as concerns regarding study eligibility criteria, the methods used to collect data and appraise studies, and the synthesis and findings, also decreased with the increase of the impact factor of the journal (Table 4).
Discussion
The majority of SRs were classified as high risk of bias according to ROBIS, which agreed with the low methodological quality found with AMSTAR 2. It seems that both tools can indicate similar results, as they point in the same direction. This is in accordance with a recent review classifying SRs in dentistry as of low and critically low quality (18). A wide variety of methodological deficiencies resulted in the classification of SRs as having a high risk of bias. The absence of, or unjustified changes to, the study protocol was the most important issue according to both tools. The prior creation and registration of a protocol is essential for ensuring the transparency of study methods and allowing adequate peer review of the proposed methodology, thus avoiding selective reporting bias (11).
The deficiency of search strategies was another important bias identified. Search strategies for SRs should be as extensive as possible, without unjustifiable restrictions, including searches in the references of selected studies and in clinical trial registries. Additionally, complementary searches constitute an important source for the identification of potential studies. Their absence, or unjustified restrictions, increases the possibility of publication, language and selection biases, among others (11). Among the nine SRs that included a librarian on the research team, 77.8% had high methodological quality searches when assessed by AMSTAR 2, in contrast to 17.8% of high-quality searches in SRs not including librarians. The inclusion of librarians, although not mandatory, is beneficial as it provides guidance at various stages of the research, such as in the design of search strategies, and is associated with more reproducible searches and improved methodological reporting in dental medicine SRs (19).
The processes of selection, data extraction and assessment of the risk of bias, which should ideally be carried out independently by more than one reviewer, were presented incompletely in most of the SRs. Cross-checking or duplicate selection, data extraction and assessment of risk of bias can reduce biases, as well as the potential subjectivity of a single reviewer (20).
In addition to factors associated with methodological processes, the lack of robustness of the results and excessive bias in primary studies also led to negative classifications in the ROBIS assessment. Findings from SRs, especially those with meta-analyses, must be evaluated through complementary tests to assess their robustness, such as sensitivity tests, subgroup analyses, meta-regression and funnel plots (5). Few studies have proven the robustness of their findings, and the absence of such tests can result in false positive inferences in a meta-analysed result, leading the reader to believe in ineffective treatments.
It was reported that 68% of RCTs in the field of dentistry had an unclear or high risk of bias according to the Cochrane risk of bias domains (21). If SRs do not test robustness with meta-analytic approaches such as sensitivity analysis and meta-regression, the overall evidence may be biased. The inclusion of non-randomized intervention studies in the SRs might be considered an indication of acceptance of less-than-adequate research designs for intervention studies, leading to low methodological quality or high risk of bias classifications. Nevertheless, no expressive differences were observed when SRs were evaluated according to the design of the included studies. The vast majority of SRs were of low and critically low quality when assessed by AMSTAR 2 and judged as high risk of bias by ROBIS. Overall, these two instruments led to similar conclusions in 93.7% of the assessments, although they are intended for different purposes. The first is designed to assess the methodological quality of SRs, that is, whether the important aspects of the methods have been fulfilled (3). The second can detect the risk of bias; thus, even if an SR has fulfilled a given item, this does not mean that it is free of bias (5). This high agreement is probably due to the overlapping questions between these instruments (22), as well as the low general methodological quality of the SRs analysed.
The main source of SRs was collaboration among authors from different continents (26.8%), and most SRs (97.6%) were published in English. This trend demonstrates the globalization of science, with authors from different countries forming international partnerships, exchanging knowledge and resources between research groups, and gaining greater visibility for scientific research (23).
Regarding the scope of the journals, 44.9% of SRs were published in general dental journals. This can be partially explained by the high percentage of studies (26%) whose interventions aimed at improving oral hygiene habits (plaque reduction and gingivitis), areas of common interest to most dental specialties. In addition, it is important to note that some periodontology journals are no longer accepting submissions of reviews. It was recently reported that there are no significant differences between moderate/high and low/critically low methodological quality SRs in dentistry regarding publication year, continent, dental specialty and the impact factor of the journal (18). In contrast, in the present study the risk of bias according to ROBIS decreased as the impact factor of the journal increased. A few SRs (7.9%) did not mention conflicts of interest in the paper at all, and some did not mention funding (21.3%).
The presence of financial ties can be associated with positive outcomes in RCTs (24). In addition, a survey of 3,247 scientists funded by the US National Institutes of Health showed that 15.5% admitted to altering a study's design, methods, or results in response to pressure from funding sources (25). Thus, reporting potential conflicts of interest and funding sources is mandatory in scientific publications, as this demonstrates the transparency and impartiality of the researchers who carry out the studies (11). Only a quarter of SRs assessed the certainty of evidence using the GRADE approach. The assessment of the certainty of the evidence is important to help interpret the results. As it is a more conservative approach, it can help to avoid misleading conclusions (26). Therefore, any SR of interventions, independent of the field of science, should add an analysis of the certainty of the body of evidence to its methods (26). Methodological and structural variability among systematic reviews has been observed, and the quality of studies is expected to vary (7). Notwithstanding their systematic and stringent approaches, not all systematic reviews are conducted and reported in the same manner, and high methodological quality is uncommon according to specific checklists (7)(8)(9)(10). Quality assessments of systematic reviews are quite recent, and researchers should consider such guidelines when designing, conducting and reporting their reviews.
In the contemporary scientific scenario, it has been speculated that some issues may influence the quality, reliability and bias of current scientific research, such as the pressure for scientific publication, the large volume of articles, predatory journals and the quality of the peer review process, among others (27)(28)(29)(30)(31)(32)(33). It was also reported that the dental literature has been increasingly reviewed on various topics, leading to SRs with questionable clinical or scientific value in terms of up-to-date information to advance knowledge (34). Overall, researchers should critically reflect on these issues so that their scientific production is aligned with the core principles of evidence-based dentistry. Guidelines and quality assessment tools may be helpful to identify topics to be improved. Some limitations of the present study should be discussed. It had three pairs of independent reviewers, which may have resulted in different classifications by the peers. However, in order to establish solid classification criteria and to achieve high levels of agreement, four training and calibration sessions were conducted using the guidance documents of AMSTAR 2 (3) and ROBIS (5).
A certain degree of variability in inter-examiner agreement has been demonstrated previously (22,35). This methodological review is strong in that it is the first to assess methodological quality using the new AMSTAR 2 together with ROBIS for risk of bias. Moreover, we extracted data on several characteristics of the included SRs, which are detailed in the supplementary material.
Conclusions
Most SRs of intervention studies in periodontology were classified as of low methodological quality and high risk of bias. Methodological quality increased and risk of bias decreased with the increase in the impact factor of the journals. Although designed for different purposes, both AMSTAR 2 and ROBIS led in similar directions. Efforts should be directed toward better adherence to reporting guidelines and stricter research practices when conducting SRs. AMSTAR 2 and ROBIS can help authors to plan the protocol and the reporting of their SRs.
Protocol and register
This study protocol was registered a priori at PROSPERO (#CRD42020215676; "Quality assessment of systematic reviews and meta-analysis of periodontal intervention studies: an overview") and no changes were deemed necessary after the start of the study.
Source of Funding
This study was supported by grants from Conselho Nacional de Desenvolvimento Científico e Tecnológico - CNPq, Brazil (grant #302251/2019-7), Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES, Brazil, and Pró-Reitoria de Pesquisa da UFMG (PIBIT/PRPq), Brazil. The funding agencies had no participation in the research design and data interpretation.
commentary on current publishing trends in the field of temporomandibular disorders and bruxism. J Oral Rehabil. 2019;46:1-4.
35. Gates M, Gates A, Duarte G, Cary M, Becker M, Prediger B, et al. Quality and risk of bias appraisals of systematic reviews are inconsistent across reviewers and centers. J Clin Epidemiol. 2020;125:9-15.
Table 1 (and cont.): Methodological quality assessment through AMSTAR 2 according to the type of studies included in the systematic reviews.
Table 2 (and cont.): Methodological quality assessment through AMSTAR 2 according to the impact factor of the journals.
Table 3 (and cont.): Risk of bias assessment through ROBIS according to the type of studies included in the systematic reviews.
Table 4 (and cont.): Risk of bias assessment through ROBIS according to the impact factor of the journals.
Spatial dependency of action simulation
In this study, we investigated the spatial dependency of action simulation. From previous research in the field of single-cell recordings, grasping studies, and crossmodal extinction tasks, it is known that our surrounding space can be divided into a peripersonal space and an extrapersonal space. These two spaces are functionally different at both the behavioral and neuronal level. The peripersonal space can be seen as an action space, which is limited to the area in which we can grasp objects without moving the object or ourselves. The extrapersonal space is the space beyond the peripersonal space. Objects situated within peripersonal space are mapped onto an egocentric reference frame. This mapping is thought to be accomplished by action simulation. To provide direct evidence of the embodied nature of this simulated motor act, we performed two experiments, in which we used two mental rotation tasks, one with stimuli of hands and one with stimuli of graspable objects. Stimuli were presented in both peri- and extrapersonal space. The results showed increased reaction times for biomechanically difficult-to-adopt postures compared to easier-to-adopt postures for both hand and graspable object stimuli. Importantly, this difference was only present for stimuli presented in peripersonal space but not for the stimuli presented in extrapersonal space. These results extend previous behavioral findings on the functional distinction between peripersonal and extrapersonal space by providing direct evidence for the spatial dependency of the use of action simulation. Furthermore, these results strengthen the hypothesis that objects situated within the peripersonal space are mapped onto an egocentric reference frame by action simulation.
Introduction
In this study, we examined the spatial dependency of action simulation by measuring participants' engagement in motor imagery. We used two mental rotation tasks to study the spatial dependency of effector-specific and object-oriented action simulation by presenting the stimuli in the spaces near to and far away from participants. The space immediately surrounding our body is often referred to as the peripersonal space. Objects within this peripersonal space (PPS) can be reached, grasped, and manipulated (Holmes and Spence 2004). Objects situated beyond this space, termed extrapersonal space (EPS), cannot be grasped without moving oneself or the object. According to Gallese (2005), objects presented in PPS but not those in EPS are automatically mapped onto an egocentric frame of reference via action simulation (Graziano 1999; Farne et al. 2000; Gallese 2005, 2007). The presence of the actual action simulation itself, however, has never been directly tested empirically. Besides the suggested properties of the PPS on the phenomenological level, the PPS has been shown to be multimodal in nature (Graziano 1999; Maravita et al. 2003) and neurally dissociable from the EPS in both primates (Rizzolatti et al. 1981a, b; Fogassi et al. 1996, 1999; Graziano et al. 1994, 1997; Murata et al. 1997; Duhamel et al. 1998; Caggiano et al. 2009) and humans (di Pellegrino et al. 1997; Mattingley et al. 1997; Ladavas et al. 1998a, b; Pavani et al. 2000; Makin et al. 2007; Gallivan et al. 2009). Objects observed within PPS are typically mapped in motor terms, i.e., related to the egocentric frame of reference (Graziano 1999; Makin et al. 2007). Furthermore, Costantini et al. (2010) showed that affordances rely not only on the action possibilities of grasping or using an object, but also on the object being within reach. These findings point to the automatic simulation of an action toward the observed object when it is located within PPS. Moreover, the ability to simulate sensory consequences of potential movements has been shown to be crucial for action simulation (Coello and Delevoye-Turrell 2007).
In 2005, Gallese formulated the action simulation hypothesis, stating that observed objects within PPS are automatically mapped onto an egocentric frame of reference by action simulation (Gallese 2005, 2007; Gallese and Lakoff 2005; Knox 2009). This hypothesis was based on, among others, the findings of Graziano (1999), who showed an egocentric mapping of observed stimuli near the primate's arm, and the similar activation patterns of the ventral premotor cortex in humans during observation, naming, and imagined use of objects (Grafton et al. 1996; Chao and Martin 2000). According to Gallese (2007), the perception of an object within reach automatically triggers a "plan" to act, that is, a simulated potential action. This implicitly induced simulated action would then, in turn, represent the observed object in motor terms, thereby mapping the object onto an egocentric frame of reference (Gallese and Lakoff 2005; Gallese 2007).
Still, today's findings supporting the action simulation hypothesis do not provide direct empirical evidence for the implicit use of action simulation. That is, despite the important findings on differential firing of visuomotor neurons and elicitation of affordances to objects situated in PPS, no study has focused on behavioral performance inherently related to the use of action simulation. In true action simulation, the imagined movement must exhibit the same biomechanical constraints as the overt movement (Jeannerod 2006). Using this facet, the simulation of actions can be studied directly by testing the influence of biomechanical constraints on performance.
A well-established experimental paradigm to study the possible influence of biomechanical constraints is the mental rotation task of hands or graspable objects (Parsons 1994; de Lange et al. 2008b; ter Horst et al. 2010). In the mental rotation task of hands, participants have to judge the laterality of a presented picture of a rotated hand. The time needed to react typically increases with increasing angle of rotation (Sekiyama 1982) and is analogous to the time needed to move one's own hand into the position of the presented hand (Parsons 1987). These features exemplify that the mental rotation of one's own hands is restricted by the same biomechanical constraints as overt movement (Parsons 1994). This influence can be found in reaction time differences for hand stimuli rotated laterally and medially. That is, laterally rotated hands are rotated away from the body's midsagittal plane and result in prolonged RTs compared to medially rotated stimuli (rotated toward the midsagittal plane), as laterally rotating one's arm is more difficult (Parsons 1994; ter Horst et al. 2010). Besides biomechanical constraints, one's posture also influences performance on the hand laterality judgment task (de Lange et al. 2005, 2006; Ionta et al. 2007). Ionta et al. (2007) showed that holding one's hands behind the back decreases performance compared to keeping both hands on the lap. These biomechanical and postural influences point to the use of an underlying embodied process denoted as Motor Imagery (MI) (Ionta et al. 2007).
MI is defined as a process in which participants mentally simulate a movement from a first person perspective without overtly performing the movement and without sensory feedback due to overt movement (Decety 1996a, b). Moreover, it has been shown that MI is a form of action simulation (Currie and Ravenscroft 1997). This fits well with the simulation theory, stating that covert actions are neurally simulated actions and that all aspects of the action are involved during the simulation process, except for the movement execution itself (Jeannerod 2001, 2006). In the present study, we addressed the research question whether action simulation, i.e., MI, during object observation exhibits spatial dependency. Specifically, we aimed to test whether the engagement in MI is enhanced for stimuli presented in the PPS compared to stimuli presented in the EPS, in accordance with the action simulation hypothesis (Gallese and Lakoff 2005). In order to test the spatial dependency of action simulation, we conducted two experiments. In these experiments, we addressed two consecutive questions in order to scrutinize the spatial dependency of action simulation. In the first experiment, we tested the spatial dependency of the automatic action simulation of the effector itself. In the second experiment, we tested whether the hypothesized automatically simulated movement of the effector toward an observed object, induced by mere passive observation of the object, exhibits spatial dependency. Both experiments are complementary, as experiment 1 focuses on the simulation of motor acts of the effector and experiment 2 focuses on the object-effector interaction. In experiment 1, we used a hand laterality judgment task. Typically, presenting rotated hands induces the use of MI to solve the task, even when they are presented about 60 cm away from the participant (Parsons 1994; Shenton et al. 2004; Lust et al. 2006; Ionta and Blanke 2009; Ionta et al. 2007; ter Horst et al. 2010). In order to show a differential engagement in MI for hand stimuli presented in the PPS compared to the EPS, we needed a set of stimuli typically not inducing MI when presented in the EPS. Therefore, we used a stimulus set containing back view hand stimuli, which were recently shown not to induce the use of MI when presented at a distance of 60 cm, in contrast to hand stimulus sets that used combinations of back and palm view hand stimuli (ter Horst et al. 2010). We expected to replicate the findings of ter Horst et al. (2010) concerning the lack of engagement in MI for the presentation of mere back view stimuli when presented in the EPS. In contrast, for stimuli presented within PPS, we expected the participant to use MI. In experiment 2, we used an identical experimental design as for experiment 1. However, we replaced the hand stimuli with stimuli of graspable objects (i.e., cups). Participants were required to judge the laterality of the displayed cups. We hypothesized that the observation of graspable objects within PPS, but not EPS, automatically induces the use of MI. This expectation is in line with the action simulation hypothesis and would provide direct empirical evidence for an automatic coding of observed objects within PPS in motor terms. In sum, we hypothesize a facilitated use of MI for hand and cup stimuli presented in PPS compared to EPS.
This hypothesis is confirmed if biomechanical constraints significantly influence the performance for stimuli presented within PPS, in combination with a lack of such influence for stimuli presented in the EPS.
Experiment 1
Participants
In total, 21 healthy right-handed participants were included in the present study (16 women, age 20.5 ± 3.0 years, mean ± SD). Two participants were excluded from analysis due to an error percentage of more than 15%. All participants had normal or corrected-to-normal vision. No participant had a history of neurological or psychiatric disorder. The study was approved by the local ethics committee and all participants gave written informed consent prior to the experiment, in accordance with the Helsinki declaration.
Stimuli
Stimuli were derived from a 3D hand model designed with a 3D image software package (Autodesk Maya 2009, USA). The stimulus set consisted of back view left and right hand stimuli rotated over six different angles from 0° to 360° in steps of 60°. The left and right hand stimuli were mirror images of each other, but otherwise identical (Fig. 1). Stimuli were projected on a flat surface of 100 cm by 80 cm by a beamer (Sharp NoteVision) with a resolution of 1,024 × 768 pixels at 70 Hz. Stimulus size was 320 × 256 pixels (i.e., 31.25 by 25 cm). The size of the presented hands was realistic, approximately 20 cm by 12 cm. All stimuli were repeated 16 times, resulting in a grand total of 384 stimuli (16 repetitions × 6 angles × 2 sides × 2 locations). Prior to the experiment, a practice run of 24 stimuli was presented to familiarize the participants with the task.
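The factorial structure of the stimulus set can be checked with a few lines of code; this is only an illustrative enumeration of the design stated above (16 repetitions × 6 angles × 2 sides × 2 locations = 384).

from itertools import product

angles = range(0, 360, 60)   # six angles: 0, 60, ..., 300 degrees
sides = ["left", "right"]
locations = ["Near", "Far"]
repetitions = range(16)

stimuli = list(product(repetitions, angles, sides, locations))
assert len(stimuli) == 384
print(f"{len(stimuli)} stimuli in total")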
Experimental procedure
Participants were seated in a chair positioned in front of the table. Stimulus presentation was controlled using custom-developed software in Presentation (Neurobehavioral Systems, Albany, USA). Prior to the stimulus, a fixation cross was presented at the center of the table, in between the two possible stimulus locations, for a variable duration between 800 and 1,200 ms. The participants were instructed to focus on the fixation cross. After this, the stimulus was presented and remained visible until a response was given. Participants had to respond by pressing the left button with their left hand for left hand stimuli and vice versa. After the response, a black screen was displayed for 1,000 ms. Participants were instructed to judge the laterality of the hand as fast and as accurately as possible, without explicit instructions on how to solve the task.
The participants positioned their hands on the table surface with the palms oriented downward, approximately 30 cm in front of their body. Both of the participant's hands were occluded from view by a black cloth. The stimuli were presented in two locations, namely in between the participants' hands, referred to as "Near," and 60 cm in front of the participants' hands, referred to as "Far" (i.e., 90 cm in front of their body). The order of location was randomized per block.
Data analysis
Reaction times smaller than 500 ms and larger than 3,500 ms were excluded from analysis (total loss 4.7% of all trials). These upper and lower boundaries are based on similar studies using a hand laterality judgment task (Sekiyama 1987; Parsons 1994; Ionta et al. 2007; Iseki et al. 2008). Analysis was performed on correct responses. Incorrect responses were a "left" response for a "right" hand and vice versa. We expected to find an influence of biomechanical constraints indicating the use of MI. This can be observed in differences in RTs between laterally and medially rotated hand stimuli (Parsons 1987, 1994; de Lange et al. 2008b; ter Horst et al. 2010), referred to as Direction Of Rotation (DOR). Medially rotated hand stimuli consisted of right hand 240° and 300°, and left hand 60° and 120° rotated stimuli. Laterally rotated stimuli consisted of right hand 60° and 120°, and left hand 240° and 300° rotated stimuli. Data analysis was performed using repeated measures analysis of variance (ANOVA).
In order to test whether participants mentally rotated the stimuli, we conducted a repeated measures ANOVA with the following design: two within-subjects factors (Location, Angle), with two levels for Location (Near, Far) and four levels for Angle (0°, 60°, 120°, and 180°). The values labeled 60° and 120° are the averaged RTs of 60° and 300°, and 120° and 240° rotated stimuli, respectively. A significant effect of Angle, accounted for by increasing RTs with increasing angles of rotation, would indicate that participants mentally rotated the hand stimuli (Shepard and Metzler 1971; Sekiyama 1982, 1987; Parsons 1994; Kosslyn et al. 1998; Ionta et al. 2007; ter Horst et al. 2010).
To test our hypothesis on the facilitated engagement in MI for stimuli presented in the location "Near" compared to stimuli presented in the location "Far," we conducted a repeated measures ANOVA which tested the engagement in MI via the influence of biomechanical constraints. This influence would be evident from a significant DOR effect. This ANOVA had two within-subjects factors (Location, DOR), with two levels for Location (Near, Far) and two levels for DOR (Lateral, Medial). The rationale for using two separate ANOVAs is the exclusion of the 0° and 180° stimuli when testing the DOR effect, as they are neither laterally nor medially rotated. The exclusion of these two rotational angles precludes valid testing of the typical Angle effect obtained in a mental rotation task. The latter ANOVA design was also used to analyze the accuracy data. Post hoc analysis was Bonferroni corrected and the alpha level was set at P = 0.05.
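A minimal sketch of this analysis pipeline is given below (not the authors' actual code); it generates hypothetical stand-in data in the paper's design, applies the 500-3,500 ms trimming, codes DOR from hand side and rotation angle as defined above, and runs the 2 x 2 repeated measures ANOVA with statsmodels.

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical stand-in data so the sketch runs end to end (pure noise,
# so no real effects are expected from it).
rng = np.random.default_rng(0)
rows = [(subj, loc, side, ang, True, float(rng.normal(1500, 200)))
        for subj in range(20)
        for loc in ("Near", "Far")
        for side in ("L", "R")
        for ang in (0, 60, 120, 180, 240, 300)
        for _ in range(16)]
df = pd.DataFrame(rows, columns=["subject", "location", "side",
                                 "angle", "correct", "rt"])

# Keep correct responses with 500 ms <= RT <= 3,500 ms, as in the paper.
df = df[df["correct"] & df["rt"].between(500, 3500)].copy()

MEDIAL = {("R", 240), ("R", 300), ("L", 60), ("L", 120)}
LATERAL = {("R", 60), ("R", 120), ("L", 240), ("L", 300)}

def direction_of_rotation(side, angle):
    # 0 and 180 degree stimuli are neither medial nor lateral and are
    # excluded from the DOR ANOVA.
    if (side, angle) in MEDIAL:
        return "Medial"
    if (side, angle) in LATERAL:
        return "Lateral"
    return None

df["dor"] = [direction_of_rotation(s, a)
             for s, a in zip(df["side"], df["angle"])]

# Per-subject cell means, then the 2 (Location) x 2 (DOR) repeated
# measures ANOVA on the reaction times.
cells = (df.dropna(subset=["dor"])
           .groupby(["subject", "location", "dor"], as_index=False)["rt"]
           .mean())
res = AnovaRM(cells, depvar="rt", subject="subject",
              within=["location", "dor"]).fit()
print(res.anova_table)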
Results experiment 1
The total number of erroneous responses (i.e., 4.4% of all trials) corresponds to former studies (Ionta et al. 2007; ter Horst et al. 2010). The ANOVA on the accuracy data revealed a significant DOR effect [F(1,21) = 4.404; P < .05; η² = .173]. This effect was accounted for by a larger percentage of erroneous responses for laterally compared to medially rotated stimuli. No other effects were found to be significant.
For the correct responses, the ANOVA on RTs per Location and angular disparity revealed a significant effect of Angle [F(3,54) = 85.217; P < .001; η² = .826], reflecting increasing RTs with increasing angles of rotation, see Fig. 3. All angles differed significantly from each other (P < .001), except for 0° and 60°. No other effects were significant (all P > 0.25).
Discussion experiment 1
In this first experiment, we tested the spatial dependency of simulated movements of the hand. We hypothesized that the perception of hand stimuli within PPS, but not EPS, would implicitly induce an action simulation of the effector.
Because of the low error rates and the increasing RTs with increasing angles of rotation for stimuli in both PPS and EPS, we can conclude that the participants used mental rotation to solve the task (Parsons 1994). The overall performance did not differ between the two locations, as shown by the non-significant Location effect in the ANOVA on angular disparity. The ANOVA on biomechanical constraints, however, did reveal a marginally significant effect of Location. These differing results occur due to the exclusion of the 0° and 180° rotated stimuli in the latter ANOVA. Consequently, the marginally significant Location effect does not represent differences in overall performance between the two locations. Importantly, we found an engagement in MI for hand stimuli presented within PPS, but not when the same stimuli were presented within EPS. This is evident from the presence of the DOR effect for the Near but not the Far location and shows the influence of biomechanical constraints on the performance for stimuli presented in PPS (Parsons 1994; ter Horst et al. 2010), see Fig. 4. These findings indicate that the engagement in MI exhibits spatial dependency. The observed effects might be attributed to the experience of moving one's hands in the PPS, thereby triggering the use of motor-related simulations of actions. Hands observed in EPS, typically not belonging to the self, might facilitate the use of a third person perspective strategy for judging the hands' laterality.
Fig. 3 Reaction times as a function of angular disparity in experiment 1 for both locations, mirrored at 180° (i.e., 60° and 120° represent the average RTs for 60° and 300°, and 120° and 240° rotated hand stimuli, respectively). Error bars indicate the standard error of the mean (SEM).
Fig. 4 Reaction times for both locations divided into lateral rotation and medial rotation. Lateral rotation indicates rotations away from the mid-sagittal plane and medial rotation indicates rotations toward the mid-sagittal plane. The significant interaction of Location by DOR (P < 0.02), represented by the differences in RTs between lateral and medial rotation (i.e., DOR), was modulated by the location at which the stimuli were presented. Double asterisks indicate significance at the P < 0.002 level. Error bars indicate the standard error of the mean (SEM).
In order to verify if the observed spatially dependent action simulation is also automatically triggered when a graspable object is observed within PPS, we conducted a second experiment. In this second experiment, we used stimuli of graspable objects (i.e., cups), which we presented within PPS and EPS.
Experiment 2
To study the possible spatial dependency of engagement in MI, we again focused on measuring the influence of biomechanical constraints on the performance. This influence can be found in differences in the difficulty of (mentally) grasping the presented cup. For example, if the left hand is used for grasping a cup, then it is easier when the handle of that cup is oriented toward the left than toward the right. In the second experiment, we used stimuli of rotated cups, which we defined as "left" and "right" cups. By dissociating between "left" and "right" cups, we were able to test for possible influences of biomechanical constraints. In the literature on the mental rotation of hands, it was shown that participants make an "estimated guess" of the stimulus laterality prior to the final judgment (Parsons 1987; de Lange et al. 2008a). In other words, participants subconsciously "decide" that they observe, for example, a left hand and perform a mental rotation of their own corresponding hand to verify their decision before making the final judgment (Parsons 1994). For this second experiment, we assumed that participants would mentally grasp the observed cup with the corresponding hand in order to make the final laterality judgment. That is, mentally grasping a left or a right cup with the left or right hand, respectively. This is also in agreement with the introspective results from pilot studies in our lab, in which participants reported mentally grasping the observed cup with the corresponding hand in order to rotate the cup into its canonical position before making the final laterality judgment. Similar to experiment 1, we hypothesized that biomechanical constraints of mentally grasping a shown cup would only be observed for stimuli presented within PPS, but not EPS. This would be evident from prolonged RTs for rotated cup stimuli that are more difficult to grasp with the corresponding hand compared to rotated cup stimuli that are easier to grasp within PPS. For cup stimuli presented in EPS, we expected a lack of biomechanical effects on the RT profile.
Participants
Twenty-five healthy participants took part in this study (24 women, mean age 19.3 ± 1.9 years, mean ± SD). None of the participants had participated in the first study. One participant was excluded from analysis due to an error percentage of more than 15%. All participants had normal or corrected-to-normal vision. No participant reported a history of neurological or psychiatric disorder. The study was approved by the local ethics committee and all participants gave written informed consent prior to the experiment, in accordance with the Helsinki declaration.
Stimuli and procedure
Stimuli were derived from a 3D model designed in a 3D image software package (Autodesk Maya 2009, USA). The cup stimuli consisted of pictures of rotated left and right cups. A left cup was defined as having the handle oriented to the left when situated upright with the face in front, and vice versa for right cup stimuli, see Fig. 5. The cups were shown from both front view and back view. By including both views, the congruent and incongruent stimuli contained all angular disparities. Prior to the experiment, participants were familiarized with the "left" and "right" cups by being shown a real "left" and "right" cup, identical to the stimuli. The participants were not allowed to touch the cups. Participants were instructed to judge as fast and as accurately as possible whether a left or right cup was shown by pressing a button with their left or right hand, respectively. The experimental setup of the second experiment was identical to that of the first experiment except for the stimuli used, i.e., graspable cups instead of hands.
Data analysis
Reaction times smaller than 500 ms and larger than 3,500 ms were excluded from analysis (total loss 1.5% of all trials). Analysis was performed on correct responses. Incorrect responses were a "left" response for a "right" cup and vice versa. Our analysis focused on the possible difference in RTs for stimuli with congruently and incongruently oriented handles. Congruent stimuli consisted of left cups with the handle oriented leftward and right cups with the handle oriented rightward. Incongruent stimuli consisted of left cup stimuli with the handle oriented rightward and right cup stimuli with the handle oriented leftward, see Fig. 5. For example, a "left" cup seen from the front (i.e., face in sight) has a rightward-oriented handle when rotated 180° and hence is denoted as incongruent. Data analysis was performed using a repeated measures ANOVA with the factors Location (Near, Far), Direction of Handle (Congruent, Incongruent), and Angle (0°, 60°, 120°, 180°). This ANOVA design was also used to analyze the accuracy data. Post hoc analyses were Bonferroni corrected and the alpha level was set at P = 0.05.
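The congruency coding itself reduces to comparing the cup side with the handle's on-screen direction; the sketch below illustrates this rule on hypothetical trial records (how the handle direction is derived from view and rotation angle is left out here).

import pandas as pd

def congruency(cup_side, handle_direction):
    # Congruent: left cup with a leftward handle or right cup with a
    # rightward handle; every other combination is incongruent.
    return "Congruent" if cup_side == handle_direction else "Incongruent"

trials = pd.DataFrame({
    "cup_side": ["left", "left", "right", "right"],
    "handle_direction": ["left", "right", "right", "left"],
})
trials["congruency"] = [congruency(s, d) for s, d in
                        zip(trials["cup_side"], trials["handle_direction"])]
print(trials)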
Results experiment 2
The total number of erroneous responses was 5.0% of all trials. The ANOVA on the accuracy data did not reveal any significant effects. The ANOVA on RTs did reveal a significant main effect of Angle.
Discussion experiment 2
In experiment 2, we studied the spatial dependency of action simulation for an observed object. Based on the action simulation hypothesis, we hypothesized that the object stimuli within PPS, but not EPS, would induce action simulation.
Given the low error rates and increasing RT for increasing angles of rotation for stimuli in both locations, we can conclude that the participants effectively mentally rotated the observed objects. The results of experiment 2 show that the facilitation of the effector-specific engagement in MI for corporeal stimuli within PPS that was shown in experiment 1 is also present for the observation of graspable objects within PPS. This is evident from the observed influence of biomechanical constraints on performance for stimuli within PPS, but not within EPS. Moreover, this finding closely corresponds to the previously observed motoric mapping of objects situated within PPS, as evident from the selective firing of different visuomotor neurons to objects in the macaque area F5 (Murata et al. 1997). Collectively, these results imply that participants simulated a grasping movement toward the observed object in PPS, but not EPS.
Discussion
As a direct test of the action simulation hypothesis, we investigated the spatial dependency of the automatic action simulation toward stimuli observed in PPS. In the first experiment, we tested the spatial dependency of action simulation of the hand. In experiment 2, we studied the spatial dependency of the action simulation toward an observed object. Based on the action simulation hypothesis, we hypothesized that the perception of hand (experiment 1) or object stimuli (experiment 2) within PPS, but not EPS, would implicitly induce an action simulation. In correspondence with our hypotheses, the results from both experiments show a spatial dependency of the use of MI. For both hand and cup stimuli, an action is automatically simulated when they are situated within PPS, but not when they are situated in EPS. According to the action simulation theory by Gallese (2005), an action is automatically simulated toward an observed object. The simulation, in turn, enables the mapping of the object in motor terms, thereby mapping the object onto an egocentric frame of reference, according to the simulation theory as proposed by Jeannerod (2001). This is in line with the notion of observed objects eliciting affordances (Gibson 1979). The simulation of an action toward an object might be regarded as the mental rehearsal of the affordances related to the object (Tipper et al. 2006). Costantini et al. (2010) showed that affordances related to an observed object are only elicited for objects observed in PPS, but not EPS. This was evident from an observable compatibility effect between the instructed movement of one arm and the elicited affordances related to the observed object, only for objects situated within PPS. Our results extend the findings of Costantini et al. (2010) by directly showing the actual influence of biomechanical constraints on movement at the cognitive level, without any overt movement. The observed spatial dependency of the influence of biomechanical constraints on performance in our study provides direct evidence for the automatically induced action simulation toward objects observed within PPS, but not within EPS. Additionally, the results of experiment 1 show that the automatic action simulation is also present at an effector-specific level and does not necessarily have to involve the observation of graspable objects, but can also be triggered by the observation of corporeal objects. Importantly, the observation of hands or objects within PPS is not a prerequisite for being able to use MI. Indeed, the use of MI within EPS has also been shown to be elicited in mental rotation tasks of corporeal objects (Parsons 1994; ter Horst et al. 2010) and non-corporeal objects (Kosslyn et al. 2001; Tomasino and Rumiati 2004). This engagement in MI is likely to be attributable to task instructions (Tomasino and Rumiati 2004) and stimulus properties (ter Horst et al. 2010). Consequently, the use of MI, or simulating an action, in itself does not necessitate the involvement of multisensory PPS mechanisms. However, when objects are presented within PPS, multisensory PPS mechanisms are involved in the action simulation (Graziano et al. 1997; Murata et al. 1997; Duhamel et al. 1998; Ladavas et al. 1998a, b; Makin et al. 2007; Gallivan et al. 2009). The involvement of multisensory mechanisms is likely to underlie the differential use of MI between stimuli presented within PPS and EPS in our study.
Our results are in apparent contrast with the findings of Coello et al. (2008), who showed that action simulation is only used for observed stimuli placed near the transition from PPS to EPS. These findings, however, are likely to cover a different aspect of the functionality of the PPS than that covered by the action simulation theory. Coello et al. (2008) studied the use of action simulation in a reachability task, while the action simulation theory covers the automatic use of action simulation toward observed graspable objects within PPS. Consequently, task differences are likely to underlie the differences between the use of action simulation observed by Coello et al. (2008) and that observed in our experiments.
Finally, we consider alternative interpretations. First, the results of experiment 1 might also be explained by the influence of visual experience. Lateral hand rotations at the "Near" location are more difficult to adopt than the same orientation at the "Far" location. This is especially so when the elbow is flexed and the upper arms are parallel to the body, as in our set-up. Because of the biomechanical difficulty of adopting this posture, people rarely adopt it. It is likely that the visual experience of one's own hand in this orientation in the "Near" location is also less than for the "Far" location, which, in turn, might explain the observed differences between lateral and medial rotations. Still, this interpretation cannot completely account for our findings, for two reasons. First, we obtained similar results in our second experiment, and the visual experience of cups with the handle rotated leftward or rightward is not likely to differ. Secondly, for hand laterality judgment tasks, motor-related processes have been shown to be used, for example through postural influences (de Lange et al. 2005, 2006; Ionta et al. 2007; Ionta and Blanke 2009). Importantly, postural effects have been shown to influence the performance for hand stimuli, but not letter stimuli, which typically induce the use of a visual strategy (de Lange et al. 2005). In addition, when participants are instructed to use a visual strategy, the DOR effect is not obtained in a hand laterality judgment task (Tomasino and Rumiati 2004; ter Horst et al. 2010). In our experiment 1, we did obtain the influence of biomechanical constraints for stimuli in PPS, but not EPS. Second, another possible interpretation of the results of experiments 1 and 2 might be sought in the difference in visual angle between the stimuli presented at the Near and Far locations. That is, despite the identical physical size of the stimuli in both locations, the visual angles differed. As a consequence, it may be argued that the larger visual angle of the stimuli at the Near location influenced the engagement in MI. At odds with this explanation is a recent study showing that a consistent visual angle of a cup shown nearby or far away does not influence the relationship between the spatial positioning of objects and the automatic triggering of potential motor acts (Costantini et al. 2010). Moreover, on a more phenomenological level, maintaining identical visual angles for stimuli presented at the Near and Far locations would result in an unrealistic situation, as objects far away are projected smaller on the retina than objects situated nearby.
For the hand stimuli presented in the EPS, we hypothesized no influence of biomechanical constraints on the participants' performance. As we indeed did not find a DOR effect for stimuli presented at the "Far" location, we presume that the participants used a more visually guided strategy, such as Visual Imagery (VI), to solve the task. VI encompasses simulating the execution of a movement from a third person perspective. As a consequence, VI is not subject to biomechanical constraints and has been shown to be used effectively to solve the hand laterality judgment task (Tomasino and Rumiati 2004; ter Horst et al. 2010). It is therefore likely that participants mentally rotated the stimuli presented within EPS in an allocentric frame of reference.
In sum, in the present study we found that the presentation of stimuli of hands and graspable objects within PPS resulted in the engagement in MI, whereas the same stimuli presented in EPS did not. These findings provide direct evidence for the action simulation hypothesis and show the automatic action simulation toward objects presented in PPS, but not when presented in EPS.
Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
ICT-Pedagogy Integration in Elementary Classrooms: Unpacking the Pre-service Teachers’ TPACK
This study aimed to investigate the Technological Pedagogical Content Knowledge (TPACK) self-efficacy and ICT integration skills of elementary pre-service teachers in elementary classrooms. The respondents were fifty-two (52) elementary pre-service teachers enrolled in the student teaching program at the Central Luzon State University. Results revealed that most of the respondents perceived themselves to be highly proficient in all domains of the TPACK framework: Technology Knowledge (TK), Content Knowledge (CK), Pedagogical Knowledge (PK), Pedagogical Content Knowledge (PCK), Technological Content Knowledge (TCK), Technological Pedagogical Knowledge (TPK) and Technological Pedagogical Content Knowledge (TPCK). Most of them were found to be good at integrating ICT in classroom instruction, particularly in terms of planning and implementation. Respondents' GPA in Educational Technology and ICT-related courses was found to have a significant negative relationship with their planning and implementation of ICT-integrated instruction. Their TPACK self-efficacy had a highly significant relationship with their planning and implementation of ICT-integrated instruction.
Introduction
As modern technology paved its way into classrooms, there had been an increased interest in the development of technology integration in instruction to provide better quality education for students. As Adcock (2008) emphasized, "the evolution of teaching and learning through technology integration is apparent to all levels of education which has changed the classroom as well as the roles of the teachers and students". This inspired a new conceptual reform in delivering quality and effective instruction, and the need for individuals to be involved in technological change and development had arisen (Kazu & Erten, 2014). Recognizing the possible significant benefits of technology in the field of education, various researchers tested and evaluated the effects of modern technology on teaching and learning (Sife, Lwoga, & Sanga, 2007; Walters & Lydiatt, 2004). They concluded that properly designed learning materials inspired and delivered by modern technology could add more value to the teaching and learning environment (Walters & Lydiatt, 2004). In developing countries (like the Philippines), information and communications technology (ICT) had been considered a key factor in improving teaching and learning processes (Sife, Lwoga, & Sanga, 2007).
Though the utilization of modern technology in classrooms yielded positive effects on teaching and student learning, researchers pointed out that it was important to have the right competency and literacy in utilizing these technologies to really improve learning. The proper use of available modern technology [rather than the mere presence of that technology] could advance student learning and could improve the efficiency and effectiveness of teaching (Walters & Lydiatt, 2004). Thus, teachers should have the technical competence and literacy to properly use technologies in classroom instruction. Kereluik, Mishra, and Koehler (2011) explained that technical competence and technical literacy require knowledge and skills in how to use technologies, which would provide comprehensive learning and effective teaching.
The responsibility of training prospective teachers to gain the technical competence and literacy vital for successful classroom technology integration lay in the hands of institutions offering teacher education programs. The need for pre-service teachers to be involved in technological change had arisen, demanding that teacher education programs strengthen courses integrating technology in classroom teaching.
In the Asia-Pacific Region, various efforts and practices (Sife, Lwoga, & Sanga, 2007; UNESCO, 2013) were undertaken by different institutions to strengthen the use of ICT for teaching and learning. The United Nations Educational, Scientific and Cultural Organization (UNESCO) Bangkok collected and documented case studies from different institutions offering Educational Technology and ICT-related courses in different countries, including the Philippines. The reviewers concluded that the cases demonstrated the viability of tweaking existing Educational Technology courses to be more adaptive to the needs of current realities, towards better integration of content with ICT and its subsequent application in real-world environments (UNESCO, 2013).
The Commission on Higher Education (CHED) also recognized the importance of preparing quality prospective teachers capable of integrating technology in classroom teaching. Through CMO No. 30, series of 2004, the CHED mandated all Teacher Education Institutions (TEIs) to strictly follow the set of program specifications and embrace the new teacher education curriculum. This curriculum included two (2) Educational Technology courses to prepare teachers with the technological competence and skills to facilitate and evaluate learning for diverse types of students in a variety of learning environments.
It was in this light that this study was conceptualized. With the knowledge and experiences acquired from various content, theory, methods/strategies, and field study courses, pre-service teachers were expected to have acquired the technological, pedagogical and content knowledge important for establishing technology-integrated instruction. The advancement of technology in our classrooms had increased the interest of Teacher Education Institutions (TEIs) in developing prospective teachers capable of integrating technology in classroom teaching. However, some researchers suggested that many teacher education programs had not been preparing teacher candidates adequately to integrate technology, and many teachers in schools were reluctant to use technology for teaching and learning (Walters & Lydiatt, 2004; Zhao, Pugh, & Sheldon, 2002). Supporting the results of research conducted by Vannatta and Beyerbach (2000), Hew and Brush (2007) recognized that student teachers had very little knowledge about effective technology integration, even after completing courses about instructional technology.
Though technology courses had offered a variety of technological tools and had provided opportunities to learn and practice technical skills, it had been emphasized that mere exposure to a number of ICT tools would not necessarily mean that pre-service teachers can develop the ability to design successful, technology-integrated lessons (Hyo-Jeong & Bosung, 2009). These setbacks reported in foreign contexts prompted the researchers to investigate the current condition of ICT integration among elementary pre-service teachers.
In this regard, this study aimed to investigate pre-service teachers' ICT integration in elementary classrooms during the student teaching program and to determine the relationship between pre-service teachers' socio-demographic characteristics, Technological Pedagogical Content Knowledge (TPACK) self-efficacy, and their preparation and implementation of ICT-integrated lessons. Moreover, it was also conducted to determine the problems or challenges that might impede successful technology integration in classroom instruction. Specifically, it aimed to:
1. describe the socio-demographic characteristics of the respondents in terms of age, sex, major/specialization, grade level handled, subject taught, ICT-related trainings attended, personal ICT equipment, and GPA in Educational Technology and ICT-related courses;
2. determine the respondents' TPACK self-efficacy;
3. describe the ICT program of the cooperating schools in terms of administration, facilities and equipment, and problems encountered;
4. find out the respondents' ICT integration in classroom instruction in terms of planning (curriculum goals and technologies, instructional strategies and technologies, technology selections, and fit) and implementation (instructional use and technology logistics);
5. determine whether pre-service teachers' socio-demographic characteristics and TPACK self-efficacy were related to ICT integration in classroom instruction; and
6. identify problems of pre-service teachers in integrating ICT in classroom instruction.
Theoretical and conceptual framework
This study was anchored on the TPACK framework developed by Mishra and Koehler (2006) and the Self-Efficacy Theory proposed by Bandura (1977). This study also used the TPACK-based evaluation model suggested by Abbitt (2011). The TPACK framework was introduced to the field of educational research for understanding the teacher knowledge required for effective technology integration (Mishra & Koehler, 2006). This framework arises from the multiple interactions among content, pedagogical, and technological knowledge. It encompassed understanding the representation of concepts using technologies; pedagogical techniques that may apply technologies in constructive ways to teach content in differentiated ways according to students' learning needs; knowledge of what could make concepts difficult or easy to learn and how technology can help redress conceptual challenges; knowledge of students' prior content-related understanding and epistemological assumptions; and knowledge of how technologies can be used to build on existing understanding to develop new epistemologies or strengthen old ones (Mishra & Koehler, 2008). It was composed of seven domains: Technology Knowledge (TK), Content Knowledge (CK), Pedagogical Knowledge (PK), Pedagogical Content Knowledge (PCK), Technological Content Knowledge (TCK), Technological Pedagogical Knowledge (TPK), and Technological Pedagogical Content Knowledge (TPCK).
Self-efficacy referred to an individual's perception of his/her capacity to deal with different challenges and to accomplish an activity (Senemoğlu, 2010). It was described as an individual's perception of the personal ability to assume a task and complete it, thereby enabling the individual to accomplish his/her goals amidst challenges or difficulties. Furthermore, Bandura (1997) claimed that self-efficacy could predict positive motivational and achievement outcomes across contexts, including persistence and performance. Pre-service teachers' self-efficacy oriented on the TPACK framework was necessary to understand their perceived knowledge of the different domains under the framework. Abbitt (2011) examined the development of the TPACK framework with a particular focus on assessing TPACK in the context of pre-service teacher preparation programs. In his review of different existing methods, he suggested the combination of different valid and reliable instruments to properly assess pre-service teachers' development of TPACK and technology integration in classroom instruction. He recommended the utilization of [a] the survey of pre-service teachers' knowledge of teaching and technology (Schmidt et al., 2009) and [b] the technology integration assessment rubric (Harris, Grandgenett, & Hofer, 2010). He further explained that these instruments were highly complementary in their current form and appropriate to elementary education or early childhood education programs due to the design of the survey of pre-service teachers' knowledge. Based on the framework, theories, and model presented above, the researchers were able to design the conceptual paradigm of this study.
Research design
This study employed a mixed-methods design with both quantitative and qualitative components. For the quantitative part, the researchers used a correlational design; for the qualitative part, interviews with selected participants were used. The study was correlational in that it aimed to determine whether the respondents' socio-demographic characteristics and TPACK self-efficacy were related to the classroom integration of ICT.
Sampling procedure
Since the aim of this study was to investigate the integration of ICT in classroom instruction among elementary pre-service teachers, the researchers employed a purposive sampling method, in which respondents are deliberately selected based on criteria relevant to the study (Black, 2010). Pre-service teachers deployed at South Central School, DepEd-CLSU (Lab.) Elementary School, San Jose West Central School, and San Jose East Central School were selected as respondents of the study. These cooperating schools were chosen over other schools on the assumption that they had established ICT programs and more facilities to support ICT integration in classroom instruction. The initial plan of the researchers was to have a total enumeration of the pre-service teachers who fit the abovementioned criteria. Although each of them was given a copy of the research questionnaire, not all of them positively responded to the request. Fifty-two (75.4%) pre-service teachers participated in the study.
Instrumentation
Four (4) instruments were used in this study. The first instrument was developed by the researchers, covering the respondents' socio-demographic characteristics (age, sex, major/specialization, grade level handled, subjects taught, ICT-related trainings attended, personal ICT equipment, and GPA in ICT-related and Educational Technology courses). The second instrument was based on the "Survey of Pre-service Teachers' Knowledge of Teaching and Technology" developed by Schmidt et al. (2009). It consisted of the seven sub-domains under the TPACK framework (TK, CK, PK, PCK, TCK, TPK, and TPCK). Each sub-domain included 4-10 survey items measuring the multiple knowledge domains represented in the framework.
Respondents were asked to indicate the degree of their proficiency on the survey items of the instrument on a scale of 1 - Strongly Disagree, 2 - Disagree, 3 - Agree, and 4 - Strongly Agree. A pre-test was also conducted with pre-service teachers at the Muñoz North Central School to test the reliability of the instrument. To measure internal consistency, the alpha reliability coefficient was calculated. The result was 0.951, indicating that the survey questionnaire was reliable, considering that 0.70 or higher can be considered "acceptable" in most social science research situations (Mishra & Koehler, 2006).
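As an illustration of how such a reliability coefficient is obtained, the sketch below implements the standard Cronbach's alpha formula for a respondents-by-items score matrix; the pilot data here are randomly generated placeholders, not the actual pre-test responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of survey items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical pilot data: 20 respondents x 7 items rated on the 1-4 scale
rng = np.random.default_rng(42)
pilot = rng.integers(1, 5, size=(20, 7))
print(f"alpha = {cronbach_alpha(pilot):.3f}")   # 0.70 or higher is conventionally "acceptable"
```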
The third instrument was utilized to determine pre-service teachers' planning and implementation by analyzing lesson plans and observing actual demonstrations. The Technology Integration Observation Instrument, developed by Hofer et al. (2011), was found to be highly reliable, with a computed internal consistency of 0.914 (Cronbach's alpha). Because of its validity evaluations, it was offered to other researchers for research purposes. Thirty (30) respondents were interviewed during classroom visitations, observations, and the analysis of lesson plans. Coding of responses was applied to analyze the data for emergent themes. The last instrument was also developed by the researchers and designed to describe the ICT program of the cooperating schools in terms of administration, facilities, equipment, and problems relative to teaching and learning.
Methods of data analysis
Based on the objectives and hypotheses of the study, the data were analyzed using different statistical methods in the Statistical Package for the Social Sciences (SPSS). Descriptive statistics such as means, standard deviations, percentages, ranks, and frequency counts were utilized to describe the socio-demographic characteristics, TPACK self-efficacy, and planning and implementation of the respondents. The Pearson Product-Moment Correlation was used to identify the relationship between the independent and dependent variables. For the qualitative part, coding was applied to analyze the data gathered from the thirty respondents for emergent themes (Wolcott, 1990). After repeated readings, the overlap shown among codes was reduced: similar codes were clustered together and combined into a number of broad categories or themes. To provide more detailed background information on the data, the themes were tallied to reveal the frequency of responses and converted to percentages for an easier view of the data summary.
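A minimal sketch of this quantitative workflow, written in Python rather than SPSS; the variable names and values below are hypothetical placeholders, not the study's actual data.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical records: one row per pre-service teacher
df = pd.DataFrame({
    "tpack_self_efficacy": [3.1, 2.8, 3.4, 3.0, 2.9, 3.3],  # mean of TPACK survey items
    "ict_integration":     [3.2, 2.7, 3.5, 3.0, 2.8, 3.4],  # observation rubric mean
})

# Descriptive statistics (mean, standard deviation, quartiles, etc.)
print(df.describe())

# Pearson Product-Moment Correlation between an independent and a dependent variable
r, p = pearsonr(df["tpack_self_efficacy"], df["ict_integration"])
print(f"r = {r:.3f}, p = {p:.3f}")  # the relationship is significant if p < 0.05
```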
Socio-demographic characteristics
Age. The age of the respondents ranged from 19 to 32 years old with a mean of 20.06 (SD = 2.26). Almost half (48.1%) of the respondents were 19 years old, while 40.4 percent were 20 years old. A very small number of respondents (11.5%) were 21 years old and above. This indicated that almost all of the respondents were in the age bracket that could be expected of typical enrollees at this fourth year level of college.
Sex. Based on the gathered data, most of the respondents were females (88.5%). This confirmed the typical condition in our educational system, where teaching positions were predominantly held by females.
Major/specialization. Most of the respondents (86.5%) were taking Generalist as their specialization, while only seven respondents (13.5%) were taking Pre-school Education. According to the official enrolment report of the Office of Admissions for the school year 2016-2017, the number of Bachelor of Elementary Education (BEEd) students taking Generalist as their specialization was greater than the number of those specializing in Pre-school Education.
Grade level. Findings also indicated that the respondents were widely spread across different grade levels in the cooperating schools. Respondents who handled Grade 5 pupils obtained the highest number (12 or 23.1%), followed by respondents assigned to Grade 3 (9 or 17.3%). The least number (5 or 9.6%) was obtained by respondents who were handling Grades 1 and 2.
Subject taught. Respondents who handled Science obtained the highest frequency (19 or 36.5%), followed by respondents assigned to Mathematics (13 or 25.0%) and Language Arts - Filipino and English (11 or 21.2%).
ICT-related trainings attended.
More than half of the respondents (53.8%) indicated that they had attended ICT-related trainings and/or seminars. All of the indicated seminars were categorized as local and were primarily provided by the institution. Meanwhile, almost half of the respondents (46.2%) indicated that they had not attended any ICT-related training and/or seminar. Probably, the college provided seminars/trainings but not all pre-service teachers were involved.

Personal ICT equipment. Table 1 also revealed that most of the respondents owned three to four pieces of ICT equipment (46.2%), followed by respondents with one to two (34.6%). Meanwhile, only three respondents indicated that they owned seven or more pieces of ICT equipment (5.8%). When asked about the ICT equipment possessed, almost all of the respondents (96.2%) owned a cellular/mobile phone, and forty-seven respondents (90.3%) owned personal computers (laptop, desktop, netbook, or notebook). Only four (2.38%) respondents had their own router/switch/hub installed with an internet connection.
GPA in Educational Technology and ICT-related courses. Respondents' academic performance in subjects related to the use of technology in teaching obtained an overall mean of 2.13, which could be verbally interpreted as "Good" under the CLSU grading scheme. It can also be noticed that the mean of the grades obtained by the respondents in Educ 120 (ICT in Education) was 2.15 (Good), in Educ 120a (Educational Technology I) 2.18 (Good), and in Educ 120b (Educational Technology II) 2.08 (Good). Only eight respondents obtained a grade of 1.00-1.50 (Excellent) in all ICT-related courses taken.

TPACK self-efficacy

Table 2 presents the TPACK self-efficacy of the respondents. Analyzing the data presented, they posted an overall TPACK self-efficacy mean of 3.03, which indicated that their TPACK self-efficacy was at the level of "High proficiency". Also, the overall standard deviation was 0.31, indicating a narrow distribution of responses. These findings could possibly be attributed to the prior experiences of the respondents in using technology in teaching, as all of them underwent ICT-related courses such as ICT in Education, Educational Technology I, and Educational Technology II. Among the domains in the TPACK framework, TCK and TPK obtained the highest means (x̄ = 3.20 and 3.13, respectively), while TK obtained the lowest mean (x̄ = 2.87). It can also be noticed that respondents were found to be highly proficient in all items except for the item "I can explain advantages of using technology in a content area", which had a descriptive rating of "Very high proficiency" (x̄ = 3.31).
Technology knowledge (TK). The TK of the respondents had an overall mean of 2.87, which indicated that the respondents were highly proficient in this domain. All items under this domain received a descriptive rating of "Agree". The overall standard deviation of 0.27 indicated that respondents' answers were narrowly dispersed in terms of TK. It can be implied that the elementary pre-service teachers perceived themselves as having sufficient knowledge to learn technologies easily and the technical skills needed to use technology. These findings supported the results obtained by Kazu and Erten (2014) in testing the TPACK self-efficacies of pre-service teachers in Turkey.
Content knowledge (CK). Looking at the respondents' self-efficacy under CK, the respondents were highly proficient in matters concerning content based on the overall mean obtained (x̄ = 2.97). These findings supported the study of Kazu and Erten (2014), in which pre-service teachers perceived themselves as proficient in various lesson contents. These results could also be related to the respondents' degree program (Bachelor of Elementary Education), where different content areas were included in the curriculum. Also, interdisciplinary integration was included in professional subjects, allowing them to relate two subject contents in one lesson.
Pedagogical knowledge (PK). The domain PK had an overall mean of 3.00 and a standard deviation of 0.25, which indicated that the respondents' knowledge of teaching was sufficient. All items under PK received a descriptive rating of "Agree". The same result was obtained by Kazu and Erten (2014), where pre-service teachers viewed themselves as efficacious when it came to assessing student performance. These findings suggested that pre-service teachers had sufficient self-efficacy in classroom management, learning and teaching methods, and learning and teaching processes and practices.
Pedagogical content knowledge (PCK). The overall mean of the items was 2.99 with a descriptive rating of "High proficiency", and the standard deviation was 0.27. These findings implied that the respondents' perceived knowledge of pedagogies and teaching practices was sufficient. As suggested by Aquino (2015), this can also be attributed to having professional subjects which covered the preparation and utilization of different methods and strategies to teach a specific content area. Moreover, respondents were exposed to lesson plan development and demonstrations aligned with different subject matters during their student teaching program.
Technological content knowledge (TCK).
The respondents suggested that their knowledge in this domain was sufficient. With an overall mean of 3.20 and a standard deviation of 0.36, respondents were found to be highly proficient in this domain. These findings coincided with the findings of Aquino (2015), who found that science pre-service teachers' high TCK could be attributed to their personal ICT equipment or devices. Most of the respondents possessed mobile phones and computers which could be used to gather and analyze data or information about a specific content.
Technological pedagogical knowledge (TPK). On the aspect of TPK, respondents revealed that they had a sufficient understanding of how teaching and learning change when particular technologies are used. With an overall mean of 3.13 and a standard deviation of 0.37, respondents were found to be highly proficient in this domain. This entailed that respondents had confidence in their use of technologies to improve teaching and learning. Respondents' adaptation and use of technology in different teaching activities can be associated with the ICT devices they possessed. This was observed by the researcher and the cooperating teachers during their implementation of the lesson. The majority of them used the computer in various ways to support different teaching activities (e.g., using the computer for drill, review, motivation, and application).
Technological pedagogical content knowledge (TPCK). This domain was seen as the intersection of all three bodies of knowledge. Understanding of this knowledge could go above and beyond understanding technology, content, or pedagogy in isolation; rather, it is an emergent form of knowledge that captures how these forms of knowledge interact with each other (Koehler & Mishra, 2008). TPCK obtained an overall mean of 3.07 with a descriptive rating of "Agree (High proficiency)" and a standard deviation of 0.42. All of the items under this domain also got a descriptive rating of "Agree", which revealed that respondents had confidence that they were highly proficient in this domain. As argued by Aquino (2015), the way pre-service teachers viewed the interrelationship of content, pedagogy, and technology resulted in their confidence in choosing and utilizing technologies that would enhance their teaching and the learning of a specific content or topic. This can be associated with their learning experiences while they were attending classes in college and performing demonstration teaching inside and outside the campus.

ICT integration in classroom instruction

Table 3 presents the results of classroom observations and the analysis of lesson artifacts during the student teaching program. To generate quantitative data from observations, the researcher utilized the "Technology Integration Observation Instrument" developed by Hofer et al. (2011) during the actual demonstrations. It can be observed that pre-service teachers' ICT integration in classroom instruction obtained an overall mean of 3.06 and a standard deviation of 0.36, which implied that most of the observed lessons were rated as "good".
Based on the gathered data, most of the respondents were observed to have selected technologies aligned with one or more curriculum goals set in their lesson plans. With a mean of 3.18, the alignment of curriculum goals and technologies was described as "good". Most of the respondents were also observed to have used technologies to support instructional strategies (x̄ = 3.10) and to have had their content, instructional strategies, and technology fit together within their lesson plans (x̄ = 2.85). The technology selections of the respondents were also considered appropriate and "good" based on the observed lesson plans (x̄ = 3.10). Most of the respondents were also considered "good" and effective in using technologies in instruction (x̄ = 3.11). Teachers and students were able to use and operate the technologies presented in the class (x̄ = 2.94). These findings indicated that most of the respondents were "good" at planning and implementing technology-integrated lessons in the classroom.
Planning.
When asked what encouraged them to select a topic or concept for the integration of technology, the majority of the respondents disclosed that the improvement of pupils' learning was their priority (17 or 32.7%). Increasing pupils' motivation (12 or 23.1%) and ease of introducing a topic (10 or 19.2%) were also considered in the selection of the topic and technology to be integrated. These findings can be attributed to their confidence levels in the TPK and TPCK domains, which indicated that they were highly proficient in explaining the advantages of using technology in a content area and in selecting technologies that would enhance their teaching and support students' learning. Moreover, all of the respondents indicated that learning a lesson integrated with technology was a good idea. One respondent claimed: "I incorporated technology in this lesson because it helped my students to understand our lesson by listening to the audio recorder. Other than that, it's a way to motivate them to listen and participate in our discussion." It was also worth mentioning that the majority of the respondents revealed that the materials (e.g., pictures used in the presentation, video presentations) they used in the implemented lesson were obtained from other sources (e.g., internet-based materials, downloaded files). Other respondents used self-created materials, while others used a combination of self-created materials and materials obtained from other sources. Some respondents modified and customized materials obtained from other sources. One respondent shared: "I searched for a video in YouTube and edited it a little". Another participant commented: "Audio clips are downloaded from YouTube". Respondents were also asked about the activities or exercises they prepared for teaching their lesson with technology. Most of the respondents used technology for presentation and illustration (22 or 42.3%), interactive activities (9 or 17.3%), and listening activities involving audio materials (6 or 11.5%).
The preparations made by the respondents were also affected by the use of technology. Almost all of them felt prepared to teach the lesson using technology. When asked how the incorporation of technology affected their preparations, half of the participants revealed that the use of technology made their preparations faster and easier. However, twenty-five percent of the respondents said that it required time and effort to finish the materials of the lesson. The mixed perceptions of the respondents can be explained by how they obtained materials appropriate for their lesson and how they used ICT tools in the preparation of their materials.
Almost half of the respondents indicated that they had obtained their materials from other sources (materials commonly downloaded from the internet, e.g., video clips). Correspondingly, almost half of them disclosed that they used these materials for presentation/illustration. Obtaining readily available materials, compatible with their set lesson objectives and aligned with their teaching strategies, positively impacted their preparation for their lessons by making it cost- and time-efficient. When interviewed, one respondent said, "In preparing my lesson, technology helped me save time and effort; and my expenses decreased". Another respondent commented: "Well, I prepared differently because incorporating technology in the preparation of my lesson made everything a lot easier than preparing using traditional materials." On the other hand, it is likely that the respondents who indicated that they had put more effort and time into the preparation of their lesson were those whose prepared materials were self-created. It can also be related to the teaching activities they were trying to implement and the accessibility of the needed equipment/tools. As one respondent said, "The incorporation of technology was not easy because it took time to prepare the presentation and [there is] lack of equipment to be used." Respondents' motivation relative to technology integration aligned with their expectations. Most of them expected that, because of technology integration, their students would learn the lesson easily (38 or 73.1%). Further, they also expected that their students would actively do the activities in the lesson (6 or 11.5%) and would appreciate the use of technology in learning (6 or 11.5%).
Implementation.
During the implementation of the lesson, respondents were rated as "good" in terms of the instructional use of technology and the operation of technology inside the classroom (see Table 3). When observed, they were comfortable using the technology in teaching the lesson. When asked what aspect of the technology-integrated lesson went well and supported student learning, most of the respondents disclosed that the use of technology in developmental activities (20 or 38.5%) and abstraction (11 or 21.2%) helped the students in learning the content or topic of the lesson. Moreover, the operation of the technology was thought to be helpful in supporting the students. They also described their students as more engaged and active during the lesson. Students' attention to the content was also sustained.
Some of the respondents believed that they needed to improve their activities in application (9 or 17.3%), utilization of technology inside the classroom (8 or 15.4%), classroom management (6 or 11.5%), communication skills (5 or 9.6%), and motivational activities (5 or 9.6%). When asked about the difficulties they had in guiding the students to use technology, they disclosed that students did not behave properly (14 or 26.9%), were too focused on the technology and did not listen to their instructions (13 or 25.0%), and did not use the technology properly (7 or 13.5%).
Contextual factors affecting technology integration. During observation, contextual factors that affected the planning and implementation of the technology-integrated lessons were also considered. These were used to help scorers analyze the observed lessons objectively in relation to the content objectives and teaching approaches/methods employed by the respondents. Based on Table 4, there were contextual factors that positively and negatively affected the observed lessons. Among the positive contextual factors noted during observation, the suitability of the technology used for instruction (41 or 78.8%) was the most observable. This indicated that respondents were able to use the technology to support teaching and learning during the progress of the lesson. It was followed by respondents' methods, strategies, and techniques used inside the class (39 or 75.0%), and students' attitude towards learning (9 or 17.3%). On the other hand, the frequent contextual factors that negatively affected the observed lessons were the following: availability of needed technology (22 or 42.3%), medium of instruction (20 or 38.5%), and behavior of students (11 or 21.2%).

List of materials used during the observed lessons. Following the modified guidelines set for the use of the Technology Integration Observation Instrument, the researcher recorded all ICT tools/equipment used in the lesson implementation for the purpose of discussion. Among the ICT tools used in the observed lessons, as shown in Table 5, the computer (42 or 80.8%) was the most frequently used ICT material. Most of these computers (laptops, netbooks, and notebooks) were personally owned by the respondents. It was followed by the television (8 or 15.4%), projector (7 or 13.5%), and speaker (7 or 13.5%).
ICT program of cooperating schools
Cooperating schools are very important in the culminating experience of prospective teachers. They are key partner institutions providing real-world experience to practicing student teachers.
Administration. All schools had ICT coordinators (9 or 100.0%), and more than half (5 or 55.6%) had designated ICT teachers. While it was true that there were ICT coordinators as mandated by the Department of Education, it was worth mentioning that none of the participating schools had an ICT technician, a role vital to maintaining the equipment in the school. Less than half of the schools (4 or 44.4%) had a budget for the implementation of the school ICT program plan.
Facilities and equipment. Seven schools (77.8%) had an ICT building/room. All of the schools with an ICT building/room had computer tables, chairs, and proper electrical wiring. It can also be observed that only three schools (33.3%) indicated that their ICT building/room had at least ten networked personal computers and air-conditioning units. All schools were using the Windows operating system. Four schools (44.4%) stated that their internet service provider was Globe Telecom, while three schools (33.3%) indicated PLDT.
It was indicated that all schools had available LCD projectors, desktop computers, and printers. Based on the gathered data, the LCD projector obtained the highest total number of available units (72), followed by the desktop computer (67) and the television (51). Meanwhile, the telephone (7), interactive whiteboard (7), and digital camera (5) garnered the least total numbers of available units. It was worth mentioning that, among the schools with LCD projectors, only one cooperating school indicated a total of fifty available units, which was relatively larger than in the other schools with 1-5 projector units. Moreover, only three schools stated that the total number of their available desktop computer units was more than ten.

Problems relative to ICT integration in teaching and learning. The researcher also asked about common problems relative to ICT integration in teaching and learning in the cooperating schools. Coding of answers was done to categorize the themes of the answers expressed by the respondents. As shown in Table 7, insufficient number of computer units (3 or 33.3%), lack of an ICT teacher/technician in the school (3 or 33.3%), and lack of trainings to enhance teachers' knowledge of ICT (3 or 33.3%) tied in the first rank. These were followed by poor network connection (2 or 22.0%), lack of budget to implement the program (1 or 11.0%), and defective computer units (1 or 11.0%).
Relationship between elementary pre-service teachers' socio-demographic characteristics and ICT integration in classroom instruction
Based on the results shown in Table 8, GPA in Educational Technology and ICT-related courses was found to have a highly significant negative relationship with pre-service teachers' planning of instructional strategies and technologies (r = -0.623), while the rest of the variables were found to have no relationship with their ICT integration in classroom instruction.
GPA and instructional strategies and technologies. It was found that GPA in Educational Technology and ICT-related courses had a highly significant negative relationship with pre-service teachers' planning of instructional strategies and technologies (r = -0.623). The negative correlation can be explained by the scale of the respondents' grades in subjects related to technology integration, where 1.00 was the highest value and 3.00 the lowest. The researchers based this order on the grading scheme of Central Luzon State University, where the highest possible passing grade that can be given to an individual is 1.00 (Excellent) while the lowest possible passing grade is 3.00 (Passing). Thus, this relationship indicated that higher academic performance of respondents in technology-related courses positively affected their integration of technology in classroom instruction, particularly their planning of the instructional strategies and technologies to be utilized. This was also supported by the data gathered through interviews. In the summary of coded responses from the interviews conducted after the observation, respondents suggested that because of the educational technology courses they had attended, they became prepared and knowledgeable on how to integrate ICT in their lessons. Furthermore, these courses also helped them create lessons that were easier to discuss and understand, helped them create good presentations, and helped them gain confidence in using technology inside the classroom.
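To make the sign reversal concrete, the sketch below uses made-up grades and rubric scores (not the study's data) to show that reverse-coding a scale on which 1.00 is the best grade flips the sign of the Pearson coefficient while leaving its magnitude unchanged.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical GPAs on the CLSU scale, where 1.00 is the BEST passing grade
gpa      = np.array([1.25, 1.50, 1.75, 2.00, 2.25, 2.50, 2.75])
planning = np.array([3.80, 3.60, 3.40, 3.10, 2.90, 2.60, 2.40])  # rubric rating

r_raw, _ = pearsonr(gpa, planning)
print(f"raw scale:      r = {r_raw:.3f}")   # negative: a better grade is a LOWER number

# Reverse-code the GPA (e.g., 4.00 - gpa) so that a higher value means better performance
r_rev, _ = pearsonr(4.00 - gpa, planning)
print(f"reversed scale: r = {r_rev:.3f}")   # same magnitude, positive sign
```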
It was worth mentioning that when respondents were asked about their preparation in their educational technology courses, they indicated that their professors/instructors helped and prepared them to successfully integrate technology in their lessons. On the other hand, the other variables were found to have no relationship with pre-service teachers' ICT integration in classroom instruction. These findings could have been affected by the homogeneity of the sample. Meanwhile, numerous studies found similar results, where these factors were not significantly correlated with technology integration (Berry, 2011; Brunk, 2008; Chen, 1986; Inan & Lowther, 2010; Karakaya & Avgin, 2016; Schulze, 2014). Age, for instance, was found to have no direct relationship with teachers' integration of technology in instruction. As disclosed by Schulze (2014) in his study on the relationship between teacher characteristics and educational technology, technology integration did not seem to have a dominant respondent age group. This result was also supported by technology experiments conducted by Berry (2011), Brunk (2008), and Inan and Lowther (2010). As cited by Schulze (2014), they found that age did not seem to play a role in determining the amount of technology integration.
Sex, which was considered one of the limitations of this study, was found to be not correlated with technology integration and TPACK. The assumption that gender might affect teachers' attitudes toward ICT was first rejected in Chen's (1986) study, where he found no correlation between gender and teachers' attitudes toward integrating technology in instruction. In a study conducted by Karakaya and Avgin (2016), it was found that there was no statistically significant difference between male and female respondents' TPACK-SCS and other sub-dimensions of TPACK (TCK, TK). This confirmed the results that there were no statistically significant differences among pre-service teachers' self-efficacy perception levels towards technology integration based on gender (Keser, Yilmaz, & Yilmaz, 2015). In other words, being male or female did not have an impact on self-efficacy perception levels towards technology integration or on the actual integration of technology in classroom instruction. However, this assumption was not statistically proven given the uneven number of male and female respondents in this study.
Moreover, the ICT-related trainings and seminars attended and the number of personal ICT equipment owned by the respondents were found to be not significantly related to their integration of technology. Although more than half of the respondents indicated that they had attended a seminar related to ICT, the results showed that this did not have any relationship with their performance in integrating technology in the classroom. This can be interpreted in a way that while trainings and seminars may positively affect an individual's knowledge about technology, they may have limited impact on classroom practice.

Relationship between elementary pre-service teachers' TPACK self-efficacy and ICT integration in classroom instruction

Table 9 shows the relationship between the respondents' TPACK self-efficacy and ICT integration in classroom instruction. It was found that all TPACK sub-domains were significantly related to technology integration.
Technology knowledge (TK) and ICT integration. It was found that respondents' perceived TK had a highly significant relationship with technology selections (r = 0.526) and technology logistics (r = 0.542). The relationship with technology selections indicated that respondents' perceived knowledge about various digital technologies, such as the internet, digital video, interactive whiteboards, and software programs, was significantly related to their planning and selection of technology appropriate to the curriculum goals of their lesson and the instructional strategies to be employed. The high correlation with technology logistics (r = 0.542) indicated that their perceived knowledge of various digital technologies greatly affected their use and operation of technologies during the implementation of the lesson.
These findings can be explained by the respondents' self-efficacy in the selection and utilization of technologies appropriate for teaching and learning content. This coincided with the results found by Mustafina (2016), namely that the level of confidence and knowledge that respondents possess plays a significant role in their attitudes toward technology. These aspects predetermined the teachers' acceptance of the technology and their "likelihood" of using ICT in pedagogical/teaching practices.
Respondents reported that they were highly proficient when it came to knowledge related to technology. This was verified when respondents disclosed that they chose appropriate technologies for the improvement of pupils' learning of the topic and that they were comfortable using technology inside the classroom. Their high confidence in perceived TK and their actual use of technology in lesson implementation can also be related to their exposure to the available technologies. Based on the data given in Table 5, almost all of them used computers as an ICT tool in teaching and learning subject content, which was one of the common ICT tools they were exposed to. This supports the relationship between having technical skills in using technologies and gaining technical competency in using ICT tools in teaching and learning.

Content knowledge (CK) and ICT integration. As shown in Table 9, the perceived CK of the respondents was found to have a significant relationship with technology selections, fit, and technology logistics. Though the significance of the correlation was not high, the data still suggested that respondents' perceived CK was related to their selection of appropriate technology in relation to the curriculum content and instructional strategies (r = 0.306). Respondents' perceived CK was also found to be significantly correlated with the fit of curriculum goals, instructional strategies, and technologies used in the instructional plan (r = 0.285). It was also indicated that their perceived CK had a significant relationship with their utilization of technologies during the implementation of a lesson (r = 0.316). This suggested that their knowledge about various subjects/topics could affect the way they used a particular technology inside the classroom.
Subject matter/content was thought to be a major factor that a teacher should consider when planning teaching and learning activities and selecting appropriate technologies. Respondents' understanding of the content to be taught (having sufficient knowledge about various content areas; explaining various concepts in a specific content area; having various ways and strategies of developing understanding of a specific content area; and making appropriate connections to other content areas) had a direct effect on their selection and use of appropriate technologies. This clearly supported Shulman's (1986) claim that teachers must know and understand the subject they teach before they present it to the students. Otherwise, teachers who do not have these understandings can misrepresent those subjects to their students, as argued by Ball and McDiarmid (1990).

Pedagogical knowledge (PK) and ICT integration. Respondents' PK had a significant relationship with their use of curriculum-based technologies and a highly significant relationship with the operation of these technologies inside the classroom. The data revealed that respondents' perceived knowledge of different teaching strategies was significantly correlated with curriculum goals and technologies (r = 0.280). As shown in Table 9, respondents' perceived PK was found to have a highly significant relationship with technology logistics (r = 0.462). As argued by Mishra and Koehler (2008) in their paper about technology integration, this domain pertains to deep knowledge about the processes and practices or methods of teaching and learning and how it encompasses (among other things) overall educational purposes, values, and aims. Respondents reported that they had an understanding of different teaching methods and strategies which could be helpful in improving students' learning. This knowledge had informed their selection of technology to achieve the set objectives (curriculum goals), because it required them to have an understanding of cognitive, social, and developmental theories of learning and how they apply to students in the classroom (Mishra & Koehler, 2008). Thus, the appropriate use of technology should be based on a teacher's devised teaching and learning activities.
Pedagogical content knowledge (PCK) and ICT integration. It was found that this domain was significantly related to instructional strategies and technologies, technology selections, and technology logistics. As Table 9 shows, respondents' PCK had a significant relationship with their planning of instructional strategies and technologies (r = 0.279). Respondents' perceived PCK was also found to be significantly correlated with technology selections (r = 0.303). The data also revealed that their perceived knowledge under this domain had a significant relationship with technology logistics (r = 0.420).
PCK was actually based on Shulman's (1986) concept of the intersection of pedagogy and content in education, which included the representation and formulation of concepts, pedagogical techniques, knowledge of what makes concepts difficult or easy to learn, knowledge of students' prior knowledge, and theories of epistemology. However, Donald (2002) claimed that different disciplines emphasized certain processes and under-emphasized others. For example, in a Science subject, verification would be pragmatic, while in an English subject, verification would be a search for interpretive coherence.
The knowledge of evaluating what technological tools should be used to support teaching and learning of the concept was clearly demonstrated by most of the respondents during the observed lesson. Developmental strategies and abstraction were found to be the key aspects which helped students fully understand the lesson. In relation to this, students were also found to be more engaged and active during the lesson and their focus on the content was sustained.
Technological content knowledge (TCK) and ICT integration. The TCK of the respondents was found to be highly correlated with their curriculum-based use of technology, instructional strategies and technologies, and instructional use during the implementation of the lesson. Respondents' perceived TCK had a significant relationship with curriculum goals and technologies (r = 0.415). It was also revealed that their perceived TCK was highly associated with instructional strategies and technologies (r = 0.459). Table 9 also reveals that respondents' knowledge of this sub-domain had a significant relationship with their use of technology for instruction during the implementation of the lesson (r = 0.388).
Understanding the manner in which technology and content influence and constrain one another was the focus of this domain. These findings supported the argument of Mishra and Koehler (2008) that teachers need to master more than the subject matter they teach; they must also have a deep understanding of the manner in which the subject matter (or the kinds of representations that can be constructed) can be changed by the application of technology. Based on the respondents' performance and self-efficacy under this domain, it can be concluded that their TCK had a direct relationship with their preparation of technologies aligned with curriculum goals. As Mishra and Koehler (2008) claimed, teachers should understand which specific technologies are best suited for addressing subject-matter learning in their domains and how the content dictates, or perhaps even changes, the technology, or vice versa.
Technological pedagogical knowledge (TPK) and ICT integration. This part presents the results on the relationship between respondents' perceived TPK and their actual integration of ICT in classroom instruction. As shown in Table 9, respondents' knowledge in this domain had a highly significant relationship with curriculum goals and technologies, instructional strategies and technologies, and technology selections. It was also significantly related to technology logistics. The data revealed that respondents' knowledge of the use of technology to effectively implement a teaching strategy had a highly significant relationship with curriculum goals and technologies (r = 0.384). Respondents' TPK was found to have a highly significant correlation with instructional strategies and technologies (r = 0.547). Respondents' knowledge in this domain was also revealed to have a highly significant relationship with technology selections (r = 0.449).
The data also revealed that their TPK had a significant relationship with technology logistics (r = 0.342). Observation results showed that, among the factors that positively affected the integration of ICT in the lesson, the suitability of the technology used was the most observable, followed by methods and strategies. Aside from this, the gathered data also revealed that developmental strategies and abstraction were the aspects of the technology-integrated lesson that most impacted the students' learning. Along with these results, students were observed to have learned the content of the lesson. These clearly established a direct positive relationship between the respondents' perceived TPK and their actual integration of technology in classroom instruction. Respondents were observed to have a deep understanding of the constraints and affordances of technologies and the disciplinary contexts within which they functioned. Knowing these pedagogical affordances and technological constraints, they were able to plan disciplinarily and developmentally appropriate pedagogical designs and strategies, reporting similar results to Mishra and Koehler (2008).
Technological pedagogical content knowledge (TPCK) and ICT integration.
The TPCK of the respondents was found to have a significant relationship with their planning and implementation during ICT integration in classroom instruction. It was found to be significantly correlated with all the elements of planning: curriculum-based use of technology (r = 0.553); instructional strategies and technologies (r = 0.451); technology selections (r = 0.355); and the fit of technology, content, and pedagogy (r = 0.389). It was also found to be related to their instructional implementation: instructional use (r = 0.532) and technology logistics (r = 0.313).
TPCK was defined as the "intersection of all three bodies of knowledge" (Mishra & Koehler, 2008). Understanding of this knowledge goes above and beyond understanding technology, content, or pedagogy in isolation; rather, it is an emergent form of knowledge that captures how these forms of knowledge interact with each other. Respondents' TPACK self-efficacy was found to have a direct relationship with ICT integration. Thus, it can be concluded that developing technological pedagogical content knowledge among pre-service teachers could be very important for effectively integrating ICT in classroom instruction. As argued by numerous researchers, technology integration in teaching and learning requires understanding the dynamic, transactional relationship among these three knowledge components (Abbitt, 2011; Bruce & Levin, 1997; Dewey & Bentley, 1949; Harris, Mishra, & Koehler, 2009; Mishra & Koehler, 2008; Rosenblatt, 1978).

Problems encountered in integrating ICT in classroom instruction

Table 10 shows the problems encountered by the respondents in integrating ICT in classroom instruction. It can be noticed that among the problems identified, "lack of internet connection/slow connectivity" (27 or 51.9%) and "lack of computers, equipment, and devices" (21 or 40.4%) obtained the highest frequencies. These results could be associated with the problems identified by the cooperating schools: insufficient number of computer units and poor network connection (see Table 7). These problems affected the use of technology inside the classroom, limiting the possible ways of preparing and implementing technology-integrated lessons.
Conclusions
Based on the results of the study, the following conclusions were drawn. Most of the respondents were female and were taking Generalist as their specialization, which implied homogeneity among the respondents who participated in the study. Further, only about half of them had attended a local ICT-related seminar conducted by the institution. Respondents reported that opportunities to work with different technologies were insufficient, meaning that most of the respondents were not able to use various technologies in the courses that they had attended. Most of the respondents were able to integrate ICT in classroom instruction. In terms of overall ICT integration, they were found to be "good" in all components of technology integration - planning and implementation. Most of their instructional materials were obtained from other sources and were commonly used for presentations/illustrations. None of the cooperating schools had employed an ICT technician, a role found vital in maintaining the ICT equipment in the school. The leading problems reported by schools relative to integrating ICT in teaching and learning were an insufficient number of computer units, the lack of an ICT teacher/technician in the school, and the lack of trainings to enhance teachers' knowledge of ICT.
The GPA of the respondents in Educational Technology and ICT-related courses had a highly significant but negative relationship with their preparation and implementation of instruction integrated with technology. Given the inverted grading scale, this implied that pre-service teachers with higher academic performance performed better in the preparation and implementation of technology-integrated lessons. Pre-service teachers' TPACK self-efficacy had a highly significant relationship with their preparation and implementation of technology-integrated instruction. This tended to suggest that the higher the TPACK self-efficacy of pre-service teachers, the more effective they would be in integrating technology in classroom instruction.
Recommendations
In the light of the results and conclusions of this study, the following measures are strongly recommended:
1. Although age, sex, grade level handled, subject taught, ICT-related trainings attended, and personal ICT equipment were not found to be related to pre-service teachers' integration of ICT, these factors should still be considered by other researchers. Since the homogeneity of the sample affected the results of this study, future researchers should include a good and large distribution of samples. Moreover, the participation of pre-service teachers in various ICT-related seminars should be encouraged by the institution.
2. Since respondents reported that opportunities to work with different technologies were insufficient, professors/instructors of Educational Technology and ICT-related courses should be encouraged to provide opportunities for pre-service teachers to use a wide range of technologies in classroom instruction.
3. While most of the respondents were able to integrate ICT in classroom instruction, the College of Education should provide more opportunities for pre-service teachers to further develop their skills in using technology in classroom instruction. Pre-service teachers should also be trained in selecting and creating instructional materials and in utilizing these materials in various instructional techniques.
4. Cooperating schools should be encouraged to develop plans to improve their ICT programs. Furthermore, concerned government units should be advised about the problems encountered relative to ICT integration in classroom instruction (e.g., insufficient number of computer units, lack of an ICT teacher/technician in the school, and lack of trainings to enhance teachers' knowledge of ICT).
5. Since the GPA of pre-service teachers in Educational Technology and ICT-related courses was found to be significantly related to their development of technology-integrated instruction, Teacher Education Institutions, particularly the College of Education, should strengthen Educational Technology and ICT-related courses by engaging students in various activities essential to the development of their ICT integration in classroom instruction.
6. Since pre-service teachers' TPACK self-efficacy was found to have a significant relationship with their ICT integration in the classroom, pre-service teachers should be encouraged to develop their TPACK. TPACK-oriented trainings should also be provided by the institution for them to fully understand the multidimensional aspects of technology integration.
Failure of Micractinium simplicissimum Phosphate Resilience upon Abrupt Re-Feeding of Its Phosphorus-Starved Cultures
Microalgae are naturally adapted to the fluctuating availability of phosphorus (P): they opportunistically take up large amounts of inorganic phosphate (Pi) and safely store it in the cell as polyphosphate. Hence, many microalgal species are remarkably resilient to high concentrations of external Pi. Here, we report on an exception to this pattern: a failure of the high Pi resilience in strain Micractinium simplicissimum IPPAS C-2056, which normally copes with very high Pi concentrations. This phenomenon occurred after the abrupt re-supplementation of Pi to an M. simplicissimum culture pre-starved of P. This was the case even if Pi was re-supplemented at a concentration far below the level toxic to the P-sufficient culture. We hypothesize that this effect can be mediated by the rapid formation of potentially toxic short-chain polyphosphate following the mass influx of Pi into the P-starved cell. A possible reason for this is that the preceding P starvation impairs the capacity of the cell to convert the newly absorbed Pi into the "safe" storage form of long-chain polyphosphate. We believe that the findings of this study can help to avoid sudden culture crashes, and they are also of potential significance for the development of algae-based technologies for the efficient bioremoval of P from P-rich waste streams.
Introduction
Phosphorus (P) is a major nutrient central to the processes of energy and information storage and exchange in cells [1-3]. Most of the habitats accessible to microalgae are characterized by a variable availability of P [4,5], and inorganic P species (referred to below as Pi) are usually present at limiting concentrations [6-8]. To withstand prolonged P shortage, microalgae developed a set of adaptations collectively called "luxury uptake" [6,7]. These adaptations include the capability of absorbing Pi in amounts much greater than the metabolic demand [9]. Other mechanisms, such as converting the newly acquired Pi into relatively metabolically inert polyphosphate (PolyP) and storing it in the cell vacuole, serve to avoid a fatal displacement of the equilibria of the vital metabolic reactions in which Pi participates [1,3].
Possibly as a side effect of these adaptations, microalgae became remarkably resilient to levels of external Pi by far exceeding its environmental concentrations (1 g L−1 and more [10]). Some microalgal species found in P-polluted sites display a very high Pi tolerance (see, e.g., [10]). This makes microalgae a powerful vehicle for the biocapture of P from waste streams [11], thereby increasing the sustainability of the use of P resources, whose recovery is currently notoriously low (<20%) [12]. Indeed, there are numerous reports on the successful use of microalgae exerting luxury uptake of P [13,14] for the biotreatment of waste streams to avoid eutrophication [12,15] and to produce environmentally friendly biofertilizers [16,17].
From a practical point of view, microalgae-based approaches to P biocapture from waste streams offer advantages over conventional techniques such as Enhanced Phosphorus Bioremoval (EPBR) [2,18]. Using waste and side streams as P sources for the industrial cultivation of microalgae makes bulk bioproducts such as P biofertilizers [16,19] or biofuels [20] economically viable. The biomass of microalgae is a potential source of PolyP that can be used in medicine for the development of biomaterials and in the food industry [21,22].
Despite the promise of microalgae-based P capture, its use is hindered by insufficient knowledge of luxury P uptake mechanisms [1,3] and the lack of strains resilient to high concentrations of P i , since many algal species commonly used in biotechnology can already be inhibited at a P i concentration above 0.1-0.3 g L −1 [23,24].
Nevertheless, there are reports on the toxicity of exogenic P i to microalgae [23][24][25], although the mechanisms of this phenomenon are far from being understood. Still, understanding P toxicity is important, e.g., for the development of viable biotechnologies for nutrient biocapture from P-rich waste streams and/or highly nutrient-polluted sites. To further bridge this gap, we investigated the failure of high P resilience in P-starved cultures.
We hypothesized that P toxicity can be linked to the formation of short-chain PolyP in the cell upon abrupt transition from P starvation to ample conditions. The Micractinium simplicissimum strain IPPAS C-2056 recently isolated from a P-polluted site served as a model organism for this study. The cultures of the M. simplicissimum grown in P-replete media exhibit a very high P i resilience [10]. In our recent experiments on P starvation, we observed a sudden culture death upon replenishment of P i to the P-depleted culture of the M. simplicissimum. Here, we report on the effect of an abrupt increase in external P i on the cell viability, P i uptake, and internal content of P and PolyP. We elaborate on possible mechanisms of P i -induced death of microalgal cells acclimated to P deficiency. Special attention was paid to the potential role of short-chain PolyP in these mechanisms.
Results
To test the effect of P starvation with subsequent P i replenishment on the M. simplicissimum culture, the P-sufficient culture was starved of P for 14 days, then P i was replenished to the medium, and the culture was monitored for another 10 days (see Section 4).
The Dynamics of the Culture Condition during Phosphorus Starvation and Re-Feeding
During 14 days of P starvation, the precultures gradually changed their color from green to yellow-green (Figure 1), which is typical of nutrient-starved cultures responding to the stress by a reduction in their photosynthetic apparatus. During the first week of P starvation, the culture exhibited vigorous cell division commensurate with that commonly observed in P-replete cultures of the M. simplicissimum. The P-starved cultures demonstrated the onset of the stationary phase due to the P limitation by the 10th day of incubation.
Inorganic phosphate, P i , was replenished to the P-starved cultures to the final concentration of 0.8 g L −1 , which is far below the level potentially toxic to the M. simplicissimum [10]. Since the pH of the cultures remained in the range of 6.7-7.7 throughout the experiment, the P i added to the cultures is expected to be readily available for uptake by the microalgal cells.
Our analysis of the absorbance spectra of the P-starved cultures (Figure 2) revealed an increase in the relative contribution of the absorption of pigments in the blue-green region of the visible part of the spectrum, which is consistent with the increase in the carotenoid-to-chlorophyll ratio (not shown) observed in this culture manifesting itself as the changes in coloration described above (see also Figure 1b).

Figure 2. Changes in absorption spectra of the Micractinium simplicissimum IPPAS C-2056 cultures during (a) its P starvation and (b) after re-feeding of the pre-starved cultures with P i . The time of P starvation indicated in the panels (h) is counted down to the moment of P i re-supplementation to the culture (t = 0 h; see Figure 1).
The absorbance spectra recorded for the first 24 h after P i replenishment revealed a synchronous increase in the light absorption by the culture in the blue and red regions of the spectrum (Figure 2, curves 1 and 2) attributable to the accumulation of pigments and balanced culture growth. A total of 24 h after P i replenishment, the cultures started to show visual signs of damage such as discoloration and increased turbidity (see culture (iii) in Figure 1b) indicative of the presence of cell debris. The corresponding absorbance spectra revealed a profound bleaching of the photosynthetic pigment absorption bands (Figure 2, curves 3-5). The cultures became whitish by the 10th day of cultivation (Figures 1b and 2). At the same time, the red absorption band of chlorophyll became undetectable on the spectra of the cultures, although the small peaks attributable to the absorption by carotenoids remained (Figure 2, curve 5). The appearance and the optical properties of the cultures almost did not change until the end of the observation period. Overall, these observations evidenced the progressive lysis of the microalgal cells upon replenishment of P i to the P-depleted cultures of the M. simplicissimum, which was confirmed by microscopic observations (see below).
Phosphorus Removal from the Medium and Its Uptake by the Cells
Monitoring of the removal of P i after its replenishment to the P-starved cultures showed that most (75-80%) of the added P i remained in the medium at all stages of the experiment (Figure 3). Around 3-5% of the added P i was adsorbed by the superficial structures of the M. simplicissimum cells as well as by cell debris (see below); the amount of the adsorbed P i was relatively constant.
Figure 3. Changes in the distribution of P i , which was either taken up or adsorbed by the cells of Micractinium simplicissimum IPPAS C-2056 or remained in the medium after re-feeding of P i to the P-starved cultures (see Figure 1).
The amount of P i taken up by the cells gradually increased over the first three days of the experiment, but it later started to decline with the corresponding increase in the external P i concentration ( Figure 3). The increase in the residual P i in the culture likely reflected the release of P from the ruptured dead cells whose proportion increased in the culture, according to our observations outlined above.
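The partitioning behind a Figure-3-style breakdown is straightforward bookkeeping, but it may help to make the mass balance explicit. The Python sketch below is our own illustration (the function name and the numeric inputs are placeholders, not part of the study's workflow): it takes the residual P i measured in the broth and the P i recovered in the wash liquid (adsorbed fraction) and obtains the uptake by difference.

```python
# Hypothetical bookkeeping for the P mass balance behind a Figure-3-style plot.
# All concentrations in mg P per litre of culture; the example values are placeholders.

def p_distribution(p_added, p_residual, p_wash):
    """Partition added Pi between medium, cell-surface adsorption and uptake.

    p_added    -- Pi concentration added at t = 0 (mg L-1)
    p_residual -- Pi measured in the cultivation broth (mg L-1)
    p_wash     -- Pi recovered in the wash liquid, i.e. adsorbed Pi (mg L-1)
    """
    p_uptake = p_added - p_residual - p_wash  # taken up by the cells, by difference
    pct = lambda x: 100.0 * x / p_added
    return {"medium_%": pct(p_residual), "adsorbed_%": pct(p_wash), "uptake_%": pct(p_uptake)}

# Placeholder numbers consistent with the text (~75-80% left in the medium, ~3-5% adsorbed):
print(p_distribution(p_added=800.0, p_residual=620.0, p_wash=30.0))
# -> {'medium_%': 77.5, 'adsorbed_%': 3.75, 'uptake_%': 18.75}
```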
The analysis of the total intracellular P content revealed a rapid increase in the dry weight (DW) percentage of P from 0.25% typical of P-starved cells to 4% DW during the first 24 h after replenishment of P i (Figure 4). The discrepancies between the trends shown in Figure 4 might stem from the difference in the techniques of sample processing (for more details, see Section 4). Another plausible reason is the shortage of metabolic resources of the cell available for the conversion of the incoming P i to PolyP. Later, the internal P content of the microalgal cells declined, on average to 1%, likely due to the predominant lysis of the cells, which took up the highest amounts of P i .
The PolyP content of the cells also increased rapidly during the first few hours after P i replenishment, but it reached its maximum of 0.35% DW within ca. four hours. This parameter returned to the initial level of ca. 0.02% DW by the end of the first day of incubation of the culture under P-replete conditions. The PolyP content of the cells did not change significantly thereafter but tended to decline (Figure 4).
Ultrastructural View of the Changes in the Cell P-Pools
To gain a deeper insight into the rearrangements of P pools in the culture of the M. simplicissimum under our experimental conditions, an electron microscopy study of cells sampled at the key stages of the experiment (see Figure 1) was carried out (Figures 5, A1 and A2).
Figure 5. Representative transmission electron micrographs of Micractinium simplicissimum IPPAS C-2056 cells at different phases of the experiment (see Figure 1). The micrographs of the culture in the BG-11 K medium in glass columns, which was incubated with constant bubbling with 5% CO 2 (a), P-starved cells (b), and the cells (a-d,f-h) and sporangia (e,i) sampled 4 h (c), 24 h (d,e), and 72 h (f-i) after re-feeding of the P-starved cultures with P i are shown. C, chloroplast; CW, cell wall; N, nucleus; Os, oleosome; P, pyrenoid; S, starch granule; SE, sporangium envelope; V, vacuole. The arrows point to the inclusions in the vacuole. The arrowheads point to the electron-opaque particles on/in the cell wall and to their clusters adsorbed on the cell surface, and also to the spherules in the cytoplasm, the nucleus, and the destructed chloroplast. Scale bars = 1 µm.
The ultrastructure of the cells of M. simplicissimum constituting the preculture was similar to that of the cells grown under similar conditions [10]. Briefly, the preculture consisted of individual round-shaped cells containing a single nucleus and a parietal chloroplast with a centrally located pyrenoid; the culture also contained occasional autosporangia (Figures 5a and 6a,b). Empty envelopes of the autosporangia and cell walls of dead cells were also encountered (Figure 6). The surface of the cells and autosporangia lacked bristles. The cell wall did not reveal the characteristic trilamellar pattern (sporopollenin- or algenat-like layer). Cell walls of live and dead mature cells and the envelope of the autosporangia manifested, upon staining with DAPI, the yellow-green fluorescence indicative of the presence of PolyP (Figure 6b). However, this was not the case for the cell walls of the young cells.
The EDX spectra revealed the presence of P and iron in the cell walls as well as in the particles associated with them (Table A1). Carbon reserves were represented mainly by starch grains in the chloroplast (ca. 7.8 grains per cell section) and small (diameter 326 ± 18 nm, 986 nm max.) oleosomes (ca. 8 per cell section). Vacuoles contained spectacular inclusions: large round-shaped P-containing inclusions of non-uniform electron density (Figure 5a). Their EDX spectra possessed a distinct peak of P along with the characteristic peaks of nitrogen, calcium, and magnesium (Table A1, Figure 7a). The vacuoles also contained small roundish, electron-opaque inclusions harboring P or P in combination with uranium (see below and Table A1). Moreover, small electron-opaque spherules (10-40 nm in diameter; 5-20 instances per cell section) of the same composition were detected in the cytoplasm (Figure A1a, Table A1). Roundish inclusions located in the cells and in the sporangia revealed, upon staining with DAPI, the yellow-green fluorescence characteristic of PolyP (Figure 6a,b).
The starvation of P profoundly influenced the cell morphology and ultrastructure (Figure 5b). The proportion of cells retaining their structural integrity declined by 10-15% (76% vs. 92% in the preculture). The thylakoids remaining in the grana were moderately expanded (the lumen width was up to 20 nm vs. 5-7 nm in the preculture; Figure A1a,b). The total number of starch grains decreased by 27%. At the same time, the number of oleosomes increased by 35%. Moderately electron-dense oleosomes were located on the periphery of the cell and merged, forming large oleosomes (ca. 3350 nm in diameter; Figure 5b). These rearrangements were in accord with the observed decline in chlorophyll (Figure 2a).
The vacuolar inclusions also displayed considerable changes during P starvation of the culture. They were represented by large amoeboid globules with moderately increased electron density (Figures 5b and A1c) and sickle-shaped elongated or reticular zones of high electron density or aggregations of small particles (Figure 5b) as well as by fragments of structures resembling a multiwire cable ( Figure A1d). Small electron-dense P-containing globules were occasionally found in the vacuoles of the P-starved cells (Table A1).
Interestingly, the DAPI-stained P-starved cells that had ceased to divide, the envelopes of the sporangia, and the cell debris retained the characteristic yellow-green fluorescence (Figure 6c,d). The EDX spectra of the structures (cell wall, different types of vacuolar inclusions, and small cytoplasmic electron-opaque spherules in the stroma of the chloroplast) revealed the peaks of P. The spectra were also accompanied by the peaks of other elements typically found together with PolyP (calcium, magnesium, and sodium). These spectra also featured the peaks of sulfur or uranium stemming from the binding of the uranyl acetate (see Methods) with phosphate and carboxyl groups of proteins and nucleic acids (Table A1, Figure 7). The magnitude of the P peak was lower than that of the N peak in the EDX spectra of the vacuolar large globules.
The analysis of ultrastructural traits of the cells after replenishment of P i to the culture revealed diverse signs of P uptake by the cells. During the first 24 h after P i re-feeding, the proportion of the cells retaining their structural integrity increased to 86-88%, the formation of autosporangia and autospore release resumed, and young cells appeared (Figures 5c-e and 6).
Figure 7. Representative EDX spectra of Micractinium simplicissimum IPPAS C-2056 cell structures potentially related to metabolism and storage of phosphorus at different phases of the experiment (see Figure 1). Vacuolar large globules in a cell of the culture in the BG-11 K medium (a), in the P-starved culture (b), and in the cells sampled 4 h after re-feeding of the P-starved cultures with P i (c); also shown are inclusions in the cytosol (d), in the chloroplast stroma (e), the vacuolar small spherules (f), in the nucleus (g), and in the cell wall (h) after re-feeding of the P-starved cultures with P i . All the EDX spectra possessed characteristic peaks attributable to carbon (K α = 0.28 keV) and oxygen (K α = 0.53 keV), the major organic constituents of biological samples and the epoxy resin they were embedded in. The spectra also contained peaks of copper (L α = 0.93 keV) from the copper grids used for the sample mounting, as well as the peaks of osmium (M β = 1.91 keV) and uranium (M α = 3.16 keV, M β = 3.34 keV) used for cell fixation. The peaks of silicon (K α = 1.74 keV) and aluminum (K α = 1.49 keV) originate from the microscope hardware background elements.
The cell walls of the young cells were not stained with DAPI; accordingly, the P peak in their EDX spectra was low or absent. The cells in the cultures re-fed with P i were highly heterogeneous regarding their vacuolar inclusion condition. Most of these inclusions assumed a porous, sponge-like structure (Figure 5c,d and Figure A1e-g). This process was accompanied by a decline in the magnitude of N peaks and an increase in the magnitude of P peaks in the EDX spectra of these structures. In addition to this, electron-opaque globules and layers appeared inside the vacuole and on the inner surface of the tonoplast. The cells possessing these features also retained structurally intact chloroplasts (Figures 5c-e and A1e-g). A sharp increase in the number (from several dozens to several hundred instances per cell section) of small (14-36 nm in diameter) spherules was also observed in the cytoplasm, mitochondria, nucleus, and dictyosomes of the Golgi apparatus. The EDX spectra of the inclusions, both vacuolar and extra-vacuolar, featured spectral details typical of PolyP including the peaks of P, magnesium, and calcium (Table S2, Figure 7). The presence of abundant PolyP was also confirmed with DAPI staining (Figure 6).
The rest of the cell population, including autospores, displayed a gross accumulation of small granules in their cytoplasm; this phenomenon was accompanied by the degradation of all organelles (mitochondria, nucleus, vacuoles with their content, chloroplasts, etc.) (Figures 5c-e, A1e-g and A2, Table S2). The amount of the destructed cells and cell debris increased with time after the P i re-feeding from 53% on the 3rd day to 77% on the 7th day and to 92% by the 10th day. The small electron-dense spherules remained at the location of the degraded organelles and cell walls; later, their number decreased from several hundreds to several dozens of instances. Surprisingly, the degraded cells retained abundant starch grains and oleosomes.
Responses of the Photosynthetic Apparatus of the Cells to P Starvation and Re-Feeding
The physiological condition of the microalgal cells as manifested by the functioning of their photosynthetic apparatus (PSA) during the different phases of the experiment was assessed by recording and analyzing the induction curves of chlorophyll a fluorescence. This analysis was based on the relevant parameters of the JIP test (Table S1; Figure 8).
Figure 8. Changes in the JIP-test (see Table S1) parameters (potential maximal photochemical quantum yield of photosystem II, Fv/Fm; performance index, PI ABS ; the thermally dissipated energy flux per reaction center, DI 0 /RC; left scale) and Stern-Volmer non-photochemical quenching (NPQ) in the cultures of Micractinium simplicissimum IPPAS C-2056 during its phosphorus starvation (negative time values) and after re-feeding of the P-starved cultures with P i (positive time values). The moment of P i re-feeding (t = 0 h) is specified on the graph.
Acclimation of the culture to P starvation was accompanied by a small decline in the potential maximal photochemical quantum yield of photosystem II, Fv/Fm. The photosynthetic performance index, PI ABS , was more responsive as an indicator of the P starvation stress in M. simplicissimum, declining from ca. 0.4 to nearly 0. The flux of energy thermally dissipated by the PSA of the microalgal cells, DI 0 /RC, indicative of the engagement of photoprotective mechanisms, increased very slightly. Likely, this was due to the concomitant decline in the number of reaction centers manifested by the decline in chlorophyll content (not shown; see also Figure 2). At the same time, the parameter reflecting Stern-Volmer quenching of chlorophyll fluorescence, NPQ, increased dramatically from 0 (typical of unstressed cells of the P-sufficient preculture) to 1.5.
Collectively, the data on the condition of PSA (Figure 8, negative time values) suggested that the M. simplicissimum culture was apparently unaffected by the lack of available P in the medium for ca. 10 days (likely due to large intracellular P reserves). Later, the cells had rapidly (over three days) acclimated to the stress, mostly by adjusting their chlorophyll content and thermal dissipation of the absorbed light energy.
Replenishment of P i to the P-depleted M. simplicissimum culture triggered rapid, dramatic changes in the condition of the PSA of its cells (Figure 8, positive time values). Thus, Fv/Fm declined to the level of 0.1-0.2, and near-zero PI ABS evidenced a near-complete lack of photosynthetic activity. The NPQ parameter declined rapidly and did not increase significantly thereafter. By contrast, a tremendous increase in DI 0 /RC and its variation was detected (again, likely due to a gross decline in the number of reaction centers manifested by the decline in photosynthetic pigment content).
Discussion
To the best of our knowledge, this is the first report on a failure of tolerance to a high external concentration of P i in a microalga Micractinium simplicissimum IPPAS C-2056, which was previously shown to be highly tolerant to this stressor [10]. We attempted to link the actual level of tolerance to acclimation of the microalga to different levels of P in the medium. We also tried to infer a plausible hypothesis explaining these apparently controversial phenomena from the physiological and ultrastructural evidence collected during this study and backed up by the current knowledge of luxury P uptake and the physiological role of PolyP in microalgal cells.
Importantly, the phenomenon of failed P i tolerance was observed only when the P-starved M. simplicissimum culture was abruptly re-fed with P i . This phenomenon was observed despite the fact that the concentration of the P i added was far below the level potentially toxic to P-sufficient cultures of this microalga as was revealed by our earlier studies [10].
The dramatic response of M. simplicissimum to abrupt re-feeding with P i was accompanied by a peculiar pattern of changes in the distribution of P in the cells. There were other physiological manifestations (e.g., the changes in the photosynthetic apparatus) indicative of the acclimation state of the microalga. Overall, the acclimation of M. simplicissimum to P shortage at the first phase of the experiment manifested as the onset of mild stress as was documented in other microalgal species such as Chlorella vulgaris [26] and Lobosphaera incisa [27,28].
Specifically, a moderate reduction in the photosynthetic apparatus was observed as reflected by a decline in chlorophyll (Figure 2) and the accumulation of carbon-rich reserve compounds (Figures 5, A1 and A2) along with a depletion of P reserves in the cell. These rearrangements were accompanied by the up-regulation of photoprotective mechanisms based on thermal dissipation of the absorbed light energy, which is also typical of the acclimation of microalgae to nutrient shortage [26,28,29]. Nevertheless, the cells retained their structural integrity, and their photosynthetic apparatus remained functional despite a clearly observed expansion of the thylakoid lumens. Interestingly, the cells of the P-starved culture, which had already ceased to divide, possessed a sizeable amount of PolyP granules and N-containing matter accumulated in their vacuoles, similarly to that documented in other microalgae [27,30,31]. These P reserves are obviously represented by a slowly mobilizable fraction of PolyP, which frequently remains even in P-starved cells [7,32]. Taken together, these observations suggest that the microalgal cultures experienced only mild stress as a result of P starvation under our experimental conditions.
After re-feeding of the P-starved culture of M. simplicissimum with 800 mg L −1 P i , up to 20% of the added P i was gradually removed from the medium by the cells by the 3rd day of incubation (Figure 3). Approximately 5% of the added P i was reversibly adsorbed on the surface of the cells. DAPI staining revealed a characteristic yellow fluorescence localized in the cell wall ( Figure 6). This observation is in line with the previously documented ability of this strain to adsorb P i and form P-containing nanoparticles on its surface structures [10]. It corroborates previous reports on the dynamics of P depots in cell walls of diverse organisms including fungi, bacteria [21,33], and microalgae [34]. As revealed by EDX spectroscopy, PolyP is characterized by co-localization with calcium, magnesium, or (less frequently) potassium and sodium [35][36][37]. In certain cases, we also observed the P peak in combination with that of uranium. Since the uranyl cation used for the sample preparation binds to phosphate and carboxyl groups [38], this can be a manifestation of phosphorylated proteins and nucleic acids in these cell compartments.
The amount of P internalized by the cells as well as the amount of intracellular PolyP also increased during the first four hours after re-feeding. Later, the amount of total intracellular P remained at the level of 4% of cell dry weight, but the PolyP content started to decline. (Notably, the method of PolyP assay used here is optimized for long-chain PolyP, and the internal PolyP content can be slightly underestimated since a part of short-chain PolyP can escape detection.) At the same time, the EDX spectral signature of PolyP was still detected (Figure 7 and Table A1).
Starting from the 1st day of incubation, the progressive signs of cell rupture were recorded. Normally, a recovery of the photosynthetic apparatus would be expected after P i replenishment [1][2][3]. Instead of this, we observed a complete failure of photoprotective mechanisms, indicative of acute damage to the cell similar to the effect of severe stress or a toxicant at a sublethal concentration [39]. Confronting the observed effects with the reports on P i toxicity found in the literature [23,40], we hypothesized that short-chain PolyP might be involved in the massive cell death observed in M. simplicissimum after its re-feeding with P i following P starvation.
A possible scenario of the short-chain PolyP-mediated P i toxicity might involve the following steps. First, the P-starved microalgal cells deploy, as a common pattern of nutrient shortage acclimation, the mechanisms making them capable of fast P i uptake [1,6,7,26]. At the same time, their capability of photosynthesis becomes impaired because of the reduction in the photosynthetic apparatus (see also Figure 8). Upon re-feeding of the culture acclimated to P shortage, a large amount of P i rapidly enters the cell. As a result, the biosynthesis of PolyP is triggered, since PolyP serves as a buffer for the storage of P i when it becomes available [41,42]. However, the cell acclimated to a nutrient shortage is, to a considerable extent, metabolically quiescent (in particular, its photosynthetic apparatus is downregulated, and a large part of the light energy it captures is dissipated into heat). At the same time, the biosynthesis of PolyP is very energy-intensive, and this energy comes mostly from photosynthesis [43]. As the net result, these processes trigger the mass accumulation of short-chain PolyP, but the newly formed PolyP cannot be further elongated due to the metabolic restrictions mentioned above. Overall, the toxic effect of the short-chain PolyP rapidly accumulated in all compartments of the cell leads to its damage and, eventually, death, which was the case under our experimental conditions.
It should be noted in addition that the barrier function of the cell wall regarding P i uptake is an important determinant of the P i resilience in M. simplicissimum [10]. P i re-feeding of the culture that was pre-starved of P triggers cell division. Therefore, the proportion of young cells, whose cell wall can be less efficient as a barrier to P i uptake than the cell wall of mature cells, increases in the population. This can render the young cells more vulnerable to the surge of P i into the cell.
The hypothesis outlined above can explain the observed phenomenon of the failed P i tolerance by analogy with the toxic effect of short-chain PolyP initially described in yeast cells [40], which was also implicated in Chlorella regularis [23]. In these works, disorganization of the cell structure has been proposed as a major hallmark of elevated P i toxicity mediated by PolyP. This hypothesis is also supported by the presence of genes encoding the PolyP polymerases from the VTC family [1][2][3] potentially involved in the synthesis of the short-chain PolyP [39] in the genome of a closely related representative of the genus Micractinium, M. conductrix [44]. A homologue of one of these genes was putatively discovered in our pilot studies of the species used in this work, M. simplicissimum (in preparation).
It was also demonstrated recently that PolyP that accumulates outside the cell vacuole is the main factor of PolyP-mediated toxicity of elevated external P i [25]. One can speculate that, mechanistically, the short-chain PolyP, when formed outside the vacuole, can interfere with protein folding and/or the matrix synthesis of biomolecules. This capability of interacting with important polymeric biomolecules is a typical trait of PolyP as the "molecular fossil" retained from ancient times when they were potentially involved in the primordial matrix synthesis and the genesis of life [45].
Finally, we would like to underline the importance of understanding the limits of the high tolerance of microalgae to elevated levels of external P i , not only for basic science but also from a practical standpoint. One should consider that abrupt changes in P availability can cause the normally high P i tolerance of microalgae to fail and lead to a sudden culture crash. This is possible, particularly in wastewater treatment facilities, during the injection of a new portion of P-rich wastewater into a P-depleted culture. Nevertheless, the toxic effects of P i in microalgae remain quite underexplored. Further research in this direction might include studies of the genetic control and implementation of the formation of PolyP of variable chain length in different microalgal species as a function of P i availability.
Materials and Methods

Strain and Cultivation Conditions
Unialgal culture of an original chlorophyte M. simplicissimum strain IPPAS C-2056 served as the object of this study. The preculture was grown in 750 mL Erlenmeyer flasks with 300 mL of modified BG-11 [10,46] medium designated below as BG-11 K medium. The microalgae were cultured at 25 °C, and continuous illumination of 40 µmol m −2 s −1 PAR quanta was provided by daylight fluorescent tubes (Philips TL-D 36W/54-765). The cultures were mixed manually once a day.
To obtain P-depleted cultures of M. simplicissimum, the preculture cells were harvested by centrifugation (1000× g, 5 min), washed twice with the BG-11 K medium lacking P, and resuspended in the same medium to a chlorophyll concentration and biomass content of 25 mg·L −1 and 0.4 g·L −1 , respectively, in 0.6 L glass columns (4 cm internal diameter) containing 400 mL of the cell suspension. The columns were incubated in a temperature-controlled water bath at 27 °C with constant bubbling with a 5% CO 2 : 95% air mixture prepared and delivered at a rate of 300 mL·min −1 (STP). Air passed through a 0.22 µm bacterial filter (Merck-Millipore, Billerica, MA, USA) and pure (99.999%) CO 2 from cylinders were used. Continuous illumination of 240 µmol PAR photons·m −2 ·s −1 was provided by a white-light-emitting diode source, as measured with a LiCor 850 quantum sensor (LiCor, Lincoln, NE, USA) in the center of an empty column. Culture pH was measured with a bench-top pH meter pH410 with a combined electrode ESK-10601/7 (Aquilon, St.-Petersburg, Russia).
The cultures were considered to be P-depleted (having the minimum cell P quota) when the culture biomass did not increase consecutively for three days and showed a decline in chlorophyll content (for details on the monitoring of the corresponding parameters, see below). To ensure that the lack of P was the only limiting factor, the P-starved culture was diluted with BG-11 K medium to keep their OD 678 below 1.5 units, and the residual nitrate content was checked periodically (see below). P i was replenished to the stationary P-depleted cultures in the form of KH 2 PO 4 to a final P i concentration of 0.8 g L −1 , and the cultures were monitored (see below) for another 10 days.
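As a quick sanity check of the re-feeding step, the mass of KH 2 PO 4 required can be computed from molar masses. The sketch below is our own back-of-the-envelope illustration and assumes the target 0.8 g L −1 is expressed as elemental P; if it instead refers to phosphate, the molar mass of PO 4 3− (94.97 g/mol) should be substituted.

```python
# Back-of-the-envelope sketch: mass of KH2PO4 needed to re-feed one column.
# Assumes the target "0.8 g L-1 Pi" is expressed as elemental phosphorus.

M_KH2PO4 = 136.09  # g/mol
M_P = 30.97        # g/mol

def kh2po4_mass(target_p_g_per_l, volume_l):
    """Grams of KH2PO4 delivering target_p_g_per_l of elemental P in volume_l."""
    return target_p_g_per_l * volume_l * M_KH2PO4 / M_P

# One 400 mL column brought to 0.8 g L-1 P:
print(round(kh2po4_mass(0.8, 0.4), 2))  # -> ~1.41 g
```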
Suspension Absorption Spectra and Pigment Assay
Absorbance spectra of the microalgal cell suspension samples were recorded using an Agilent Cary 300 UV-Vis spectrophotometer (Agilent, Santa Clara, CA, USA) equipped with a 100 mm DR30A integrating sphere of the same manufacturer and corrected for the contribution of light scattering [47]. Pigments from the cells were extracted by dimethyl sulfoxide (DMSO) and quantified spectrophotometrically using the same spectrophotometer. The content of chlorophyll a and b and total carotenoids was calculated using the equations from [48].
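The pigment calculation is a linear combination of extract absorbances. The skeleton below is only a structural sketch: the function name, arguments, and the coefficient dictionary are placeholders of ours, and the actual wavelengths and coefficients must be taken from the equations in [48].

```python
# Structural sketch of the DMSO-extract pigment calculation; coefficients are
# deliberately left as inputs to be filled in from reference [48].

def pigment_content(a_red, a_blue, a_car, coeffs):
    """Chlorophyll a, chlorophyll b and total carotenoids (ug/mL extract).

    a_red, a_blue -- absorbances at the two chlorophyll wavelengths
    a_car         -- absorbance at the carotenoid wavelength
    coeffs        -- dict of linear coefficients taken from [48]
    """
    chl_a = coeffs["a1"] * a_red - coeffs["a2"] * a_blue
    chl_b = coeffs["b1"] * a_blue - coeffs["b2"] * a_red
    total_car = (coeffs["c1"] * a_car
                 - coeffs["c2"] * chl_a
                 - coeffs["c3"] * chl_b) / coeffs["c4"]
    return chl_a, chl_b, total_car

# Usage: pigment_content(A_red, A_blue, A_car, coeffs_from_ref_48)
```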
Biomass Content Determination
Dry cell weight (DCW) was determined gravimetrically. Routinely, the cells deposited on nitrocellulose filters (24 mm in diameter and 0.22 µm pore size; Millipore, Merck-Millipore, Billerica, MA, USA) were oven-dried at 105 °C to a constant weight and weighed on a 1801MP8 balance (Sartorius GmbH, Gottingen, West Germany). In certain experiments, 1.5 mL aliquots of the culture were gently centrifuged (1000× g, 5 min) in pre-weighed microtubes, the supernatant was discarded, and the pellet was dried under the same conditions. The tubes with the dried cell pellet were closed and weighed on the same balance.
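The gravimetric determination reduces to a mass difference per volume; a minimal sketch with illustrative names and placeholder numbers of ours:

```python
def dcw_g_per_l(tare_mg, dried_mg, sample_ml):
    """Dry cell weight from the mass gain of a dried filter or microtube.

    tare_mg   -- mass of the empty filter/tube (mg)
    dried_mg  -- mass after drying at 105 C to constant weight (mg)
    sample_ml -- volume of culture filtered or pelleted (mL)
    """
    return (dried_mg - tare_mg) / sample_ml  # mg/mL is numerically g/L

# e.g. a 10 mL aliquot leaving 4 mg of dry matter gives 0.4 g/L,
# matching the inoculation biomass quoted above:
print(dcw_g_per_l(tare_mg=100.0, dried_mg=104.0, sample_ml=10.0))  # -> 0.4
```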
Assay of External and Intracellular Content of Different P Species
The concentration of P i , along with that of nitrate, was determined in a cultivation broth, and cell wash liquid was assayed essentially as described in our previous work [10] using a Dionex ICS1600 (Thermo-Fisher, Sunnyvale, CA, USA) chromatograph with a conductivity detector, an IonPac AS12A (5 µm; 2 × 250 mm) anionic analytical column, and an AG12A (5 µm; 2 × 50 mm) guard column according to the previously published protocol [49].
To determine the P i adsorption of the cultures, the cells were twice washed with 15 mL of the BG-11 K medium lacking phosphorus. According to our previous data with independent 31 P-NMR control of the completeness of the P i wash-off [26], this was sufficient to remove >99% of the adsorbed P i . The wash liquid batches obtained from washing the same sample were pooled. The P i content of the pooled wash liquid was assayed with HPLC as described above and confirmed independently with the molybdenum blue method [50].
A modified method by Ota and Kawano [49] was used for the total intracellular P and PolyP determination. Briefly, the cell pellets from 15 mL aliquots of the microalgal suspension, after washing and removing the adsorbed P, were resuspended in 3 mL of distilled water. The suspended samples were divided and transferred to 2 mL microtubes: 2 mL for the PolyP assay and 1 mL for the total-P assay. The PolyP was extracted from cells with 5% sodium hypochlorite and precipitated with ethanol [51]. A total of 500 µL of distilled water and 100 µL of 4% (w/v) potassium persulfate were added to the precipitated PolyP. For hydrolysis to orthophosphate, the PolyP solution was autoclaved at 121 °C for 20 min. For the total P assay, the cell pellets were disrupted in 1 mL of distilled water with G8772 glass beads (Sigma-Aldrich, St. Louis, MO, USA) and 200 µL of 4% (w/v) potassium persulfate added by vigorous mixing on a V1 vortex (Biosan, Riga, Latvia) for 15 min at 4 °C and subsequent autoclaving at 121 °C for 20 min. After centrifugation of the autoclaved samples (3000× g, 5 min), the P i concentration was assayed in the supernatants using the molybdenum blue method [50].
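Both assays end in a molybdenum blue P i reading, which is then converted to the percent-of-dry-weight values reported in the Results. A hypothetical conversion (all names and numbers below are ours, for illustration only):

```python
# Hypothetical conversion of a molybdenum-blue Pi reading to the "% of dry
# weight" values reported in the text (e.g. 0.25% DW in P-starved vs 4% DW).

def p_percent_dw(pi_mg_per_l, assay_volume_ml, dcw_mg):
    """Percent P of dry cell weight for a digested/hydrolysed sample.

    pi_mg_per_l     -- Pi concentration in the assayed supernatant (mg P L-1)
    assay_volume_ml -- volume of the digested sample (mL)
    dcw_mg          -- dry cell weight of the pelleted aliquot (mg)
    """
    p_mg = pi_mg_per_l * assay_volume_ml / 1000.0  # total P in the sample
    return 100.0 * p_mg / dcw_mg

# Placeholder numbers: a 1 mL digest at 240 mg P L-1 from a 6 mg DW pellet
print(round(p_percent_dw(240.0, 1.0, 6.0), 1))  # -> 4.0 (% DW)
```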
Light Microscopy
Light microscopy was carried out using a Leica DM2500 microscope equipped with a digital camera Leica DFC 7000T and the light filter set AT350/50xT400lp ET500/100m (Leica, Wetzlar, Germany). PolyP inclusions were visualized using vital staining with the fluorescent dye 4′,6-diamidino-2-phenylindole (DAPI) dissolved in dimethyl sulfoxide [28]. The cells were incubated in a 0.05% (w/v) aqueous solution of the dye for 5-10 min at room temperature, washed with water, and studied with the microscope in brightfield mode.
Transmission Electron Microscopy
The microalgal samples for transmission electron microscopy (TEM) were prepared according to the standard protocol as described previously [52]. The cells were fixed in 2% v/v glutaraldehyde solution in 0.1 M sodium cacodylate buffer (pH 7.2-7.4, depending on the culture pH) and then post-fixed for 4 h in 1% (w/v) OsO 4 in the same buffer. The samples, after dehydration through a graded ethanol series including anhydrous ethanol saturated with uranyl acetate, were embedded in araldite. Ultrathin sections were made with a Leica EM UC7 ultratome (Leica Microsystems, Wetzlar, Germany), mounted on formvar-coated TEM grids, stained with lead citrate according to Reynolds [53], and examined under JEM-1011 and JEM-1400 (JEOL, Tokyo, Japan) microscopes. All quantitative morphometric analyses were performed as described previously [52]. Briefly, at least two samples from each treatment were examined on cell sections made through the cell equator or sub-equator. The subcellular structures and inclusions were counted on the sections. Linear sizes of the subcellular structures were measured on the TEM micrographs of the cell ultrathin sections (n ≥ 20) using Fiji (ImageJ) v. 20200708-1553 software (NIH, Bethesda, MD, USA).
Analytical Electron Microscopy
The samples for nanoscale elemental analysis in analytical TEM using energy-dispersive X-ray spectroscopy (EDX) were prepared as described previously [29]: fixed, dehydrated, and embedded in araldite as described above, except that staining with lead citrate was omitted. Semi-thin sections were made with a Leica EM UC7 ultratome (Leica Microsystems, Wetzlar, Germany) and examined under a JEM-2100 (JEOL, Japan) microscope equipped with a LaB 6 gun at the accelerating voltage of 200 kV. Point EDX spectra were recorded using a JEOL bright-field scanning TEM (STEM) module and an X-Max X-ray detector system with an ultrathin window capable of the analysis of light elements starting from boron (Oxford Instruments, Abingdon, UK). The energy range of recorded spectra was 0-10 keV with a resolution of 10 eV per channel. At least 10 cells per specimen were analyzed. Spectra were recorded from different parts of electron-dense inclusions and from other (sub)compartments of microalgal cells. Spectra were processed with INCA software (Oxford Instruments, Abingdon, UK) and presented in the range of 0.1-4 keV.
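The acquisition settings fix the channel-to-energy mapping: 0-10 keV at 10 eV per channel gives 1000 channels, of which the 0.1-4 keV display window retains roughly 390. A small sketch, assuming channel k corresponds to k × 10 eV (an illustrative convention of ours, not the detector vendor's exact calibration):

```python
import numpy as np

# Channel-to-energy mapping implied by the stated acquisition settings
# (0-10 keV at 10 eV per channel -> 1000 channels).
EV_PER_CHANNEL = 10.0
energies_kev = np.arange(1000) * EV_PER_CHANNEL / 1000.0

# Crop to the displayed 0.1-4 keV window:
window = (energies_kev >= 0.1) & (energies_kev <= 4.0)
print(window.sum(), "of", energies_kev.size, "channels displayed")  # -> 391 of 1000
```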
Photosynthetic Activity and Photoprotective Mechanism Assessment
Estimations of the photosynthetic activity of the microalgal cells dark-adapted for 15 min were obtained by recording Chl a fluorescence induction curves with an FP100s portable PAM fluorometer (PSI, Czech Republic) using the built-in protocol supplied by the manufacturer. The recorded curves were processed by the built-in software of the fluorometer, and the JIP-test parameters indicative of the functional condition of the photosynthetic apparatus of the microalgal cells were calculated (Table S1, see also [54,55]).
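Two of the parameters discussed in the Results have compact standard definitions, implemented directly in the sketch below (Fv/Fm and Stern-Volmer NPQ); PI ABS and DI 0 /RC were taken from the fluorometer's built-in JIP-test software and are not re-derived here. Function names and the example values are ours.

```python
def fv_fm(f0, fm):
    """Potential maximal photochemical quantum yield of PSII: (Fm - F0) / Fm."""
    return (fm - f0) / fm

def npq(fm, fm_prime):
    """Stern-Volmer non-photochemical quenching: Fm / Fm' - 1."""
    return fm / fm_prime - 1.0

# Placeholder values: an unstressed cell (Fv/Fm ~ 0.7, NPQ ~ 0) versus a
# P-starved one reaching NPQ ~ 1.5 as in Figure 8:
print(round(fv_fm(300.0, 1000.0), 2))  # -> 0.7
print(round(npq(1000.0, 400.0), 2))    # -> 1.5
```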
Statistical Treatment
Under the specified conditions, three independent experiments, each in duplicate columns, were carried out for each treatment. The average values (n = 6) and the corresponding standard deviations are shown unless stated otherwise.
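For completeness, the averaging scheme (three experiments × duplicate columns = n = 6) reduces to the following one-liner; the replicate values shown are placeholders.

```python
import numpy as np

def mean_sd(values):
    """Average and sample standard deviation (n - 1 denominator)."""
    v = np.asarray(values, dtype=float)
    return v.mean(), v.std(ddof=1)

# Placeholder replicate readings for one time point:
m, sd = mean_sd([3.9, 4.1, 4.0, 3.8, 4.2, 4.0])
print(f"{m:.2f} +/- {sd:.2f} % DW")  # -> 4.00 +/- 0.14 % DW
```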
Conclusions
Microalgae, including the M. simplicissimum strain studied here, are resilient to very high concentrations of exogenic P i . As it turned out in this work, this resilience fails after the abrupt re-supplementation of P i to the culture pre-starved of P. This was the case even if P i was re-supplemented at a concentration far below the level that can be toxic to the P-sufficient culture. The obtained evidence suggests that this effect can be mediated by the rapid formation of the potentially toxic short-chain PolyP following the mass influx of P i into the P-starved cell. A possible reason for this is that the preceding P starvation impairs the capacity of the cell to convert the newly absorbed P i into a "safe" storage form of long-chain PolyP. We believe that the findings of this study can help to avoid sudden culture crashes and are of potential significance for the development of algae-based technologies for efficient bioremoval of P from P-rich waste streams.
Appendix B
Figure A1. Representative transmission electron micrographs of Micractinium simplicissimum IPPAS C-2056 cells reflecting their condition at different phases of the experiment (for more details, see Figure 1 and Section 4). The micrographs of P-sufficient preculture cells (a), P-starved cells (b-d), and cells sampled 4 h (e-g), 24 h (h), and 72 h (i-k) after re-feeding of the P-starved cultures with P i are shown. C, chloroplast; CW, cell wall; G, dictyosomes of Golgi apparatus; N, nucleus; NE, nuclear envelope; Os, oleosome; P, pyrenoid; Pg, plastoglobuli; S, starch granule; SE, sporangium envelope; T, thylakoids; TL, thylakoid lumen; V, vacuole; VI, vacuolar inclusion. The arrows point to (i) the electron-opaque particles located on the surface of or within the cell, (ii) to their clusters adsorbed on the cell surface, and (iii) to the spherules in the cytoplasm, the nucleus, and the destructed chloroplast. Scale bars: 0.2 µm (a-i) and 0.5 µm (j,k).
Figure A2. Representative transmission electron micrographs of Micractinium simplicissimum IPPAS C-2056 cells reflecting their condition at different phases of the experiment (for more details, see Figure 1 and Section 4). The micrographs of cells (a-d,f) and sporangium (e) sampled 168 h (a-c) and 240 h (d-f) after re-feeding of the P-starved cultures with P i are shown. C, chloroplast; CW, cell wall; Os, oleosome; Pg, plastoglobuli; S, starch granule; SE, sporangium envelope; T, thylakoids; V, vacuole; VI, vacuolar inclusion. The arrows point to (i) the electron-opaque particles located on the surface of or within the cell wall, (ii) to the inner surface of the tonoplast, and (iii) to the spherules in the cytoplasm and the destructed chloroplast. Scale bars: 1 µm (a,c-f) and 0.1 µm (b).
|
2023-05-12T15:13:49.454Z
|
2023-05-01T00:00:00.000
|
{
"year": 2023,
"sha1": "8a4e786942ea361cab68b52f0ef15927ae16f804",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/ijms24108484",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "91235d41b526f083ba5bec3302a3d60049587fc3",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
30377056
|
pes2o/s2orc
|
v3-fos-license
|
Tumor cells
Cdc2 (red) promotes migration by phosphorylating caldesmon (green) in membrane ruffles.

Metastatic tumor cells are dangerous because they not only proliferate but also migrate and invade other tissues. On page 817, Manes et al. reach the surprising conclusion that the cyclin-dependent kinase cdc2 regulates both of these activities. The work identifies a novel signaling pathway and points to a promising new strategy for targeting metastatic cells, but it may also force a reevaluation of some current drug development efforts.
Cdc2 is well known as a cell cycle regulator, but previous work had shown that it also phosphorylates multiple cytoskeletal proteins. In the new work, the authors found that αvβ3 integrin expression in a prostate cancer cell line increases cdc2 mRNA and protein levels and leads to an increase in cdc2 kinase activity. Using cyclin B2 as a cofactor, cdc2 acts in ruffles to phosphorylate the cytoskeleton-associated protein caldesmon. Others have recently shown that this phosphorylation relieves an inhibition of actin polymerization, and thus may be pro-migratory.

The results show that, besides regulating the cell cycle, cdc2 also acts as a downstream effector of αvβ3 to regulate cell migration. This result is surprising: cdc2 is the first cyclin-dependent kinase to be linked to both migration and the cell cycle, and cyclin-dependent kinases were not known to have their expression induced by integrin expression. Manes et al. have found that the unusual dual function of cdc2 in migration and proliferation appears to be a feature of normal cells as well as tumor cells.

Because of its cell cycle function, cdc2 has been a popular target for drug developers, but its connection to two important signaling pathways suggests that cdc2 inhibitors might have wide-ranging side effects. Further dissection of the cdc2-mediated pathway regulating migration may enable the development of drugs that target only the migratory or proliferative signals mediated by cdc2, resulting in greater specificity.

Modulating AMPA receptors makes memories special

In current models of learning and memory, the brain stores information by remodeling synapses, specifically by changing the numbers of AMPA glutamate receptors at postsynaptic densities. Different brain regions lay down memories differently, though, and AMPA receptors are distributed throughout the brain, so there must be another component of the system providing specificity. On page 805, Tomita et al. address this long-standing problem by defining a family of four differentially expressed transmembrane proteins that regulate AMPA receptors in all types of neurons.

Four TARP isoforms (represented by different colors) are segregated in the brain.

Previously, the authors showed that AMPA receptors in the cerebellum are regulated by a transmembrane protein called stargazin, which is mutated in a strain of epileptic mice, but it was unclear whether this was a general mechanism or restricted to the cerebellum. The new study shows that stargazin and three related proteins comprise a family of transmembrane AMPA receptor regulatory proteins (TARPs). TARPs promote the surface expression of functional AMPA receptors, and each TARP shows a specific pattern of expression in the brain. In areas that express multiple isoforms, TARP complexes are strictly segregated.

The expression patterns and properties of the four TARP isoforms could explain how AMPA receptors are differentially regulated in different parts of the brain. TARPs appear to stabilize AMPA receptors, either during biogenesis or at the cell surface, so the TARP isoforms expressed in a particular neuron could determine whether AMPA receptor concentrations are increased, decreased, or maintained at a synapse in response to a given stimulus.

All four isoforms contain a conserved cytoplasmic protein binding domain that appears to drive synaptic clustering, and phosphorylation of this domain might initiate synaptic remodeling. The authors are now studying the prototypic TARP, stargazin, to see whether calcium influxes can induce changes in the phosphorylation status and activity of the protein.
|
2017-05-27T21:12:01.854Z
|
2003-05-26T00:00:00.000
|
{
"year": 2003,
"sha1": "30e833b71fd226afae032d3b635a66da454d88d4",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2199370",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "c5ba3bc4a96d7b0be081caad2f83a73f544c57cc",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
}
|
15607406
|
pes2o/s2orc
|
v3-fos-license
|
Allosteric Modulation of Muscarinic Acetylcholine Receptors
Muscarinic acetylcholine receptors (mAChRs) are prototypical Family A G protein-coupled receptors. The five mAChR subtypes are widespread throughout the periphery and the central nervous system and, accordingly, are widely involved in a variety of both physiological and pathophysiological processes. There currently remains an unmet need for better therapeutic agents that can selectively target a given mAChR subtype to the relative exclusion of others. The main reason for the lack of such selective mAChR ligands is the high sequence homology within the acetylcholine-binding site (orthosteric site) across all mAChRs. However, the mAChRs possess at least one, and likely two, extracellular allosteric binding sites that can recognize small molecule allosteric modulators to regulate the binding and function of orthosteric ligands. Extensive studies of prototypical mAChR modulators, such as gallamine and alcuronium, have provided strong pharmacological evidence, and associated structure-activity relationships (SAR), for a "common" allosteric site on all five mAChRs. These studies are also supported by mutagenesis experiments implicating the second extracellular loop and the interface between the third extracellular loop and the top of transmembrane domain 7 as contributing to the common allosteric site. Other studies are also delineating the pharmacology of a second allosteric site, recognized by compounds such as staurosporine. In addition, allosteric agonists, such as McN-A-343, AC-42 and N-desmethylclozapine, have also been identified. Current challenges to the field include the ability to effectively detect and validate allosteric mechanisms, and to quantify allosteric effects on binding affinity and signaling efficacy to inform allosteric modulator SAR.
INTRODUCTION
G protein-coupled receptors (GPCRs) account for 1-3% of the human genome, are abundantly expressed throughout the central nervous system (CNS) and periphery, and represent the major targets for approximately 30% of all medicines on the world market. However, current CNS-based GPCR drug discovery has a higher than average attrition rate with respect to translating fundamental research to the clinic [41]; this is likely due to two reasons, namely, an insufficient mechanistic understanding of the complexities of CNS GPCR-mediated signaling and a lack of selective pharmacological tools for targeting therapeutically relevant GPCRs. As a consequence, there are many GPCR-based drug discovery programs aiming to develop more selective compounds, both as tools to probe GPCR biology and also as potential therapeutic leads. The traditional approach to GPCR-based drug discovery has been to focus on targeting that region of the receptor utilized by the receptor's endogenous ligand, i.e., the "orthosteric" site [80]. However, it is now recognized that GPCRs possess topographically distinct, allosteric binding sites, and that ligands that bind to these sites (allosteric modulators) offer tremendous potential for more selective and/or effective therapies than conventional orthosteric ligands. This brief review will focus on one of the best-studied families of GPCRs with respect to the phenomenon of allosteric modulation, namely, the muscarinic acetylcholine receptors.
MUSCARINIC ACETYLCHOLINE RECEPTORS (mAChRs): A BRIEF OVERVIEW
The mAChRs belong to the Family A (rhodopsin-like) subclass of GPCRs. Pharmacological and genetic studies have identified five distinct mAChR subtypes, classed M 1 -M 5 . The M 1 , M 3 and M 5 subtypes preferentially couple to the G q/11 family of G proteins, resulting in phospholipase C activation, hydrolysis of inositol phosphates and mobilization of intracellular Ca ++ stores. In contrast, the M 2 and M 4 subtypes preferentially couple to the pertussis toxin-sensitive G i/o family of G proteins, resulting in the inhibition of adenylyl cyclase and subsequent cAMP formation. Although these generalizations speak to the best-characterized signaling pathways associated with the mAChRs, they should by no means be taken as absolutes. All five mAChR subtypes are known to couple promiscuously to multiple G proteins, usually in a cell background-dependent manner.
The mAChRs are widely distributed throughout the periphery and the CNS. Activation of peripheral mAChRs leads to increases in exocrine secretion, contraction of cardiac and smooth muscle (gastrointestinal tract and lungs), and reduced heart rate. Within the CNS, a far more complex array of physiological behaviors is thought to be mediated by the mAChRs, depending on their distribution and localization [13]. M 1 mAChRs are predominantly expressed post-synaptically in forebrain regions including the cerebral cortex, hippocampus and striatum [68,69,76,80,88]. These receptors have long been associated with cognitive deficits linked to neurodegenerative disorders, such as Alzheimer's disease, and as such selective agonists of the M 1 mAChR have been pursued as a potential avenue for treatment of dementia-related conditions [32]. The M 2 mAChR is located pre-synaptically on both cholinergic and non-cholinergic neurons [30,88] in the brainstem, hypothalamus/thalamus, hippocampus, striatum and cortex [68,69,80], and generally serves an inhibitory function on the release of neurotransmitters. It has been suggested that enhancing synaptic ACh levels by selectively inhibiting M 2 autoreceptors may be beneficial in the treatment of psychosis and Alzheimer's disease, and an attractive alternative to the currently used cholinesterase inhibitors for the latter disorder [20]. M 3 mAChRs are expressed at relatively low levels in a number of regions including the cortex, striatum, hippocampus and hypothalamus/thalamus. These receptors have been particularly associated with appetite regulation, and the M 3 receptor is currently a potential target for treatment of obesity and other metabolic disorders [7,34,69,109]. M 4 mAChRs are predominantly found presynaptically in the striatum, hippocampus, cortex and hypothalamus/thalamus [9,69,80]. There is the potential that M 4 mAChR selective antagonists may control tremor associated with Parkinson's disease, whilst agonists may be developed as analgesics, due to the regulation of neurotransmitter release in both cholinergic and non-cholinergic neurons [23,113], and as novel antipsychotics, due to regulation of the dopaminergic system [1,91]. Finally, M 5 mAChRs are discretely expressed at low levels in the brain, in particular in the ventral tegmental area [103,110] as well as co-localised with D 2 dopamine receptors in the substantia nigra pars compacta [107]. They are also implicated in the control of vasodilatation of cerebral blood vessels [108].
M 5 mAChRs are associated with slow activation of dopaminergic neurons and subsequent reward behaviors [111], and as such M 5 selective agents may be used to treat addiction and psychosis, as well as to maintain cerebral blood flow in certain pathophysiological states such as cerebral ischemia.
The pharmacological characterization of mAChRs is not a straightforward task due to the high level of sequence conservation within the orthosteric binding site across all five mAChR subtypes. As a consequence, there are very few orthosteric agonists and antagonists that exhibit high selectivity for one subtype to the relative exclusion of others. The traditional approach to pharmacological delineation of which mAChR governs a given response has thus been to use a combination of compounds, generally antagonists, to build up a receptor profile. For example, the M 1 mAChR is generally defined as having high affinity for pirenzepine and 4-DAMP (4-diphenylacetoxy-N-methyl-piperidine methiodide), whilst having low affinity for methoctramine and himbacine. M 2 mAChRs have high affinity for methoctramine, himbacine and AF-DX 384 and have low affinity for pirenzepine and 4-DAMP. A high affinity for 4-DAMP, and low affinity for pirenzepine, methoctramine and himbacine suggests the involvement of the M 3 mAChR. The presence of the M 4 mAChR can be determined using PD102807 and the toxin, MT3. The M 5 mAChR has been notoriously difficult to identify pharmacologically, however both AF-DX 384 and AQRA741 (11-((4-[4-(diethylamino)butyl]-1-piperidinyl)acetyl)-5,11-dihydro-6H-pyrido(2,3-b)(1,4)benzodiazepine-6-one) have the lowest affinity (at least 10-fold lower) for this subtype than any other.
Given the high degree of sequence homology within the mAChR orthosteric site, and the current paucity of suitably selective mAChR orthosteric ligands, it stands to reason that alternative approaches are required to better achieve target specificity. All five mAChRs possess at least one [25], and likely two [62], extracellular allosteric binding sites for small molecules, and significant efforts have been underway, especially within the last decade and a half, in trying to understand the nature of these sites. The most important challenge in this field remains the ability to detect and quantify the myriad of possible allosteric effects that can arise when two ligands occupy a receptor at the same time.
DESCRIBING ALLOSTERIC INTERACTIONS
The binding of an allosteric ligand to its site will change the conformation of the receptor, which means that the "geography" of the orthosteric site, and any other potential receptor-ligand/protein interfaces, can also change. As a consequence, the binding affinity and/or signaling efficacy of the orthosteric ligand is likely to be modulated, either in a positive or negative manner. The simplest allosteric GPCR model assumes that the binding of an allosteric ligand to its site modulates only the affinity of the orthosteric ligand; this model is referred to as the allosteric ternary complex model (ATCM; Fig. (1A)). Within the framework of an ATCM, the interaction is governed by the concentration of each ligand, the equilibrium dissociation constants (K A and K B , respectively) of the orthosteric and allosteric ligands, and the "cooperativity factor", α, which is a measure of the magnitude and direction of the allosteric interaction between the two conformationally linked sites [24,94]. A value of α < 1 (but greater than 0) indicates negative cooperativity, such that the binding of an allosteric ligand inhibits the binding of the orthosteric ligand. Values of α > 1 indicate positive cooperativity, such that the allosteric modulator promotes the binding of orthosteric ligand, whereas values of α = 1 indicate neutral cooperativity, i.e. no net change in binding affinity at equilibrium. Because the two sites are conformationally linked, the allosteric interaction is reciprocal, i.e., the orthosteric ligand will modulate the binding of the allosteric ligand in the same manner and to the same extent.
Fig. (1). Allosteric GPCR models. A) The simple allosteric ternary complex model (ATCM), which describes the interaction between an orthosteric ligand, A, and allosteric modulator, B, in terms of their equilibrium dissociation constants (KA, KB) and the cooperativity factor, α, which describes the magnitude and direction of the allosteric effect on ligand binding affinity. B) The allosteric two state model (ATSM), which describes allosteric modulator effects on affinity, efficacy and the distribution of the receptor between active (R*) and inactive (R) states, in terms of distinct conformations selected by ligands according to their cooperativity factors for the different states.
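To make the ATCM concrete, the affinity shift it predicts is easy to compute: under the ATCM, the apparent orthosteric dissociation constant in the presence of modulator concentration [B] is K A (1 + [B]/K B )/(1 + α[B]/K B ). The following minimal Python sketch (all parameter values are purely illustrative, not taken from any of the cited studies) demonstrates the characteristic saturable shift:

import numpy as np

def atcm_apparent_KA(B, K_A, K_B, alpha):
    # Apparent orthosteric dissociation constant under the ATCM when the
    # allosteric modulator is present at concentration B.
    return K_A * (1.0 + B / K_B) / (1.0 + alpha * B / K_B)

K_A, K_B = 1e-9, 1e-6              # illustrative dissociation constants (M)
B = np.logspace(-8, -3, 6)         # modulator concentrations (M)

for alpha, label in ((0.01, "negative cooperativity"),
                     (10.0, "positive cooperativity")):
    fold_shift = atcm_apparent_KA(B, K_A, K_B, alpha) / K_A
    print(label, np.round(fold_shift, 2))
# As B >> K_B the fold-shift saturates at 1/alpha: the "ceiling" that
# distinguishes allosteric modulation from simple competition.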
Since the simple ATCM describes the effect of the modulator only in terms of changes in orthosteric ligand affinity, and vice versa, the stimulus that is generated by the ARB ternary complex is assumed to be no different to that imparted by the binary AR complex. In general, many mAChR modulators studied to date appear to behave in a manner consistent with this simple ATCM. However, there is no a priori reason why the conformational change engendered by an allosteric modulator in the GPCR does not perturb signaling efficacy in addition to, or independently of, any effects on orthosteric ligand binding affinity. Indeed, changes in the predominance of drug screening methods from a focus on (orthosteric) radioligand binding to functional assays have unmasked modulators whose actions cannot be sufficiently described by the simple ATCM; it is clear that these latter compounds can affect the signaling capacity of orthosteric agonists [75]. Moreover, there are allosteric ligands that not only modulate orthosteric ligand signaling, but also act as agonists in their own right [54]. To account for such allosteric effects on efficacy, the ATCM has been extended into an allosteric "two-state" model (ATSM; Fig. (1B)) [38]. This model describes GPCR function in terms of: a) the ability of the receptor to constitutively isomerize between active (R*) and inactive (R) states, as determined by the isomerization constant, L; b) the ability of orthosteric and allosteric ligands to modify this transition between states, i.e., to act as either agonists or inverse agonists, which is governed by the parameters α and β; c) the ability of each ligand to allosterically modulate the binding affinity of the other, governed by the "binding cooperativity" parameter, γ; d) the ability of either ligand to modulate the transition to an active receptor state when both ligands are bound, governed by the "activation cooperativity" parameter, δ. While it is widely accepted that GPCRs can adopt multiple active and inactive conformations beyond the simple R and R* paradigm [102], the ATSM nonetheless provides the simplest mechanistic framework with which to describe the wide array of allosteric modulator effects on receptor binding and functional properties.
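In the notation of the ATSM as we read it from the source scheme [38] (the assignment of Greek letters below follows the conventional presentation of that model), the receptor state transitions can be summarized as

\[ \frac{[R^{*}]}{[R]} = L, \qquad \frac{[AR^{*}]}{[AR]} = \alpha L, \qquad \frac{[BR^{*}]}{[BR]} = \beta L, \qquad \frac{[ARB^{*}]}{[ARB]} = \alpha\beta\delta L, \]

with the binding cooperativity γ scaling the affinity of each ligand for the receptor when the other is already bound.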
These considerations suggest that allosteric modulators can be further subdivided on the basis of their phenotypic behaviors, namely, allosteric enhancers (of affinity, efficacy or both), allosteric antagonists (affinity, efficacy or both) and allosteric agonists. It should also be noted that there is no reason why a modulator could not express more than one of these properties concomitantly, e.g., agonism (positive or inverse) together with enhancement or inhibition of orthosteric ligand binding/function [75,90]. Currently, it remains to be determined whether a single phenotype (modulator only) or a combination of both modulator and agonist properties is the optimal approach to treating GPCR-based diseases with allosteric ligands. Most likely, different therapies will benefit differently from one type of phenotype relative to another. Irrespective of phenotype, the most obvious advantage of allosteric ligands is the potential for greater receptor subtype selectivity, as allosteric sites need not have evolved to accommodate an endogenous ligand [17]. An additional advantage of allosteric modulators that have no agonistic activity in the absence of orthosteric ligand is the ability to retain the spatial and temporal aspects of normal (physiological) receptor function; the modulator would only exert an effect when and where the endogenous neurotransmitter or hormone is present. Furthermore, modulators with limited cooperativity will have an inbuilt "ceiling" level to their effect, suggesting that they may be potentially safer than orthosteric ligands if administered in very large doses.
DETECTING ALLOSTERIC INTERACTIONS
By and large, cell-based functional assays have surpassed radioligand binding assays as primary screens for allosteric GPCR modulators. However, there are advantages and disadvantages to both types of assays when measuring allosteric modulator effects, and ideally a combination of binding and functional experiments should be used where possible. When assessing experimental data for possible evidence of allosteric effects, the following approaches are generally utilized:
i) Assessment of the Translocation of Orthosteric Ligand Concentration-Response or Binding Curves
Simple competition between two orthosteric ligands for a common binding site predicts a strict relationship between the apparent potency of one ligand in the absence relative to the presence of the other. This relationship is defined by the factor 1+[B]/K B , where [B] is the antagonist concentration, and K B its equilibrium dissociation constant [2,33]. In functional assays this change in agonist potency is manifested as a progressive dextral displacement of the orthosteric agonist concentration-response curve; in binding assays this is evidenced by a complete inhibition of orthosteric radioligand binding by increasing concentrations of competitor, irrespective of the concentration of the radiolabeled probe. In contrast, because of the cooperativity that characterizes an allosteric interaction, the changes in orthosteric ligand potency in the presence of a modulator can deviate dramatically from this expectation.
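Written out, and assuming the modulator affects only agonist affinity as in the ATCM, the contrast between the two mechanisms is

\[ \mathrm{DR}_{\mathrm{competitive}} = 1 + \frac{[B]}{K_B}, \qquad \mathrm{DR}_{\mathrm{ATCM}} = \frac{1 + [B]/K_B}{1 + \alpha[B]/K_B} \longrightarrow \frac{1}{\alpha} \ \text{as} \ [B] \gg K_B, \]

where DR is the dose (concentration) ratio: the competitive shift grows without limit, whereas the allosteric shift saturates.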
In studies of mAChRs, it is common to see the use of high affinity (non-selective) radiolabeled orthosteric antagonists such as [ 3 H]NMS. Fig. (2) shows the interaction between the allosteric modulators gallamine or alcuronium and the binding of [ 3 H]NMS at M 2 mAChRs. In each instance, the allosteric interaction is evidenced by the deviation of the [ 3 H]NMS binding isotherm from the expectations of simple orthosteric competition. In the case of alcuronium, the specific binding of [ 3 H]NMS is increased due to stabilization by the modulator of an orthosteric ligand-receptor complex characterized by a higher affinity of the radioligand for the receptor than in the absence of modulator. In the case of gallamine, specific [ 3 H]NMS binding is reduced, but not completely; residual [ 3 H]NMS binding is still detectable, indicating that the radioligand is able to occupy the receptor in the presence of gallamine, albeit with significantly reduced affinity. In addition to detecting allosteric ligands that modulate orthosteric ligand affinity, these types of equilibrium binding assays can also be used to quantify the allosteric effect in terms of the simple ATCM, thus providing estimates of modulator K B and α (Fig. 2). It should be noted, however, that for allosteric inhibitors with very high negative cooperativity (α approaches zero), the interaction may not be readily discernible from simple competition due to the profound reduction of radioligand affinity that ensues. In some cases, the allosteric nature of the interaction can be revealed by repeating the experiment in the presence of very high radioligand concentrations [57], but practical considerations may often preclude this approach.
Similar considerations apply to the measurement of allosteric modulator effects in functional assays. If the modulator behaves according to the simple ATCM, then the only effect that should be observed is a parallel translocation of the agonist concentration-response curve either to the left (allosteric enhancement) or the right (allosteric antagonism), with no significant change in the basal or maximum responses (but see below). In addition, if the cooperativity is limited, then the tell-tale sign of an allosteric interaction would be that the agonist curve translocation will approach a limit above which no further shifts occur, irrespective of additional increments in modulator concentration. This is illustrated in Fig. (3A), where the prototypical allosteric modulator, gallamine, displays a progressive inability to antagonize the effects of ACh on the guinea pig electrically-driven left atrium as the modulator concentration is increased. Often, these types of data are expressed in the form of a Schild regression [2], in which case the allosteric effect is seen as a curvilinear regression (Fig. 3B) that asymptotes towards a value of -Log α [75]. As with binding assays, highly negative cooperative interactions may be difficult to distinguish from competitive interactions because the Schild regression will remain linear over a very large range of antagonist concentrations.
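This saturating Schild behaviour is straightforward to reproduce numerically; a small Python sketch with gallamine-like but purely illustrative parameter values:

import numpy as np

K_B, alpha = 1e-6, 0.02            # illustrative modulator parameters

B = np.logspace(-8, -2, 7)         # modulator concentrations (M)
CR = (1.0 + B / K_B) / (1.0 + alpha * B / K_B)   # ATCM concentration ratio

for b, cr in zip(B, CR):
    print(f"[B] = {b:.0e} M   log(CR - 1) = {np.log10(cr - 1.0):6.2f}")
# At low [B] the Schild plot rises with unit slope, mimicking competition;
# as [B] >> K_B it flattens towards log(1/alpha - 1), i.e. roughly -log(alpha).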
ii) Assessment of the Maximum Attainable Response to an Orthosteric Agonist
The increased use of functional screening assays has certainly expanded the spectrum of possible allosteric effects that can be observed, specifically, by facilitating the detection of compounds that alter orthosteric ligand efficacy, as well as allosteric compounds that modify receptor activity in their own right. The most common method of detecting an allosteric modulator that affects orthosteric ligand efficacy is to monitor effects on the maximal agonist response in the presence of increasing modulator concentrations. In contrast to changes in curve translocation (agonist potency), which can reflect effects on both agonist affinity and efficacy, changes in maximal agonist responsiveness are more unambiguously attributed to modulation of agonist efficacy. Fig. (4) shows the interaction between the allosteric modulator, alcuronium, and the partial orthosteric agonist, pilocarpine, at human M 2 mAChRs measured using a Cytosensor microphysiometer (which quantifies changes in whole cell extracellular acidification rates upon activation). Although the modulator is an allosteric enhancer of [ 3 H]NMS binding affinity (Fig. 2), it is clear that, when tested against pilocarpine, the same compound is an allosteric inhibitor of orthosteric agonist efficacy [112]. This is an example of the "probe-dependence" of allosteric interactions, namely, that the manifestation of cooperativity between the orthosteric and allosteric sites is totally dependent on the chemical nature of the compounds occupying the sites; the same allosteric modulator can be negatively cooperative with one orthosteric ligand, and positively cooperative with another.
Fig. (4). Allosteric modulation of orthosteric agonist efficacy. Interaction between alcuronium and pilocarpine at human M2 mAChRs stably expressed in CHO cells. Receptor activation was quantified as a change in the extracellular whole cell acidification rate with a Cytosensor microphysiometer.
In practice, the ability to optimally discern an allosteric effect on agonist efficacy requires that the assay be performed under conditions where receptor reserve and/or stimulus-response coupling efficiency is sufficiently low, such that the maximum effect of the orthosteric agonist in the absence of modulator is below the maximum possible effect attainable in the assay. Under these conditions, modulation of agonist efficacy will then manifest as either a reduction or an increase in the maximum observed response. In contrast, over-expressed or very efficiently-coupled receptor-transducer systems usually result in high degrees of signal amplification such that most agonists utilized behave as full agonists, i.e., yield the maximum possible cellular/tissue response. When the cellular assay system imposes such a ceiling, allosteric enhancement of agonist efficacy would only manifest as an increase in agonist potency, and may be misinterpreted as an allosteric effect on affinity only. Similarly, allosteric inhibition of agonist efficacy in highly amplified signaling assays can result in progressive reductions in potency with no effect on agonist maximum response over the modulator concentration ranges examined. Although effects on agonist maximum response (with/without changes in agonist potency) can be used to infer allosteric modulation of efficacy, an important caveat to the interpretation of functional assays is that the lack of such an effect (with/without effects on agonist potency) cannot be used as evidence to rule this out, unless it is known that the system under investigation lacks receptor reserve.
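The masking effect of receptor reserve can be illustrated with the Black-Leff operational model of agonism (our choice of framework for illustration; the cited studies do not prescribe it). With a high transducer ratio tau, a several-fold allosteric reduction in efficacy barely dents the maximal response and appears mainly as a potency shift:

import numpy as np

def operational_response(A, K_A, tau, Em=1.0, n=1.0):
    # Black-Leff operational model: response to agonist concentration A
    # given functional affinity K_A and transducer ratio tau.
    return Em * (tau * A)**n / ((K_A + A)**n + (tau * A)**n)

A = np.logspace(-10, -3, 400)      # agonist concentrations (M)
K_A = 1e-6                         # illustrative functional affinity (M)

for tau in (100.0, 3.0):           # high vs low receptor reserve
    for eff in (1.0, 0.3):         # modulator reduces efficacy ~3-fold
        r = operational_response(A, K_A, tau * eff)
        ec50 = A[np.argmin(np.abs(r - r.max() / 2.0))]
        print(f"tau = {tau * eff:5.1f}  Emax = {r.max():.2f}  EC50 = {ec50:.1e} M")
# With tau = 100 -> 30 the maximum barely changes and only potency shifts;
# with tau = 3 -> 0.9 the maximum falls visibly (about 0.75 -> 0.47).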
iii) Assessment of Orthosteric Ligand Binding Kinetics
Since the affinity of any ligand for its receptor is determined by the ratio of its association to dissociation rate constants, allosteric interactions that follow the simple ATCM can be detected by comparing the association and/or dissociation rates of a radiolabeled orthosteric ligand in the absence and presence of putative allosteric modulator. Unfortunately, the routine measurement of effects on association kinetics is problematic, because competitive orthosteric ligands will alter the "apparent" association rate simply by delaying the time taken for the radiolabeled probe to reach equilibrium. In contrast, the only way that the dissociation rate of a pre-equilibrated radioligand-receptor complex can be modified is if the test ligand binds to another site on this complex to change receptor conformation prior to the radioligand dissociating.
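The principle can be sketched numerically. Assuming, purely for illustration, that the modulator equilibrates at its allosteric site much faster than the radioligand dissociates, the observed off-rate is simply the occupancy-weighted average of the off-rates from the free and modulator-bound complexes:

import numpy as np

def k_obs(B, K_B_occ, k_off_free, k_off_mod):
    # Occupancy-weighted observed dissociation rate of the radioligand,
    # assuming rapid modulator equilibration at the allosteric site.
    occ = B / (B + K_B_occ)
    return (1.0 - occ) * k_off_free + occ * k_off_mod

k_off_free, k_off_mod = 0.2, 0.02  # 1/min, illustrative off-rates
K_B_occ = 1e-6                     # modulator K_B at the occupied receptor (M)

for B in (0.0, 1e-6, 1e-4):
    k = k_obs(B, K_B_occ, k_off_free, k_off_mod)
    remaining = np.exp(-k * 30.0)  # fraction still bound after 30 min
    print(f"[B] = {B:.0e} M   k_obs = {k:.3f} /min   bound at 30 min = {remaining:.2f}")
# A modulator that slows dissociation leaves markedly more radioligand bound,
# which is detectable even when equilibrium binding shows no net change.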
Radioligand dissociation kinetic assays thus represent a most useful means for detecting and validating an allosteric mode of action. Moreover, under certain conditions these assays can also be used to quantify the allosteric effect in terms of the parameters of the ATCM [52,60]. Another advantage of these assays is that they have the potential in some cases to detect modulators with neutral binding cooperativity (α = 1) at equilibrium. Neutral cooperativity can arise as a consequence of either a lack of effect on orthosteric ligand association or dissociation rates or due to the modulator altering both properties to the same extent. If the latter mechanism is operative, then a dissociation kinetic assay will detect allosteric modulation even when an equilibrium assay will not [51]. However, dissociation kinetic assays are not the be-all and end-all for detecting allosteric modulator effects - there are a number of situations where their utility is limited. The first is when the conformational change induced by the allosteric ligand manifests predominantly on orthosteric ligand association, and not dissociation; without an appropriately designed association kinetic assay, such a modulator would not be detected [62]. The second situation is for interactions characterized by very high negative cooperativity; under this condition, the affinity of the modulator for the radioligand-occupied receptor may be so low that it cannot bind to perturb dissociation kinetics unless impractically high concentrations of modulator are utilized. A third situation where the dissociation kinetic assay can fail is when the conformational change mediated by the modulator is manifested predominantly on effector coupling domains (i.e. efficacy modulation) and not on the orthosteric binding pocket.
Fig. (3). A) Interaction between acetylcholine and gallamine at native M2 mAChRs in the guinea pig electrically-driven left atrium. Data taken from [16]. B) Concentration-ratios (CR) were derived from the data in panel A and plotted in the form of a Schild regression. Solid curve denotes the fit of the ATCM to the data. Dashed line denotes the expected Schild regression for a simple competitive interaction.
The ability of certain allosteric ligands to alter dissociation of orthosteric ligands from the receptor also has implications for the design and interpretation of "equilibrium" binding studies. The time taken to reach equilibrium is limited by the rate of the slowest-dissociating ligand [78]; thus, at very high concentrations of an allosteric modulator that retards orthosteric ligand dissociation, equilibrium may not actually be achieved over the time course of the assay. As a consequence, equilibrium binding experiments may yield complex modulator/radioligand interaction curves that appear inconsistent with the ATCM [3,60,84]. In the case of allosteric enhancers, this kinetic artifact can result in a bell-shaped binding curve; for allosteric inhibitors, this can result in a biphasic inhibition curve [3].
PROTOTYPICAL ALLOSTERIC MODULATORS OF THE mAChRs
Arguably, the most comprehensively studied allosteric modulators of the mAChRs are represented by neuromuscular-blocking agents, such as gallamine and alcuronium, and a series of alkane-bis-onium compounds related to hexamethonium and exemplified by ligands such as W84 and its heptamethylene congener, C 7 /3-phth (Fig. 5). Collectively, studies with these ligands have resulted in extensive evidence for at least one allosteric site on all five mAChRs that is likely utilized by all these compounds, albeit with significantly different affinities [14,28]. This will be referred to herein as the "common" allosteric site.
The earliest evidence for allosteric modulation of the mAChRs, and indeed of any GPCR, was obtained in isolated tissue bioassays, specifically, investigations of the effects of alkane-bis-onium modulators and, subsequently, gallamine, at native guinea pig atrial M 2 mAChRs [19,70]. The key finding from these early functional assays was that the antagonism by the modulators of orthosteric agonist responses approached a limit at the highest modulator concentrations, resulting in curvilinear Schild regressions. Importantly, with the subsequent widespread adoption of radioligand binding assays, the allosteric properties of these compounds were validated and further studied, confirming that their behavior is generally consistent with the predictions of the simple ATCM. A seminal study of the effects of gallamine on M 2 mAChRs by Stockton et al. [94] identified characteristics that have come to be associated with many mAChR modulators, including incomplete inhibition of specific [ 3 H]NMS binding at high modulator concentrations and a retardation of the dissociation kinetics of [ 3 H]NMS. Subsequent functional and radioligand binding studies have been extensively used to demonstrate the probe-dependence of the allosteric effect, as well as the fact that most of these prototypical common-site modulators have highest affinity for the M 2 mAChR and lowest affinity for the M 5 mAChR [11,12,15,22,25,39,55,65,71,72].
Another significant finding in the study of mAChR allosterism was the identification of alcuronium as the first allosteric enhancer of the binding of an orthosteric mAChR ligand [84,101]. This modulator acts at the same site as that recognized by gallamine and the alkane-bis-onium modulators [56,85], and has proven a very useful tool in demonstrating the striking nature of cooperativity; at the M 2 and M 4 subtypes, alcuronium enhances [ 3 H]NMS binding, whereas at the M 1 , M 3 and M 5 subtypes, it inhibits it [43]. When tested against different orthosteric antagonists and agonists, varying degrees of cooperativity are observed (mostly negative) [43,45,111]. The alkaloid structure of alcuronium has also prompted investigations into related compounds, leading to the identification of strychnine, vincamine, eburnamonine, and brucine and its analogs as allosteric mAChR modulators [59,86]. Importantly, studies on this series of alkaloids also resulted in the first identification of allosteric enhancers of agonist binding at the mAChRs [5,45,61]. Obviously, the most important agonist with respect to allosteric modulation is the endogenous neurotransmitter, ACh, and proof-of-concept studies have revealed how positive, neutral and negative cooperativity with this agonist is possible, depending on the modulator and the mAChR subtype [5,45,61]. Most recently, the identification of thiochrome as a selective allosteric enhancer of ACh at M 4 mAChRs has added a new dimension to these studies, because the modulator binds with similar affinity at all mAChRs and achieves its selective effect purely from the positive cooperativity between itself and ACh at the M 4 mAChR [64].
Given that mAChR allosteric modulators can display significant degrees of structural diversity, it may be asked whether all these compounds do, indeed, bind to a common allosteric site, or whether they utilize different allosteric sites. The most important pharmacological validation of the common-site hypothesis has been derived from interaction studies between different types of modulators. In particular, the identification of obidoxime (Fig. 5) and d-tubocurarine as allosteric mAChR modulators that bound with reasonable affinity but exerted only a weak effect on radioligand dissociation kinetics [26,105] meant that they could be used in combination with more efficacious modulators to antagonize the actions of the latter, as would be expected from competition for a common binding site [26,96,106].
The most extensive SAR studies focusing on mAChR allosteric modulators have thus led to the following two general categories: neuromuscular blockers and bis-onium modulators, and monoquaternaries and tertiary amines related to alkaloids; excellent reviews on the SAR of these ligands have been published recently [6,77]. Other researchers in the field have also used selected members of these prototypical modulator families to design novel pharmacological tools with which to better probe the relationship between the common allosteric site and the orthosteric site on mAChRs. One important approach has been the development of [ 3 H]dimethyl-W84 (Fig. 5), the first radiolabeled allosteric modulator of the M 2 mAChR [97]. This compound may allow for a more direct screening of putative common-site modulators via simple competition binding assays [98,99], but has also been used to validate the ATCM as an appropriate mechanistic descriptor of the interaction between the orthosteric site and prototypical common-site modulators [98]. Another recent approach is the development of "hybrid" ligands composed of an orthosteric moiety and an allosteric moiety separated by an appropriate covalent linker, which can, theoretically, bind both the orthosteric and allosteric sites. The idea behind this approach is to utilize the allosteric site to achieve selectivity, while still targeting the orthosteric site for the purpose of receptor activation or antagonism [21,36]. Although the interpretation of the mode of action of these bivalent ligands is likely to be more complex than that predicted by the simple ATCM [75], the use of such ligands highlights but one of the many avenues available for selective mAChR targeting via exploiting the pharmacology of the prototypical allosteric ligands.
"ATYPICAL" MODULATORS OF THE mAChRs
In addition to the well-studied common mAChR allosteric site, a second site was more recently defined pharmacologically by Lazareno, Birdsall and colleagues [62,63]. A number of indolocarbazole derivatives of staurosporine (Fig. 6), exemplified by the compound, KT5720, were found to show positive, negative and neutral cooperativity with ACh depending on the mAChR subtype, yet did not appear to interact with the prototypical common-site ligands, gallamine and brucine [62]. The novel compounds differ from those reported to act at the common site, in that they generally do not possess a positively charged nitrogen, tend to show highest affinity for the M 1 rather than M 2 mAChR, and have little or no effect on [ 3 H]NMS dissociation rate. Similarly, analogs of the commercially available neurokinin receptor antagonists, WIN 62,577 and WIN 51,708 (Fig. 6), as well as the parent compounds themselves, were found to interact with gallamine and strychnine in a noncompetitive manner, whilst competing with staurosporine and KT5720 [63]. The WIN compounds also had little or no effect on [ 3 H]NMS dissociation, with the exception of the derivative, PG987, which actually accelerated [ 3 H]NMS dissociation. A more recent study, focusing predominantly on the M 4 mAChR, found evidence for a negatively cooperative interaction between WIN 62,577 and each of C 7 /3-phth, alcuronium or brucine when the orthosteric site of the receptor was unoccupied [59]. Taken together, these findings indicate that a complex network of cross-interactions is attainable at the mAChRs. It is possible that multiple allosteric sites are also present on other GPCRs.
Fig. (6). Representative "second-site" and "atypical" mAChR modulators.
In addition to the "second-site" modulators described above, there are also a number of other allosteric ligands of the mAChRs that are classed as "atypical" because they exhibit pharmacological behaviors not consistent with the simple ATCM. These compounds include tacrine, the bispyridinium 4,4'-bis-[(2,6-dichloro-benzyloxyimino)-methyl]-1,1'-propane-1,3-diyl-bis-pyridinium dibromide (Duo 3) and a group of pentacyclic carbazolones [35,81,96]. Tacrine (Fig. 6) is a well known anti-cholinesterase that has been reported to inhibit both the equilibrium binding and the dissociation kinetics of [ 3 H]NMS with slope factors significantly greater than 1 [31,50,81,82,99]. This behavior is consistent with the expectations of positive homotropic cooperativity, i.e. the binding of one tacrine molecule promotes the binding of another [82]. However, since this behavior is retained in dissociation kinetic assays, where the orthosteric site is occupied by radioligand, the two interacting tacrine molecules must be utilizing two different allosteric sites, perhaps across a mAChR dimer. Alternatively, tacrine is small enough such that two molecules can conceivably bind within the "common" allosteric site. There are two lines of evidence to support the latter conclusion. First, tacrine appears to interact with the common-site modulators obidoxime [26] and [ 3 H]dimethyl-W84 [99]. Second, when two molecules of tacrine are covalently attached to one another to form a dimeric molecule, the affinity of this dimer for the M 2 mAChR was significantly increased, yet its interaction with [ 3 H]NMS no longer showed slope factors greater than 1 [100].
The bispyridinium compound Duo3 (Fig. 6) is another allosteric mAChR modulator [89] that displays slope factors greater than 1 with respect to inhibition of both [ 3 H]NMS and [ 3 H]dimethyl-W84, as well as a non-competitive interaction with obidoxime [96,99]. It has been suggested that Duo3 displays positive homotropic cooperativity; however, unlike tacrine, Duo3 is a large molecule and unlikely to be binding in multiple equivalents within a single, common allosteric site [100]. It is possible that Duo3 represents an allosteric modulator that may exert its effects across receptor dimers, although this remains to be determined.
ALLOSTERIC EFFECTS ON mAChR SIGNALING AND OTHER BEHAVIORS
As outlined previously, the binding of an allosteric modulator induces a unique receptor conformation that has the potential to affect not only orthosteric ligand affinity, but also efficacy and other receptor behaviors; the abolition by alcuronium of pilocarpine's efficacy [112; see also Fig. (4)] is one such example. In addition, certain allosteric ligands may promote or inhibit receptor activation even in the absence of agonist. Indeed, W84 has been shown to be an inverse agonist with respect to [ 35 S]GTPγS binding in atrial membranes [40]. Alcuronium (at the M 2 mAChR) and strychnine (at M 1 and M 2 subtypes) have both also been identified as inverse agonists with respect to [ 35 S]GTPγS binding in recombinant expression systems [60,112]. These findings are generally in accord with the expectation that if a modulator induces a receptor conformation that is negatively cooperative with respect to agonist binding, then the conformation may also predispose the receptor towards a reduced probability of adopting an active state. However, a study by Jakubik et al. (1996) [44] found that alcuronium, gallamine, and strychnine were partial (positive) agonists at the M 2 , M 4 and M 1 mAChR subtypes. These findings have not been reported elsewhere, and may reflect particular requirements with respect to receptor-G protein stoichiometry and the use of recombinant expression or artificial reconstitution systems [46].
In recent years, there has been an increase in the number of reports identifying putative allosteric agonists of GPCRs. With respect to the mAChRs, McN-A-343 (4-(m-Chlorophenylcarbamoyloxy)-2-butynyltrimethylammonium chloride; Fig. (7)), probably the first mAChR agonist known to display functional selectivity [87], was actually found to interact allosterically with [ 3 H]NMS in an equilibrium radioligand binding assay on rat atrial M 2 mAChRs over twenty years ago [4]. An allosteric mode of interaction with pirenzepine had also been suggested [10], and the agonist was later shown to slow the dissociation kinetics of [ 3 H]NMS at cardiac M 2 mAChRs [106]. However, this latter effect was not competitive with d-tubocurarine, and it was suggested that McN-A-343 may in fact bind in two orientations, one to the orthosteric site, and another to an allosteric site (Waelbroeck, 1994). When investigated in functional assays [13], the interaction between carbachol and McN-A-343 appeared consistent with simple competition, suggesting that McN-A-343 does indeed recognize the orthosteric site, or else displays very high negative cooperativity against ligands such as carbachol. The ultimate delineation of the mode of action of McN-A-343 as both an agonist and an allosteric modulator is likely to provide novel insights into mAChR activation mechanisms.
A number of other agents have more recently been identified as potential mAChR allosteric agonists (Fig. 7): AC-42 (4-n-Butyl-1-[4-(2-methylphenyl)-4-oxo-1-butyl]-piperidine), its analogue AC-260584 (4-[3-(4-butylpiperidin-1-yl)-propyl]-7-fluoro-4H-benzo[1,4]oxazin-3-one) and N-desmethylclozapine, the major metabolite of the antipsychotic clozapine. AC-42 displays unprecedented functional selectivity for the M 1 mAChR relative to all other subtypes, even though it appears to bind with similar affinity at all subtypes. This led to the suggestion that it recognized an "ectopic" site different to that utilized by classic orthosteric ligands [93]. A subsequent study by Langmead et al. [54] provided conclusive evidence for an allosteric mode of action of AC-42. Specifically, the compound was shown to retard the dissociation of [ 3 H]NMS from M 1 mAChRs and, in cell-based functional assays, the antagonism of AC-42-mediated Ca ++ mobilization at M 1 mAChRs by atropine was characterized by curvilinear Schild regressions, again consistent with an allosteric mode of interaction [54]. Most recently, AC-260584, a more potent AC-42 analogue, was also shown to act allosterically at the M 1 mAChR [92], thus highlighting that a clear SAR is likely to exist that defines allosteric M 1 mAChR agonism.
Like AC-42, N-desmethylclozapine is a functionally-selective M 1 mAChR agonist that has been suggested to act allosterically. The major lines of evidence for such a mechanism, however, are mainly indirect and based on mutagenesis studies that show differential effects of classic orthosteric site mutations in the M 1 mAChR on orthosteric ligands such as carbachol, on the one hand, and functionally selective agonists like AC-42 and N-desmethylclozapine, on the other [92,95].
In addition to acute effects on classic signaling pathways, it is now acknowledged that GPCR ligands can affect a far wider range of receptor behaviors that may have a significant impact on the desired therapeutic endpoint. Thus, the capacity of a GPCR ligand to impact phenomena such as receptor desensitization, phosphorylation and internalization may not mirror its effects in acute signaling assays [49]. It is of note, therefore, that a recent study found that prolonged exposure of CHO cells stably expressing the human M 2 mAChR to the allosteric modulators gallamine, alcuronium or C 7 /3-phth resulted in a significant up-regulation of M 2 mAChR expression, likely due to an alteration of receptor internalization [74].
MUTATIONAL STUDIES OF THE ALLOSTERIC SITE(S)
There have been two general approaches utilized to map allosteric binding sites on the mAChRs. The most widespread approach has been to use receptor chimeras or site-directed mutagenesis of selected amino acids of one mAChR subtype into their (non-conserved) counterparts of another subtype. The other approach has been to focus on conserved amino acids across mAChR subtypes in order to define residues likely to be critical to the "common" allosteric site at all five subtypes. To date, there have been no reported studies that have aimed to map the location of the "second" allosteric site that is utilized by staurosporine and related compounds.
Since most prototypical (common-site) modulators show highest affinity for the M 2 mAChR, the bulk of structural studies of mAChR allosteric sites have focused on this subtype, in particular exploiting differences between the M 2 mAChR and the M 5 mAChR, since the latter generally displays lowest affinity for many prototypical modulators. Overall, such studies have identified roles for the second and third extracellular loops as well as transmembrane (TM) domain 7 for conferring affinity and selectivity to a diverse range of modulators [8,27,29,37,42,47,52,67,100,104], including gallamine, alkane-bis-onium compounds, alcuronium and d-tubocurarine derivatives. For instance, early site-directed mutagenesis studies revealed the 172 EDGE 175 sequence, specific to the second extracellular loop of the M 2 mAChR, to be required for gallamine selectivity [67]; E 172 and E 175 have been highlighted as particularly important, since substitution of these amino acids to their M 1 counterparts (L and G respectively) resulted in decreased affinity for gallamine and W84 [42]. A tyrosine in position 177, also in the second extracellular loop, plays a key role in contributing to the M 2 versus M 5 selectivity of WDuo3 [100] and binding affinity for diallylcaracurine V and alkane-bis-onium compounds [42,104]. N 419 , at the junction between the third extracellular loop and TM7, plays a role in M 2 versus M 5 selectivity of gallamine and W84, although a more dominant residue appears to be the nearby (TM7) T 423 [8,29,37,42,104]. In terms of conserved residues, a tryptophan in TM7 (W 422 in the M 2 mAChR) appears to be the dominant amino acid influencing common-site modulators [73,83].
Similar studies have focused on the differences in modulator activities between the M 3 and M 2 mAChRs. Thus, introduction of an asparagine at position 423 of the M 3 mAChR (corresponding to N 419 of the M 2 mAChR) resulted in an increase of gallamine's affinity [52], consistent with the important role that this particular amino acid can play in this position. In addition, N 419 , V 421 and T 423 of the M 2 mAChR were found to be important in the manifestation of positive cooperativity of strychnine-like modulators [47]. Collectively, these mutagenesis studies, together with recent homology modeling based on the crystal structure of inactive state bovine rhodopsin [58,83,104], have resulted in the consensus view that the common allosteric binding site for the majority of prototypical allosteric M 2 mAChR modulators is located at the opening of the orthosteric binding pocket, which is itself buried further within the TM bundle. Fig. (8) illustrates the possible relationship between key residues of the orthosteric and allosteric pocket on the M 2 mAChR, based on homology to bovine rhodopsin.
Fig. (8). Schematic representation of the relationship between residues comprising the orthosteric and "common" allosteric site on the M2 mAChR, using a homology model based on the crystal structure of inactive state bovine rhodopsin. Regions highlighted in blue incorporate the following orthosteric-site residues: W 99 , D 103 , S 107 , Y 110 , W 155 , T 187 , T 190 , W 400 , Y 403 , N 404 , Y 426 , Y 430 . Regions highlighted in purple incorporate the following allosteric site residues: 172 EDGE 175 , Y 177 , N 419 , N 422 , N 423 . The residues in yellow represent a cysteine pair, and corresponding disulphide bond between the second extracellular loop and top of TM3, that are highly conserved in over 90% of GPCRs.
In contrast to the prototypical modulators, the binding of putative allosteric agonists is believed to be via mAChR epitopes distinct from both the orthosteric and common allosteric sites [92,93,95], although it should be noted that it is far more difficult to interpret the results of mutagenesis studies on agonists because the mutations can affect not only binding affinity, but efficacy as well. Initial studies aimed at investigating the high degree of functional selectivity of AC-42 for the M 1 mAChR utilized M 1 /M 5 chimeras, and suggested roles for the N-terminus/TM1 and third extracellular loop/TM7 in AC-42 agonism [93]. Additionally, mutagenesis of Y 381 , a key orthosteric site residue in TM6, to Ala of the M 1 mAChR led to a dramatic reduction in the affinity and potency of carbachol, but had no effect on AC-42 [93]. Interestingly, this same mutation actually led to an increase in the agonistic activity of N-desmethylclozapine [95], clearly indicating that the latter agonist utilizes a different mode of attachment to classic orthosteric ligands, such as carbachol. A more recent study investigating mutations in TM3 known to contribute to the orthosteric site, and which dramatically reduce the efficacy and/or potency of carbachol, found varied effects on AC-42, AC-260584 and N-desmethylclozapine [92]. Specifically, a W 101 A substitution increased AC-42 and AC-260584 potency and efficacy but had no effect on N-desmethylclozapine. The Y 106 A mutation increased the efficacy of N-desmethylclozapine, whilst S 109 A increased AC-42, AC-260584 and N-desmethylclozapine potency [92].
CONCLUSION
Allosteric modulation of GPCRs represents an exciting and growing field of research, both with respect to drug discovery and a better understanding of GPCR structure and function. The mAChRs remain one of the key model systems for investigating this phenomenon at Family A GPCRs. Not only are there now a good number of structurally diverse allosteric modulators identified for this receptor family, but the receptors themselves remain important therapeutic candidates that have yet to be optimally targeted, thus ensuring an impetus for additional exploration of allosteric ligand chemical space. As with many nascent fields, however, significant challenges remain. The prevalence and relevance of allosteric agonists of the mAChRs, for example, has not been fully gauged as yet. Mutagenesis and molecular modeling studies aimed at mapping putative allosteric sites, with a view towards relating structure to function and identifying novel ligands, still have much ground to cover. Nonetheless, the potential rewards are significant and, as such, the study of mAChR allosterism remains one that is likely to deliver significant pharmacological dividends.
|
2018-04-03T01:32:10.151Z
|
2007-08-31T00:00:00.000
|
{
"year": 2007,
"sha1": "eb54b1b0835ef321196a6339a25a5fce4ebee143",
"oa_license": "CCBY",
"oa_url": "https://europepmc.org/articles/pmc2656816?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "eb54b1b0835ef321196a6339a25a5fce4ebee143",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
247550350
|
pes2o/s2orc
|
v3-fos-license
|
Oncoplastic breast consortium recommendations for mastectomy and whole breast reconstruction in the setting of post-mastectomy radiation therapy
Aim Demand for nipple- and skin-sparing mastectomy (NSM/SSM) with immediate breast reconstruction (BR) has increased at the same time as indications for post-mastectomy radiation therapy (PMRT) have broadened. The aim of the Oncoplastic Breast Consortium initiative was to address relevant questions arising with this clinically challenging scenario. Methods A large global panel of oncologic, oncoplastic and reconstructive breast surgeons, patient advocates and radiation oncologists developed recommendations for clinical practice in an iterative process based on the principles of Delphi methodology. Results The panel agreed that surgical technique for NSM/SSM should not be formally modified when PMRT is planned with preference for autologous over implant-based BR due to lower risk of long-term complications and support for immediate and delayed-immediate reconstructive approaches. Nevertheless, it was strongly believed that PMRT is not an absolute contraindication for implant-based or other types of BR, but no specific recommendations regarding implant positioning, use of mesh or timing were made due to absence of high-quality evidence. The panel endorsed use of patient-reported outcomes in clinical practice. It was acknowledged that the shape and size of reconstructed breasts can hinder radiotherapy planning and attention to details of PMRT techniques is important in determining aesthetic outcomes after immediate BR. Conclusions The panel endorsed the need for prospective, ideally randomised phase III studies and for surgical and radiation oncology teams to work together for determination of optimal sequencing and techniques for PMRT for each patient in the context of BR.
Introduction
Selection criteria for nipple- or skin-sparing mastectomy (NSM and SSM respectively) in conjunction with immediate breast reconstruction (BR) have become less stringent with an increase in proportion of patients potentially eligible for breast conserving therapy undergoing mastectomy and BR [1,2]. A parallel trend has been broadening of the indications for post-mastectomy radiation therapy (PMRT) that is often combined with nodal irradiation for low volume nodal disease [3][4][5][6][7][8].
Hence, there is dual consideration of both BR and PMRT for many patients who undergo mastectomy for surgical treatment of breast cancer [9,10]. PMRT increases risk of complications and diminishes aesthetic outcomes and quality of life (QoL) following BR, especially when implant-based [11][12][13]. The 2018 OPBC consensus conference revealed major heterogeneity in BR practice in the context of planned PMRT with a majority of the panel agreeing that type and timing of BR in this setting should be standardized [14]. The 2019 OPBC consensus conference ranked type and timing of BR in the setting of PMRT as the two most important knowledge gaps in the wider field of BR [15]. This year's OPBC consensus conference therefore systematically addressed relevant questions pertaining to type and timing of BR when PMRT is planned and provided expert recommendations for clinical practice.
2021 OPBC expert panel
The OPBC was founded in March 2017 as a global non-profit organization and comprises a membership of 616 oncologic, oncoplastic and reconstructive breast surgeons and 38 patient advocates from 79 countries at the time of manuscript writing. The OPBC is committed to bringing safe and effective oncoplastic breast surgery to routine patient care, namely oncoplastic breast conserving surgery, NSM/SSM with immediate BR and aesthetic flat closure after conventional mastectomy. The global 2021 OPBC expert panel was selected by evident expertise in breast cancer management with a practice primarily dedicated to breast cancer. Panellists originated from 22 countries and included 68 oncologic, oncoplastic and plastic breast surgeons from private, public, community and academic settings, six patients with international renown as patient advocates along with nine radiation oncologists with robust scientific credentials and international standing (appendix B.3.1-2). Finally, 52 non-panel OPBC members attended the conference and performed live audience voting, which was displayed separately to panel voting (appendix B3.3.).
Search strategy and selection criteria
We purposefully refrained from performing a systematic literature search as a basis for questionnaire development in order for the OPBC to identify and address questions relevant to current clinical practice irrespective of available evidence to inform treatment. Nonetheless, in support of these aims, two members of staff (Elisabeth Kappos and Nadia Maggi) independently performed specific searches in PubMed, MEDLINE, Embase and the Cochrane Central Register of Controlled Trials (CENTRAL) from 2000 to 2021 (search terms "mastectomy, subcutaneous" OR "mastectomy" AND "subcutaneous" OR "subcutaneous mastectomy" OR "nipple" AND "sparing" AND "mastectomy" OR "nipple sparing mastectomy" OR "breast reconstruction" OR "whole-breast reconstruction" OR "breast reconstructive surgery" OR "autologous breast reconstruction" OR "implant-based breast reconstruction" OR "post-mastectomy radiotherapy" OR "irradiation" OR "radiotherapy" OR "breast reconstruction algorithm" OR "PMRT reconstruction" OR "PMRT breast reconstruction" OR "breast reconstruction algorithm radiation" OR "breast reconstruction" AND "radiation"). Their review of all abstracts and full texts of relevant articles was used to finalize the questionnaire and helped the chairs and moderators to prepare for the consensus conference. Questions, answers and content of discussions were placed in context with published evidence in the form of this report.
Development of questionnaire for pre-voting
The iterative process in question development, pre-voting, presentation of results, discussion, live re-voting and development of phraseology for recommendation outcomes followed a modified Delphi methodology. The predefined protocol was published on the OPBC website on June 08, 2021 (appendix A) [16]. The protocol pre-specified the identification of questions to include, as follows: Those questions from the OPBC 2018 conference that reported disagreement among experts on NSM/SSM and immediate BR were included with the two co-chairs adding key questions based on their expert opinion. This preliminary set of questions was amended by expert representatives based on the specific literature search. At that point in time, the list was sent for review to the entire OPBC community as well as nine radiation oncologists. The chairs adjusted these questions according to feedback and finalized the list by iterative consultation with the panellists over the months preceding the conference (appendix C).
The iterative voting process started with pre-voting, which also allowed participation of conference non-attenders, provided opportunity to prepare the agenda for live voting that focused on areas of controversy, and served as back-up in the event of technical failure during live conference voting. Results of pre-voting were revealed to panel and audience for the first time during the conference thereby promoting spontaneous discussion.
Consensus conference with live voting
The 2021 OPBC consensus conference on September 02, 2021 was held virtually using online video conferencing software (Zoom by Zoom Video Communications, Inc). This platform provided separate rooms for the OPBC panel and OPBC members who registered for audience participation. Three panel members presented their respective views as plastic surgeon, oncoplastic surgeon and radiation oncologist with subsequent structured discussion. In the second half, outcomes of pre-voting were presented, followed by live voting by both panellists and audience in case of controversy identified from pre-voting and whenever pre-voting results were challenged or demanded reinforcement. In addition, the customized live voting platform allowed questions to be devised ad hoc based on panel discussion. Results of live voting were displayed separately for the OPBC panel versus audience.
Final questionnaire
The final questionnaire comprised a total of 66 questions and subquestions in nine categories. Eight questions were newly formulated or adjusted ad hoc during the conference based on the discussion (Fig. 1); live re-voting was performed for five questions whilst no live re-voting was recommended for the remaining 53 questions with results of prevoting being reported. The answers yes, no or abstain applied to 54 statements or questions whilst the single most appropriate answer from a list of options applied in 12. Simple majority was defined by agreement among 51-75% of participants and consensus by agreement above 75%. Abstaining was recommended when panel members had any conflict of interest or considered the question not to be clear, outside their expertise, or the correct answer was missing. All abstentions were reported and included in percentages unless otherwise stated.
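The agreement thresholds above lend themselves to a simple tallying rule. Below is a minimal Python sketch of how a single question's voting result can be classified under these definitions; the function name and the example counts are illustrative and are not part of the OPBC protocol.

# Minimal sketch of the voting-outcome classification used above.
# Thresholds follow the text: simple majority = 51-75% agreement,
# consensus = agreement above 75%. Names and counts are illustrative.

def classify_outcome(yes: int, no: int, abstain: int) -> str:
    """Classify a question's result; abstentions are included in the
    denominator, as the text specifies unless otherwise stated."""
    total = yes + no + abstain
    if total == 0:
        return "no votes"
    top = max(yes, no) / total * 100  # largest share of agreement
    if top > 75:
        return "consensus"
    if top >= 51:
        return "majority"
    return "no majority"

print(classify_outcome(yes=48, no=8, abstain=3))  # -> consensus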
Report
Questions, answers and content of discussions were placed in context with current published evidence in the form of this report. Specific details of the literature search were scrutinised by chairs and expert representatives with inclusion of additional references cited in articles identified through searches of personal files. The report was circulated among all panellists as part of an iterative process until agreement was reached on the precise wording of each question such that this reflected the strength of panel support for each recommendation. Voting results are shown graphically and as exact numbers.
Results and discussion
Consensus agreement was reached on 20 questions, majority agreement on 21, no consensus and no majority on a further 21 with the strength of agreement differing between panellists and members in four questions (Figs. 1-5 and 7, and appendix figure E1). A total of 73 panellists completed the pre-voting questionnaire; 59 panellists and 52 members participated in live conference voting.
Nipple-and skin sparing mastectomy
Both OPBC panel and audience stated with strong consensus that NSM is not contraindicated when PMRT is planned (question (q) 1, Fig. 1). There was broad agreement that PMRT can be associated with hypopigmentation and shrinkage of the nipple-areola complex (NAC; q1, Fig. 2). A majority of both panel and audience felt that planned or anticipated PMRT should not usually have any impact on choice of skin incision (q2, Fig. 2). However, the panel acknowledged consistent observations in the literature that type of incision is linked to risk of complications and noted that the 2018 OPBC panel considered location of incision to be a risk factor for severe mastectomy flap necrosis [14,17,18]. There was no agreement regarding the use of NSM in conjunction with skin reduction and/or fashioning of NAC pedicles or free nipple grafting for large ptotic breasts (q1a-d, appendix figure E1); a strong majority of both panel and audience raised concerns about aesthetic results when offering NSM to this group of patients without skin reduction (q2, Fig. 1). Importantly, there was panel consensus that attempts to perform a less radical form of NSM when PMRT is planned should be avoided (q3, Fig. 2). Thickness of mastectomy flaps cannot be surgically modulated based on need for PMRT; this is pre-determined by patient anatomy and depth of the oncologic plane [19].
Type of breast reconstruction
There was general consensus that PMRT increases the risk of complications following all types of implant-based BR (q1, Fig. 3) in agreement with the published literature [11,13,20]. Interestingly, a majority also held the view that PMRT significantly increases complication risk after immediate autologous BR despite results of the Mastectomy Reconstruction Outcomes Consortium (MROC) study (q2a-e, appendix figure E1) [13]. During the conference, one of the authors of this prospective multicentre cohort study discussed the report, which compared complications and patient-reported outcomes (PROs) for 622 irradiated and 1625 non-irradiated patients undergoing implant-based and autologous BR between 2012 and 2015. Among patients who underwent autologous BR, PMRT did not increase the risk of complications. Among patients who received PMRT, autologous reconstruction was associated with lower risk of complications than was implant-based BR (OR = 0.47, 95% CI = 0.27 to 0.82, p = 0.007) and a higher BREAST-Q satisfaction with breasts score (63.5 vs 47.7; p = 0.002). The measurable impact of PMRT on QoL after implant-based BR was confirmed by another large survey of breast cancer survivors [21]. Following extensive discussion of these data, a strong majority of both panel and audience agreed that the overall long-term risk of complications in the setting of PMRT is lower after immediate autologous reconstruction compared to implant-based BR (q2, Fig. 3). When asked about timing of autologous BR in the setting of PMRT, the panel clearly favoured immediate (direct to autologous BR) or delayed-immediate (immediate use of temporary implant or expander until delayed autologous BR) over fully delayed autologous reconstruction (q3, Fig. 3). In general, autologous BR options were preferred over all implant-based BR options in the setting of PMRT (q4, appendix figure E1). Nevertheless, the panel strongly felt that planned or anticipated PMRT is not an absolute contraindication for any type of BR (q3a-h, appendix figure E1).
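For readers unfamiliar with the statistic quoted above, the sketch below shows how an odds ratio and its 95% confidence interval are derived from a 2x2 table of complications by reconstruction type. The counts are invented for illustration and are not the MROC data.

# Sketch of an odds ratio with 95% CI from a 2x2 table.
# Counts below are hypothetical, not taken from the MROC study.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b: events/non-events in group 1; c/d: same in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

print(odds_ratio_ci(30, 170, 60, 140))  # e.g., autologous vs implant-based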
Major heterogeneity in clinical practice was evident for implant-based BR in the setting of PMRT. No majority or consensus agreement was reached in terms of recommendations for type, timing or implant choice (Fig. 3). Furthermore, panellists disagreed on whether pre-pectoral implant-based BR is associated with a higher risk of complications and failure rates than sub-pectoral implant-based BR in the context of PMRT (q5, Fig. 3). A majority of the panel considered the use of immediate one-stage pre-pectoral implant-based BR to be compatible with PMRT whilst more of the audience displayed uncertainty on this point (q3, Fig. 1).
Timing of breast reconstruction
A strong panel majority recommended waiting for a minimum of 6-12 months after initial surgery in the setting of PMRT, both before delayed autologous BR and exchange of tissue expander for a permanent implant (q1 and 2, Fig. 4). During discussion, the panel emphasized that the optimal timing of delayed autologous reconstruction should be individualized (q4, Fig. 1) and also recommended waiting for 6-12 months before performing fat grafting. The latter was recommended as a method for improving outcomes after both autologous and implant-based BR (q3-5, Fig. 4). The panel was divided on the issue of irradiation of the tissue expander or the permanent implant in two-stage implant-based BR (with or without adjuvant chemotherapy; q6 and 7, Fig. 4). Indeed, several large series have shown that favourable outcomes can be achieved with implant-based BR in the context of radiotherapy using either timing strategy for the two-stage approach [22,23]. Although the panel acknowledged that there are no specific indications for neoadjuvant radiotherapy in routine clinical practice, there was a difference of opinion on delayed implant-based BR after PMRT (q8 and 9, Fig. 4). A majority of panellists who perform delayed implant-based BR discouraged use of highly cohesive implants, smooth implants, polyurethane implants and synthetic mesh in efforts to reduce complications, while advocating use of biologic mesh and fat grafting for purposes of delayed implant-based BR (q6a-e and h, appendix figure E1). Nonetheless, there was no consensus on pre- versus sub-pectoral implant positioning in this setting (q6f and g, appendix figure E1).
Special considerations: research and outcomes
Almost all panellists acknowledged current trends toward increasing use of BR in the setting of PMRT (q1, Fig. 5) [10]. The panel endorsed the need for prospective studies to optimize surgical and radiation treatments and conceded that the poor quality of available data broadly precludes evidence-based recommendations at this time (q2 and 3, Fig. 5). Of note, the OPBC already ranked the question on the optimal type of reconstruction in the setting of planned adjuvant radiotherapy as the top knowledge gap in the field during the 2019 consensus conference [15]. A randomized controlled trial (RCT) design, as suggested by the scientific secretaries at the time, did not even achieve a majority recommendation from the panel during two rounds of voting; it was considered inappropriate, mostly owing to lack of feasibility. The study design was then adjusted according to the panel discussion into a prospective cohort study with propensity score matching and patient-reported satisfaction with breast, assessed by the BREAST-Q questionnaire at two years, as primary outcome. The question on the optimal timing of reconstruction in the setting of planned adjuvant radiotherapy was ranked as the second most important priority in 2019. Therefore, the study design was adjusted and the panel finally reached consensus to recommend a prospective registry jointly addressing type and timing, with the present project focusing on this important topic. This year, the OPBC voting results stressed the need for phase III RCTs to specifically address the optimal timing of implant-based BR, the positioning of implants and the use of adjunctive mesh. Of note, multiple observational studies over the past three years on pre- versus sub-pectoral implant-based BR have predominantly shown either no difference or marginally favoured pre-pectoral positioning [24][25][26][27][28][29][30][31][32][33]. However, most were small, retrospective and single-centre studies, with only a few prospective or multicentre studies [25,26,28]. The OPBC-02/PREPEC trial is a pragmatic multicentre RCT designed to investigate QoL two years after pre- versus sub-pectoral implant-based BR and has currently randomized 245 of a total of 372 patients at 22 breast centres in 6 countries [34]. One of the formal substudies prospectively investigates the impact of pre- versus sub-pectoral implant-based BR on risk of early complications. Rates of unplanned reoperation were reported to be as high as 59% after immediate implant-based BR in the setting of PMRT [35]. Until risk profiles are better understood and strategies to reduce morbidity are optimized, the panel endorsed the viewpoint that patients undergoing implant-based BR must be fully informed and consent to the possibility of increased risk of complications in the setting of planned PMRT (q4, Fig. 5). Panellists and members could not agree on an acceptable upper limit for failure rate at two years after implant-based BR in daily practice (5% vs 10% vs 15%; q6, Fig. 1).
Post-mastectomy radiation therapy
A majority of the panel felt that immediate BR has the potential to affect oncologic outcomes by delaying adjuvant therapy due to complications (q1, Fig. 6). Clinical studies are inconsistent in reports of how postoperative complications affect recurrence and survival in patients undergoing immediate BR [42][43][44][45]. Indeed, one of the largest studies showed that patients with postoperative complications had significantly worse disease-free survival than those without complications (hazard ratio (HR) 2.25; P = 0.015) [45]. However, this remained significant in patients who received adjuvant therapy without delay (8 weeks or less after surgery; HR 2.45; P = 0.034). After intense discussion of this topic, the question was re-phrased to ask whether immediate BR impairs oncologic outcomes by delaying adjuvant therapy in clinical practice. About half of panellists and members rejected that statement (q8, Fig. 1), and it was discussed that, whilst there may be delays in some patients with potential impact on oncological safety, the average delay following PMBR is overall not clinically significant.
There was major disagreement regarding whether immediate BR with creation of a breast mound compromised the accuracy of radiation dosimetry in terms of target coverage and normal tissue dose irrespective of modern radiotherapy techniques (q2, Fig. 6). Similarly, there was disagreement as to whether bilateral placement of implants impairs PMRT planning and quality of PMRT delivery (q3, Fig. 6). Indeed, early experience with immediate BR resulted in compromised target coverage and/or dose to organs at risk in case of PMRT. This was most apparent for irradiation of left-sided tumours, internal mammary nodes, and for cases of bilateral reconstruction [46]. Later reports suggested that correct target volume definition and modern radiation techniques can reduce the risks posed by BR, be this unilateral or bilateral [47][48][49]. To date, various measures can be applied to minimize dosage to organs at risk whilst ensuring adequate coverage of target volumes, such as deep inspiration breath hold with or without continuous positive airway pressure (CPAP) [50,51]. Techniques for PMRT continue to evolve and routine use of a bolus for mastectomy cases is controversial as this may be associated with increased toxicity without improving local control [52]. Therefore, current European consensus guidelines do not recommend a bolus unless deemed necessary to ensure that the therapeutic dose of irradiation adequately covers those areas at high risk for recurrence, e.g., in skin-invading cancer [53]. Moreover, data on safety and efficacy in the setting of breast reconstruction are lacking [54]. Nonetheless, a boost in this setting was commonly practiced to enhance radiation dosage to the mastectomy scar in order to reduce local recurrence [55]. A study by Naoum et al. aimed to evaluate whether a chest wall boost was independently associated with reconstructive complications [55]. The study cohort included patients who had delayed reconstruction procedures. Scar boost was significantly linked with higher rates of infection, skin necrosis, and implant exposure. Furthermore, a boost dose was independently associated with a higher risk of complete implant failure and addition of a boost did not improve local tumor control, even among high-risk subgroups. Therefore, routine use of a boost or bolus for PMRT cases with or without reconstruction is not recommended. It is mandatory that radiation planning is tailored to the surgical procedure with awareness of potential adverse radiation effects on BR and adherence to international guidelines [53,[56][57][58].
In contemporary practice, the type of BR is usually determined by body habitus, patient preference, and expertise of the surgeon. PMRT planning is rarely taken into account but close liaison between the surgical and radiation teams from the outset will facilitate optimal clinical decision-making in terms of BR and PMRT. In real-world practice, shape and size of the reconstructed breast mound can challenge PMRT planning and dose delivery (Fig. 7). Additionally, in the case of an expander with a metallic port, the ability to determine the accurate dose distribution and to deliver RT accurately may be hindered [59].
Fig. 7: Axial view of radiation CT planning of a young patient who underwent bilateral mastectomy for left-sided breast cancer and immediate implant-based breast reconstruction. The size, shape and position of the reconstruction challenged the delivery of radiation to the left breast and regional lymphatics. Radiation is a trade-off between the objectives of target volume coverage and exposure of organs at risk. The radiation technique affects the interplay between these objectives (e.g., low dose bath to the lung, dose to the contralateral breast) but cannot escape the physical properties of the radiation beam.
Bearing in mind the impact of reconstructed breast volume on PMRT delivery, the panel also addressed the issue of volume in relation to tissue expanders. About half of both panellists and members opted for full expansion of the expander before PMRT in the case of unilateral two-stage BR. However, the others were divided between rejection and abstention. This reflected a degree of controversy and uncertainty (q4, Fig. 6), which was more apparent when asking whether the contralateral expander should be deflated after bilateral two-stage BR (q5, Fig. 6). From a radiation perspective, the volume of the expander at the time of CT planning and during irradiation should be maintained, as dosimetry is based on the target volume at the time of CT planning. Complete inflation can hinder PMRT planning and necessitate deflation of the expander prior to PMRT. Modern radiation techniques can ameliorate but not eliminate the physical properties of the radiation beam [60,61]. Use of volumetric-based PMRT and advanced radiation techniques to overcome a "non-anatomical" protruding reconstructed breast may result in unnecessary exposure of organs at risk and a low-dose bath of radiation (leading to potential toxicity, late heart morbidity and risk of secondary cancers) [60,61]. Half of the panel rejected the statement that, irrespective of the availability of modern radiotherapy techniques, the type of immediate breast reconstruction may influence the effectiveness of PMRT (q6, Fig. 6). However, there was consensus among panellists that the type of immediate BR affects overall risk of complications with PMRT, irrespective of modern radiotherapy, but PMRT techniques will impact upon final aesthetic outcome (q7 and q8, Fig. 6).
Conclusions
During the 2021 OPBC consensus conference, a large international panel comprising breast surgery specialists, leading radiation oncologists and patient advocates was convened to systematically develop recommendations for mastectomy, BR and PMRT. The panel agreed that surgical technique for NSM/SSM should not be modified when PMRT is planned; it favoured the use of autologous over implant-based BR in the setting of PMRT due to lower long-term risk of complications and recommended immediate and delayed-immediate approaches. The panel strongly felt that PMRT is not an absolute contraindication for implant-based BR despite higher overall rates of complications. Nonetheless, no specific recommendations were made regarding implant positioning, use of mesh or timing due to absence of high-quality evidence to guide treatment. The panel encouraged routine use of pre- and postoperative photographs and endorsed patient-reported outcomes in clinical practice. It was acknowledged that shape and size of the reconstructed breast can be a geometric challenge for radiotherapy planning and the importance of PMRT techniques in determining the final aesthetic outcome after immediate BR was emphasized. Moreover, the panel unanimously supported the need for prospective studies, especially randomised trials, and proposed that surgical and radiation oncology teams work together at the outset to evaluate optimal sequencing and techniques for integrating PMRT with BR for each patient.
|
2022-03-20T15:27:59.318Z
|
2022-03-01T00:00:00.000
|
{
"year": 2022,
"sha1": "14cd0903e7968bb169d16fc43078ab51a21382d6",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "fa404b4c0cd84ae4d438fe0f386ec39985d7e1a0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
13898217
|
pes2o/s2orc
|
v3-fos-license
|
A Polydnavirus ANK Protein Acts as Virulence Factor by Disrupting the Function of Prothoracic Gland Steroidogenic Cells
Polydnaviruses are obligate symbionts integrated as proviruses in the genome of some ichneumonoid wasps that parasitize lepidopteran larvae. Polydnavirus free viral particles, which are injected into the host at oviposition, express virulence factors that impair immunity and development. To date, most studies have focused on the molecular mechanisms underpinning immunosuppression, whereas how viral genes disrupt the endocrine balance remains largely uninvestigated. Using Drosophila as a model system, the present report analyzes the function of a member of the ankyrin gene family of the bracovirus associated with Toxoneuron nigriceps, a larval parasitoid of the noctuid moth Heliothis virescens. We found that TnBVank1 expression in the Drosophila prothoracic gland blocks the larval-pupal molt. This phenotype can be rescued by feeding the larvae with 20-hydroxyecdysone. TnBVANK1 localization is restricted to the cytoplasm, where it interacts with Hrs- and Alix-marked endosomes. Collectively, our data demonstrate that the TnBVANK1 protein acts as a virulence factor that causes the disruption of ecdysone biosynthesis and developmental arrest by impairing the vesicular traffic of ecdysteroid precursors in the prothoracic gland steroidogenic cells.
Introduction
Parasitic wasps represent the largest group of parasitoid insects which attack and parasitize a number of insect species, exploiting different developmental stages [1]. These parasitic insects have a peculiar injection device, the ovipositor, which is used to deliver the egg along with host regulation factors that primarily disrupt the host immune reaction and endocrine balance to create a suitable environment for the development of their progeny [2,3]. These host regulation factors include viruses of the Polydnaviridae family, obligate symbionts of ichneumonid and braconid wasps attacking larval stages of lepidopteran hosts, and respectively classified in the genera Ichnovirus (IV) and Bracovirus (BV) [4].
Polydnaviruses (PDVs) [5,6] are integrated as proviruses in the genome of parasitoid wasps and their transmission to offspring is strictly vertical, through the germline. The genome encapsidated in the viral particles is made of multiple circular dsDNA segments, which have an aggregate size ranging between 190 and 600 kb. PDVs only replicate in the epithelial cells of the calyx, a specific region of the ovary, where they accumulate to a high density to be injected at oviposition along with the venom and the egg. Free PDV particles infect the host tissues without undergoing replication, and express virulence factors that alter host physiology in ways essential for offspring survival [5].
Evolutionary convergence of independent host-virus associations has favored the selection of gene families shared by both IV and BV [7-9]. For example, protein tyrosine phosphatases (PTP) and ankyrin motif proteins (ANK) are widely distributed in many PDVs and expressed to different degrees in virtually all host tissues analyzed so far, indicating that they play a key role in successful parasitism [10].
We know more about the functional bases underlying the immune disguise in parasitized hosts than we do about how the host developmental alteration is induced [3,11]. This is due to the complexity of the developmental mechanisms and to the concurrent action of various virulence factors, which often have redundant and overlapping effects on the regulating gene networks [2,3,12].
One of the best characterized developmental syndromes has been described in the host-parasitoid association Heliothis virescens-Toxoneuron nigriceps (Lepidoptera, Noctuidae - Hymenoptera, Braconidae) [13]. Briefly, in this experimental model the last instar larvae fail to pupate and show a higher nutritional suitability for parasitoid larvae. The developmental arrest is partly due to a marked depression of ecdysone (E) biosynthesis by the prothoracic gland (PG), induced by the infection of the bracovirus associated with T. nigriceps (TnBV). The inhibition of E biosynthesis is further reinforced by the conversion of the very low amounts of 20-hydroxyecdysone (20E) produced to inactive polar metabolites, a transformation mediated by teratocytes, special cells deriving from the parasitoid's embryonic membrane.
The active transcription of TnBV genes in the PG of parasitized tobacco budworm larvae is required to disrupt their ecdysteroid biosynthesis, which remains very low and fails to increase in response to prothoracicotropic hormone (PTTH) stimulation [13,14]. Unraveling the functional role of a specific virulence factor at molecular level is not easy when the natural host is used for these studies, due to the limited availability of genomic information and molecular tools. Therefore we used Drosophila melanogaster as an ideal experimental model to study TnBV genes. With this approach, we started the functional characterization of a member of the viral ankyrin (ank) gene family of TnBV, TnBVank1 [15], showing that the expression of this gene in Drosophila germ cells alters the microtubule network function in the oocyte [16]. In the present study we analyze the effect of TnBVank1 gene expression during Drosophila development. Interestingly, we found that TnBVank1 expression in the PG cells blocks the transition from larval to pupal stage, mimicking the developmental arrest observed in H. virescens larvae parasitized by T. nigriceps.
Materials and Methods
The stocks used for Gal4-driven expression of UASp-TnBVank1 referred to in Figure S2 and listed in Table S1 are from the Bloomington Stock Center.
Crosses
Females UASp-TnBVank1 were crossed to males of the different Gal4 lines. As a control, females yw67c23 were crossed to males of the same Gal4 lines.
Larval length measurements
Five UASp-TnBVank1/+;UASp-TnBVank1/+;hairy-Gal4/+ larvae at different days after egg deposition (AED) and five control larvae were ice-anesthetized and photographed using a Nikon Eclipse 90i microscope. Images were taken at 4X magnification and the larval length was measured with NIS-Elements Advanced Research 3.10 software.
20E titer
Five larvae at different developmental stages were collected, washed with PBS and immediately frozen in liquid nitrogen. Samples were supplemented with 200 µl of methanol, homogenized and transferred into 1.5 ml plastic tubes. After 10 minutes of centrifugation (12,000 rpm at 4°C) the supernatant was collected into a new tube; the precipitate was re-extracted with 200 µl of methanol and the supernatant was pooled with the previous one. After 30 minutes on ice, the samples were centrifuged under the same conditions. Samples were dried to remove methanol and then dissolved in borate buffer. The standard curve was generated according to the standard process of the RIA protocol [17] and then the 20E titer in the samples was calculated.
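The RIA quantification follows the protocol of reference [17]; as a rough illustration of the final step, the sketch below fits a hypothetical standard curve and inverts it to read a sample titer. The four-parameter logistic model, the example standards and all names are assumptions made for illustration only.

# Hedged sketch of reading a 20E titer off an RIA standard curve.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    """4-parameter logistic: RIA signal (e.g., %B/B0) vs. 20E amount."""
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** hill)

# Hypothetical standards: known 20E amounts (pg) and measured signal.
std_x = np.array([10, 30, 100, 300, 1000, 3000], dtype=float)
std_y = np.array([92, 80, 58, 35, 18, 8], dtype=float)

params, _ = curve_fit(four_pl, std_x, std_y, p0=[5, 95, 200, 1.0], maxfev=10000)

def titer_from_signal(signal, p):
    """Invert the fitted curve to get the 20E amount for a sample signal."""
    bottom, top, ec50, hill = p
    return ec50 * ((top - bottom) / (signal - bottom) - 1.0) ** (1.0 / hill)

print(titer_from_signal(50.0, params))  # pg 20E in the measured aliquot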
Rescue experiment
UASp-TnBVank1/+;UASp-TnBVank1/+;phantom-Gal4,UAS-mCD8::GFP/+ and control larvae were collected at 106 h AED and placed in three groups of ten individuals at 25°C in new tubes supplemented with 20E (Sigma) dissolved in ethanol at 1 mg/ml. Control larvae were fed only with ethanol.
Prothoracic gland and cellular size measurements
For measurements of PG area and cell size, confocal images of 50 PGs taken at 40X magnification were quantified with ImageJ software.
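As an illustration of the quantification performed in ImageJ, the sketch below computes the area of a segmented region from a binary mask and a pixel calibration; the calibration value and the toy mask are invented for the example.

# Minimal sketch of area quantification: count foreground pixels of a
# segmented gland mask and convert with the image calibration. The
# calibration value is an illustrative assumption, not from the paper.
import numpy as np

def region_area(mask: np.ndarray, um_per_pixel: float) -> float:
    """Area (um^2) of a binary mask given the image calibration."""
    return int(mask.sum()) * um_per_pixel ** 2

pg_mask = np.zeros((512, 512), dtype=bool)
pg_mask[100:300, 150:350] = True               # toy segmented PG region
print(region_area(pg_mask, um_per_pixel=0.4))  # -> 6400.0 um^2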
Statistical analysis
Statistical comparison of mean values was performed by unpaired t-test, using GraphPad Prism 4 software.
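The same comparison can be reproduced outside GraphPad Prism; a minimal sketch using SciPy is shown below, with invented measurements standing in for the real data.

# Unpaired, two-sided t-test of two groups of measurements.
# The sample values are hypothetical, for illustration only.
from scipy import stats

control = [2500, 2610, 2590, 2555, 2645]   # e.g., larval lengths (um)
tnbvank1 = [2640, 2700, 2720, 2660, 2680]

t_stat, p_value = stats.ttest_ind(control, tnbvank1)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")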
Immunofluorescence microscopy
Larvae were dissected at room temperature in 1x PBS pH 7.5 (PBS) and fixed in 4% formaldehyde for 20 minutes at room temperature. After three washes in PBS, larvae were permeabilized in PBT (PBS pH 7.5 + 0.3% Triton X-100) for 1 h, washed three times for 5 minutes each in PBT and for 10 minutes in PBT + 2% BSA solution. After that, the larvae were incubated overnight at 4°C with primary antibodies diluted in PBT + 2% BSA. Larvae were washed three times for 10 minutes each in PBT, for 10 minutes in PBT + 1% BSA solution and incubated for 2 hours at room temperature on a rotating wheel with secondary antibodies diluted in PBT + 1% BSA. After several washes in PBT, the ring glands were dissected and mounted on microscopy slides in Fluoromount G (Electron Microscopy Sciences). Subsequently, samples were analyzed by conventional epifluorescence with a Nikon Eclipse 90i microscope or with a TCS SL Leica confocal system. Images were processed using Adobe Photoshop CS4 and Adobe Illustrator CS4.
TRITC-Phalloidin staining was carried out, after incubation with secondary antibodies, by washing larvae three times with PBS and then incubating them for 20 minutes with TRITC-Phalloidin (40 µg/ml in PBS, Sigma).
For Propidium Iodide nuclear counterstaining, the larvae were treated with RNase A (400 µg/ml in PBT, Sigma) overnight at 4°C. After three washes in PBT, the larvae were labeled for 2 hours with Propidium Iodide (10 µg/ml in PBT, Molecular Probes).
Filipin and Oil Red O staining
Ring glands were fixed in 4% formaldehyde for 20 minutes and washed three times in PBS for 5 minutes each. Samples were stained with 50 µg/ml filipin (Sigma) for 1 h or incubated in a 0.06% Oil Red O (Sigma) solution for 30 minutes. After incubation, tissues were washed twice with PBS before mounting in Fluoromount-G. Samples were analyzed by conventional epifluorescence with a Nikon Eclipse 90i microscope or with a Nikon Eclipse 90i confocal microscope. Images were processed using Adobe Photoshop CS4 and Adobe Illustrator CS4.
Colocalization analysis
Thresholds of confocal images were set in Adobe Photoshop CS4 to exclude background staining. 509 Hrs-positive vesicles were analyzed for the TnBVANK1 and Hrs staining; 443 TnBVANK1-positive vesicles were analyzed for the TnBVANK1 and Alix staining; 118 Hrs-positive vesicles were analyzed for the Alix and Hrs staining.
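As a sketch of how a Pearson colocalization coefficient of the kind reported below can be computed from two registered confocal channels, the following snippet thresholds away background and correlates the remaining pixel intensities. The thresholding strategy and the synthetic images are illustrative assumptions; the published analysis was performed on manually thresholded images.

# Pixel-wise Pearson colocalization over thresholded two-channel images.
import numpy as np

def pearson_coloc(ch1: np.ndarray, ch2: np.ndarray, thresh: float = 0.0) -> float:
    """Pearson's coefficient over pixels above background in either channel."""
    mask = (ch1 > thresh) | (ch2 > thresh)   # exclude background staining
    a, b = ch1[mask].astype(float), ch2[mask].astype(float)
    a -= a.mean()
    b -= b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Usage with two registered channels (e.g., TnBVANK1 and Hrs):
rng = np.random.default_rng(0)
signal = rng.random((256, 256))
print(pearson_coloc(signal, signal * 0.9 + 0.05))  # near-perfect colocalization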
Results
Expression of TnBVank1 in the prothoracic gland induces developmental arrest at the third larval instar
TnBVank1 gene expression during Drosophila development was targeted with the GAL4/UAS binary system [25]. We used a transgenic Drosophila stock carrying two copies of the TnBVank1 gene under the control of the UASp sequences [16]. Expression of this transgene was induced using different Gal4 drivers. Our first analysis expressed the TnBVank1 transgene during embryonic and larval development using the hairy-Gal4 driver (h-Gal4) [25]. TnBVank1 expression did not appear to affect embryonic and larval development but, interestingly, all larvae failed to pupate and died after an extended third instar larval life, which lasted up to three weeks (Figure 1A). By measuring larval size, we found that four days AED the larvae expressing TnBVank1 did not significantly differ from the control yw;h-Gal4 (n = 5; t = 0.8557; NS) (Figure 1B). Moreover, they continued to feed and significantly increased in size during their prolonged larval life, reaching the maximal length at eighteen days (Figure 1A,C; n = 5; t = 6.765; p < 0.0001), while controls regularly pupated on day six AED (Figure 1A). Since h-Gal4 is expressed in various larval tissues, the observed developmental arrest suggested to us that TnBVank1 expression could reasonably have affected the function of the ring gland, the major site of production and release of developmental hormones.
The Drosophila ring gland (Figure 2A) consists of the prothoracic gland (PG), which is composed of steroidogenic cells synthesizing E, the corpora allata (CA), which produce the juvenile hormone, and the corpora cardiaca (CC), which play a key role in the regulation of metabolic homeostasis [26]. We assessed whether the targeted expression of the TnBVank1 gene using different ring gland Gal4 drivers (Figure 2B) was able to reproduce the effect observed when the transgene was expressed using h-Gal4. When the TnBVank1 gene was expressed in both CA and PG cells, using the P0206-Gal4 driver, all the larvae failed to pupate and showed the same phenotype obtained with h-Gal4. Conversely, when the august21-Gal4 (aug21-Gal4) driver specifically targeted the expression of TnBVank1 in the CA, no effects on developmental timing were observed and regular progeny were obtained. Moreover, we specifically induced expression of the TnBVank1 gene in the PG using the phantom-Gal4 (phm-Gal4) driver, which is strongly expressed in this gland. None of the larvae pupated and they had an extended larval life, as shown using the P0206-Gal4 driver (Figure 2B). These data indicate that TnBVANK1 impairs PG function, causing the block of the larval-pupal transition.
We specifically expressed TnBVank1 in several other tissues, using different Gal4 drivers, and monitored the timing of development and the adult phenotype, which were unaffected in all cases (Table S1).
Collectively, our data suggest that TnBVANK1 expression has the potential to interfere with steroid biosynthesis, as further indicated by the targeted expression of this viral ANK protein in the PG, which results in developmental arrest of mature larvae in the absence of a systemic injury response [27].
Ecdysteroid biosynthesis is impaired in TnBVank1 larvae
To assess whether the developmental arrest induced by TnBVANK1 was due to a reduced level of 20E, we measured the whole-body 20E titer in larvae expressing TnBVank1 under the phm-Gal4 driver and in control larvae (Figure 2C). At 110 h AED, wild type third instar larvae enter the wandering stage and, at 25°C, they become white pre-pupae at 120 h, after the surge of a 20E peak [17]. At 120 h AED and during their abnormally extended larval life, the 20E levels measured in phm-Gal4>TnBVank1 larvae are extremely reduced and significantly lower than those measured both in UASp-TnBVank1 larvae (n = 5; t = 10.12; p < 0.0001) and in phm-Gal4/TM6B larvae (n = 5; t = 8.196; p < 0.0001).
To further demonstrate that the block of the transition to the pupal stage shown by the phm>TnBVank1 larvae (hereinafter TnBVank1 larvae) was actually due to a low level of 20E, we carried out a 20E-feeding rescue experiment. Third instar TnBVank1 larvae were fed with yeast paste containing 20E dissolved in ethanol at 106 h AED, just before the onset of the ecdysteroid peak occurring in the wild type. As expected, at 120 h AED, 70% of control larvae started to pupate and within the following 20 h all of them reached the pupal stage (n = 30). Pupation of TnBVank1 larvae fed with 20E followed an almost identical pattern, with 100% pupation (n = 30) attained only 1 day later, but failed to progress to the pharate stage (Figure 2D). Instead, TnBVank1 larvae treated only with yeast and ethanol persisted as third instars (n = 30). This result confirms that the developmental arrest of TnBVank1 larvae is due to a reduced level of 20E. However, the rescued pupae failed to develop into adult flies. This may be due to the fact that the large peak of 20E required to trigger metamorphosis is not generated by TnBVank1 pupae and obviously cannot be supplied with food at this developmental stage.
It has been reported that a positive feedback is required for the transcriptional up-regulation of enzymes acting at late steps in the ecdysone biosynthetic pathway [28]. Therefore we analyzed the expression of Disembodied (Dib), the C22-hydroxylase acting at a downstream step of the pathway, which appeared strongly reduced in all TnBVank1 PGs analyzed (n = 60) compared to the control (Figure 2E,F). These data are in agreement with the low levels of 20E detected in TnBVank1 larvae.
TnBVANK1 affects PG morphology
Using a polyclonal antibody raised against two synthetic peptides of TnBVANK1 [16], we detected the distribution of the TnBVANK1 protein in TnBVank1 PGs of five days AED larvae. As shown in Figure 3A-C, the protein was strongly expressed and present only in the cytoplasm of PG cells, confined to stroke-shaped particles. We next analyzed the TnBVank1 PG gross morphology. To visualize the PG we used the phm-Gal4,UAS-mCD8::GFP stock. PGs from control larvae (Figure 3D) were significantly larger (n = 50; t = 50.41; p < 0.0001) (Figure 3F) than TnBVank1 PGs (Figure 3E). In addition, the TnBVank1 PG cells showed a cytoplasmic rather than the expected membrane distribution of mCD8::GFP (Figure 3C,E) [29]. Measurements of the PG cell area did not show any reduction induced by TnBVank1 expression (n = 50; t = 1.262; NS) (Figure 3G). Therefore, the observed size difference of the PG can be attributed to a reduction of the cell number. We then assayed whether apoptosis occurs, using a Cleaved Caspase-3 antibody [30] and TUNEL labeling [31]. The Caspase-3 activity (Figure 3I,J; n = 60) and TUNEL-positive staining (Figure 3L,M; n = 60) found in a few cells of TnBVank1 PGs suggested that the occurrence of cell death during development can partly account for this difference, which could be related to the developmental arrest induced by TnBVANK1. However, the possibility that this protein can also disrupt PG activity cannot be ruled out. Therefore, to assess the relative contribution of these two effects, which are not mutually exclusive, we expressed TnBVank1 in PG cells at different time points during larval life, using a temperature-sensitive form of the Gal4 repressor Gal80, Gal80ts [32], which allows regulation of phm-Gal4 activity. TnBVank1 and control larvae were initially raised at 21°C, and then shifted to the restrictive temperature (31°C) at specific time points (96 h, 72 h and 48 h AED) to promote Gal4 activity. The temperature shift did not affect the proper development of the control larvae, which pupated normally. Conversely, the larvae expressing TnBVank1 failed to pupate, increased in size and survived for an extended period.
For each time point we also analyzed the PG size at 120 h AED (Figure 3N). When TnBVank1 expression was triggered at 96 h or 72 h, the PG size was not significantly different from controls (respectively n = 10; t = 0.07636; NS and n = 10; t = 1.336; NS). In contrast, earlier induction of transgene expression, at 48 h AED, strongly affected the PG size, which appeared significantly reduced (n = 10; t = 11.68; p < 0.0001). In addition, we examined whether inhibiting apoptosis by ectopic expression of p35 [33] would rescue the phenotype produced by the expression of TnBVank1 in the PG. Coexpression of UAS-p35 and UASp-TnBVank1 in the same PG cells with the phm-Gal4 driver did not rescue the developmental arrest phenotype (n = 58). Collectively, these data indicate that the developmental arrest induced by TnBVank1 does not depend on the reduced PG size triggered by apoptosis, but on its capacity to disrupt PG functioning when expressed before the production of the 20E peak.
TnBVANK1 affects the cytoskeletal network in the PG cells
The altered TnBVank1 PG cell morphology and the associated mislocalization of mCD8::GFP prompted us to analyze the cytoskeletal network in these cells.
We investigated F-actin and α-tubulin distribution in TnBVank1 PGs and observed an altered cytoskeletal organization in all analyzed glands (n = 60). As shown by phalloidin staining (Figure 4A-D), cortical actin did not appear regularly distributed in TnBVank1 PG cells, in which thick masses of actin filaments were detected (Figure 4C,D). The microtubule network was investigated by analyzing the distribution of an α-tubulin-GFP fusion protein, which was coexpressed with TnBVank1 in the PG. Compared to the control, expressing only the α-tubulin-GFP protein (Figure 4E,F), the cytoskeleton of the TnBVank1 PG cells appeared strongly affected, as shown by the formation of thick bundles of microtubules (Figure 4G,H). The dynamic function of the microtubule network was then analyzed in TnBVank1 PGs (n = 60) by assessing the distribution of the minus-end-directed microtubule motor dynein, using an anti-Dynein heavy chain antibody [20]. Compared to the control (Figure 4I,J), cells of TnBVank1 PGs displayed a reduced cortical distribution of dynein, along with some large dynein dots (Figure 4K,L). These data indicate that the whole cytoskeletal network is markedly altered in the PG cells expressing TnBVANK1. We also analyzed Gal80ts-TnBVank1 PG cells at different time points (96 h AED, 72 h AED and 48 h AED) and observed that F-actin organization is strongly altered when larvae were shifted to the restrictive temperature at 48 h AED. This suggests that the prolonged expression of TnBVank1 during development causes the disruption of the cytoskeleton (Figure S1). Moreover, as discussed above, no adult phenotypic effect or developmental delay was produced by the expression of TnBVank1 in different tissues, using a wide range of tissue-specific Gal4 drivers (Table S1). This suggests that the cytoskeletal structure is not affected in all tissues, as can be observed in the fat body (Figure S2). This is further corroborated by our previous study showing that in the Drosophila oocyte TnBVANK1 interferes with proper microtubule and microtubule-motor protein functions [16], without affecting the overall cytoskeletal structure.
TnBVANK1 expression alters the cholesterol trafficking endocytic pathway of PG cells
The observed negative impact of TnBVANK1 on the cytoskeleton of PG cells may reduce the level of ecdysteroid biosynthesis by disrupting the uptake, transport and trafficking of sterols, essential steps for ecdysteroid biosynthesis [34]. Cholesterol, which cannot be synthesized by insects [35], enters the steroidogenic cells through a receptor-mediated low-density lipoprotein (LDL) endocytic pathway [36], which targets cholesterol to the endosomes. It is then transformed into 7-dehydrocholesterol in the endoplasmic reticulum and transported to other subcellular compartments through further metabolic steps of the ecdysteroidogenic pathway [35]. We analyzed lipid vesicular internalization and trafficking in the TnBVank1 PG cells with an Oil Red O staining procedure. In contrast to the control (Figure 5A), in all TnBVank1 PGs analyzed (n = 60) we observed a variable but evident increase in the accumulation of lipid droplets (Figure 5B,C). Then, using filipin, which specifically stains non-esterified sterols [37], TnBVank1 PGs (n = 60) showed a marked cholesterol accumulation in discrete vesicular drops (Figure 5E,F) compared to the control (Figure 5D). These data suggest that TnBVANK1 does not affect lipid uptake, but that the endocytic pathway is in some way disrupted.
The endocytic pathway is organized into three major compartments, each characterized by specific Rab GTPase proteins that can be used as tags for the different endosomes [38]. Early endosomes are enriched in Rab5, late endosomes are associated with Rab7, and Rab11 marks the recycling endosomes. We used antibodies directed against these Rab proteins to investigate the endocytic pathway in PG cells (n = 60 PGs for each experiment) [19]. The cellular distribution of the early (Figure 6A,B) and recycling endosomes (Figure 6C,D) appeared to be comparable between PGs of control (Figure 6A,C) and of TnBVank1 larvae (Figure 6B,D). In contrast, compared to the control (Figure 6E), only few Rab7-positive vesicles were detected in TnBVank1 PGs (Figure 6F). This suggests that TnBVANK1 may somehow affect the endocytic pathway.
We then analyzed the PG distribution of endosomes carrying the Hepatocyte growth factor-regulated tyrosine kinase substrate (Hrs) (Figure 7A). This protein regulates inward budding of the endosome membrane and multivesicular body (MVB)/late endosome formation [22]. Interestingly, quite a few Hrs-marked vesicles in the TnBVank1 PG cells showed the stroke-shaped form associated with TnBVANK1 signals (Figure 7D,E). In addition, most of the immunodetection signals of TnBVANK1 (Figure 7E) colocalized with the Hrs-marked vesicles (Figure 7F; Pearson's coefficient = 0.96 ± 0.06). In contrast, most of the vesicles showing a normal round shape did not colocalize with TnBVANK1. This finding suggests an interaction of TnBVANK1 with endosome-associated proteins, which may partly account for the observed alterations of the endocytic trafficking routes.
MVB formation is controlled by a set of protein complexes, the endosomal sorting complexes required for transport, ESCRT-0 to III, which sequentially associate on the cytosolic surface of endosomes [39]. A partner of the ESCRT proteins, which also regulates the making of MVBs, is the ALG-2-interacting protein X (Alix), first characterized as an interactor of the apoptosis-linked gene protein 2 (ALG-2) [40]. It has been reported that the late endosomal lipid lysobisphosphatidic acid (LBPA) and its partner protein Alix play a direct role in cholesterol export [41]. Therefore, using an antibody directed against Alix, we analyzed the distribution of this protein in control and TnBVank1 PGs (Figure 7B,G). In accordance with its multifunctional activity [42], Alix was found widely distributed in the cytoplasm of wild type cells (Figure 7B) and, as expected, marked some Hrs-positive vesicles (Figure 7C). Interestingly, in the TnBVank1 PG cells the TnBVANK1-positive stroke-shaped structures showed a strong colocalization with Alix (Figure 7H; Pearson's coefficient = 0.99 ± 0.07). In addition, several of these Alix-positive stroke-shaped structures colocalized with Hrs (Figure 7I; Pearson's coefficient = 0.95 ± 0.16), indicating that these are modified endocytic vesicles. This strong interaction of TnBVANK1 with Alix-containing vesicles and the altered cholesterol distribution observed in the PG are convergent pieces of evidence indicating that the cholesterol route was altered. Therefore, the interaction between TnBVANK1 and endosomes specifically affects the endosomal trafficking of sterols, likely limiting their supply to the subcellular compartments where ecdysteroid biosynthesis takes place [35].
Discussion
PDVs are among the major host regulation factors used by parasitic wasps to subdue their hosts, which show immunosuppression and a number of developmental and reproductive alterations associated with disruption of their endocrine balance [3,4,6]. Relatively more studies have addressed the host immunosuppression mechanisms, focusing on virulence factors in the ank gene family largely shared among different taxa [43]. While an immunosuppressive function has been demonstrated for the PDV ank gene family, whether and how these viral genes impact endocrine pathways or other targets has not yet been addressed [11]. Here we report experimental evidence demonstrating the role of a TnBV ank gene in the disruption of E biosynthesis and the induction of developmental arrest. The proteins encoded by PDV ank genes show significant sequence similarity with members of the IκB protein family involved in the control of NF-κB signaling pathways in insects and vertebrates [44]. Because they lack the N- and C-terminal domains controlling their signal-induced and basal degradation, they are able to bind NF-κB and prevent its entry into the nucleus to activate the transcription of genes under κB promoters [15,45,46]. The ank gene family is one of the most widely distributed in PDVs and contains members that are rather conserved across viral isolates associated with different wasp species [10,15,46-48]. These genes likely originate from horizontal gene transfer from a eukaryote, which could be the wasp, the host or another organism. Indeed, the nudiviruses, ancestors of bracoviruses [7], do not encode any gene showing similarity with ank family members. Their multiple acquisition and stabilization in different evolutionary lineages are clearly indicative of the key role they play in successful parasitism. This also suggests that ank genes may be involved in multiple tasks during host parasitization, by influencing different physiological pathways.
Here, we provide experimental data that corroborate this hypothesis for TnBVank1, a gene of the bracovirus associated with the wasp T. nigriceps (TnBV), which parasitizes the larval stages of the tobacco budworm, H. virescens. Using Drosophila as a model system, we show that the TnBVANK1 protein acts as a virulence factor disrupting E biosynthesis (Figure 8) and causes developmental arrest of the larvae, which fail to pupariate. The number of late endosomes is reduced in the TnBVank1-expressing cells and this is concurrent with an interesting change of Hrs-TnBVANK1-positive vesicle morphology. This defective mechanism in MVB and late endosome formation is accompanied by an evident alteration of sterol trafficking, as indicated by the accumulation of lipid- and sterol-rich vesicles.
Cholesterol is processed to free cholesterol by lipases in the endosomal compartment and thereafter moves to other compartments, entering the ecdysone biosynthesis machinery [34]. Recent evidence from mammalian cell studies indicates that the late endosomal lipid LBPA and its partner Alix play a role in controlling cholesterol export from endosomes [41]. Our finding that TnBVANK1 interacts with Alix-positive vesicles and affects sterol delivery suggests that the Alix function in cholesterol export is conserved between Drosophila and mammals.
Our data lead us to hypothesize that in TnBVank1-expressing PG cells cholesterol may be trapped in the MVBs. This block leads to a sterol supply insufficient to reach the ecdysone level necessary to complete development. Interestingly, the fact that TnBVank1 expression in other tissues did not alter development suggests that the impact of TnBVANK1 on cholesterol trafficking may deeply affect the PG cells engaged in an intense steroidogenic activity.
We show that TnBVANK1 disrupts the cytoskeletal structure of PG cells, and this appears to be a PG-specific alteration. Indeed, in our previous work we demonstrated that the targeted expression of this ank gene in Drosophila germ cells alters microtubule network function in the oocyte, as shown by the mislocalization of several maternal cues, without affecting the cytoskeletal structure [16]. Therefore, we cannot exclude that the specific targeted effect of TnBVANK1 on the cytoskeleton function of PG cells may have a negative impact on ecdysteroidogenesis. However, it may also be that the disruption of the cytoskeletal structure of these cells is a downstream consequence of the impaired steroidogenic activity. The altered cell physiology and the consequent accumulation of lipids and sterols may have wide-ranging and more generalized effects on cell architecture/dynamics and survival. In fact, the prolonged expression of TnBVank1 by phm-Gal4 during larval development causes cytoskeletal alterations and also apoptosis of a few cells, which may partly account for the observed reduction of the PG size.
It is interesting to note that the developmental arrest at the L3 larval stage induced by TnBVank1 expression in the PG perfectly mimics the developmental alteration of parasitized tobacco budworm larvae, which can regularly undergo larval molting but ultimately fail to pupate [13,49]. The reduced gland size observed in parasitized larvae and the low basal production of ecdysteroids [14,50] are fully compatible with a general reduction of the biosynthetic activity likely induced by ank genes. However, in naturally parasitized larvae these symptoms are also associated with a disruption of PTTH signaling, which requires active TnBV infection of the PG, where different viral genes are expressed [13,51]. The high similarity of the recorded phenotypes represents a solid background on which to design specific experiments on the natural host. Indeed, the results reported here set the stage for specific in vivo studies in parasitized host larvae, which will have to address the respective roles of different TnBV genes in the suppression of ecdysteroidogenesis.
Figure 1. TnBVank1 expression causes the block of the transition from larval to pupal stage. (A) Light micrographs of yw;h-Gal4 larva and pupa (control) and h-Gal4>TnBVank1 larvae at different days AED. The scale bar is 500 µm. (B) Larval length of different genotypes, at 96 h AED. Five larvae of each genotype were analyzed and as controls we measured the larval length of the yw, h-Gal4 and UASp-TnBVank1 stocks and of yw;h-Gal4. The graph represents mean ± standard deviation (s.d.); there is no significant (NS) length difference between h-Gal4>TnBVank1 (2680 ± 83 µm) and yw;h-Gal4 larvae (2580 ± 82 µm). (C) Larval length of h-Gal4>TnBVank1 increases during the extended larval life. Five h-Gal4>TnBVank1 larvae were measured at different days AED; values are the mean ± s.d. of three independent experiments. The mean values of h-Gal4>TnBVank1 larval length at four and eighteen days AED are shown above the bars. doi:10.1371/journal.pone.0095104.g001
Figure 2. The expression of TnBVank1 in the prothoracic gland affects E biosynthesis. (A) The ring gland includes the prothoracic gland (PG; yellow), the corpora allata (CA; orange) and the corpora cardiaca (CC; red). (B) The expression of the TnBVank1 gene is driven in the different ring gland compartments, highlighted in green, by three Gal4 drivers. P0206-Gal4>TnBVank1, expressed in PG and CA, causes the developmental arrest at the last larval stage; aug21-Gal4>TnBVank1 (CA) does not induce any developmental defects; phm-Gal4>TnBVank1 (PG) blocks the transition from larval to pupal stage. (C) Total 20E titer in five larvae of the UASp-TnBVank1 stock (white bars), phm-Gal4/TM6B (grey bars) and phm-Gal4>TnBVank1 (black bars), at different times (hours AED). In the control stocks UASp-TnBVank1 and phm-Gal4/TM6B, the 20E peak which induces pupariation is present at 120 h AED. Instead, this peak is absent in phm>TnBVank1 larvae at 120 h AED and during the extended larval life. Error bars represent s.d.; *** = p < 0.0001 versus controls (UASp-TnBVank1 and phm-Gal4/TM6B). The mean values of total 20E at 120 h AED of the different genotype larvae are shown above the bars. (D) Feeding TnBVank1 larvae with medium supplemented with 20E induces pupariation (red), while TnBVank1 larvae fed with medium containing ethanol (EtOH) do not reach the pupal stage (green). Values are the mean ± s.d. of three independent experiments. The yw;phm-Gal4 larvae serve as background control (blue). Immunostaining with anti-Dib in yw;phm-Gal4 (E) and TnBVank1 (F) PGs reveals that the expression of Dib is strongly reduced in all TnBVank1 PGs analyzed. Panels E,F are at the same magnification and the reference scale bar of 25 µm is indicated in E. doi:10.1371/journal.pone.0095104.g002
Figure 3. TnBVANK1 distribution in the PG cells and its effects on the PG. The immunolocalization of TnBVANK1 in PG cells (marked with mCD8::GFP, green), analyzed with anti-TnBVANK1 (cyan), shows its presence in stroke-shaped particles (A-C), which are distributed only in the cytoplasm (the nucleus is stained with Propidium Iodide, red) (B,C). (D) At five days AED, the PG of yw;phm-Gal4 larvae, marked with GFP, is significantly larger (+54%) than the TnBVank1 PG (E). (F) The graph represents the mean ± s.d.; 50 PGs were analyzed; *** = p < 0.0001. (G) Measurement of the PG cell area shows no difference between yw;phm-Gal4 and TnBVank1. 50 PGs were analyzed; NS: not significant. (H-J) Immunostaining with anti-Cleaved Caspase-3 (red) or TUNEL labeling (red) in PG cells marked with GFP. In the control yw;phm-Gal4 no caspase or TUNEL signal is detected (H,K), while in TnBVank1 PGs a few cells undergo apoptosis (I,J,L,M). PGs in panels A,D,E,H,I,K,L are at the same magnification and their scale bar, 25 µm, is shown in A. The scale bar in B and C is 5 µm and is shown in B. Boxed regions are magnified in J and M and the reference scale bar is in B. (N) Larvae of yw;tub-Gal80ts/+;phm-Gal4/+ (Gal80ts-phm-Gal4) and UASp-TnBVank1;UASp-TnBVank1/tub-Gal80ts;phm-Gal4/+ (Gal80ts-TnBVank1) were raised at 21°C (cyan) for different time intervals, then shifted to 31°C (red) and dissected at 120 h AED. PG size from larvae incubated at 21°C until 96 h AED or until 72 h AED shows no significant (NS) differences from control larvae. PG size is strongly reduced in Gal80ts-TnBVank1 larvae incubated at 21°C until 48 h AED compared to PGs from control larvae (*** = p < 0.0001). The graph represents mean ± s.d.; 10 PGs were analyzed for each experiment. doi:10.1371/journal.pone.0095104.g003
Figure 4. TnBVank1 PG cells have an altered cytoskeleton. Phalloidin staining in control (A,B) and in TnBVank1 (C,D) PG cells. F-actin shows an altered distribution, characterized by thick masses of filaments in TnBVank1 PG cells. (E-H) An α-tubulin-GFP fusion protein was expressed in yw;phm-Gal4 and TnBVank1 PGs to investigate the microtubule network. Compared to the control (E,F), in TnBVank1 the microtubule cytoskeleton is strongly affected and forms bundles (G,H). (I-L) Immunostaining with anti-Dynein heavy chain shows that, compared to the control (I,J), in TnBVank1 PG cells the cortical localization of this protein is reduced and characterized by an evident dotted distribution (K,L). For each immunostaining we analyzed 60 PGs of five days AED larvae. PGs in panels A,C,E,G,I,K are at the same magnification and the reference scale bar, 25 µm, is shown in A. Boxed regions are magnified in B,D,F,H,J,L and the reference scale bar, 5 µm, is indicated in B. doi:10.1371/journal.pone.0095104.g004
Figure 5. TnBVank1 PG cells show lipid accumulation. (A) In the control yw;phm-Gal4 there are few lipid droplets stained with Oil Red O, while in TnBVank1 cells several lipid droplets are detected (B,C). (E,F) In TnBVank1 there is also a sterol accumulation, shown by filipin staining, which is absent in the control PG (D). 60 PGs were stained for each experiment. PGs in panels A,B,D,E are at the same magnification and the reference scale bar, 50 µm, is shown in A. Boxed regions are magnified in C,F and the reference scale bar, 5 µm, is indicated in C. doi:10.1371/journal.pone.0095104.g005
Figure 6. TnBVANK1 disrupts the endocytic pathway in PG cells. 60 PGs were stained for Rab5 (A,B), Rab11 (C,D) and Rab7 (E,F) in yw;phm-Gal4 and TnBVank1 larvae at five days AED. The distribution of endosomes marked with Rab5 (A,B) and Rab11 (C,D) is not affected by TnBVank1 expression, while a reduction in number was observed for late endosomes marked with Rab7 (E,F). All panels are at the same magnification and the reference scale bar, 5 µm, is shown in A. doi:10.1371/journal.pone.0095104.g006
Figure 7. TnBVANK1 protein colocalizes with Hrs- and Alix-positive vesicles. Confocal images of PG of yw;phm-Gal4 (A-C) and TnBVank1 (D-I) larvae stained for Hrs (cyan), Alix (red) and TnBVANK1 (green). In the control cells Alix (B) and Hrs (A) are widely distributed in the cytoplasm and their signals partially overlap (C). In TnBVank1 cells (D,F) a number of vesicles marked by Hrs have a different shape compared to those present in controls (A,C). These modified vesicles show a strong colocalization with the TnBVANK1 signal (E,F), demonstrating that TnBVANK1 protein is associated with Hrs-marked vesicles. In TnBVank1 cells (G) most of the Alix-marked vesicles have a different shape compared to those present in controls (B). Immunostaining with anti-Alix and anti-TnBVANK1 shows a strong colocalization of the TnBVANK1 and Alix signals in the stroke-shaped vesicles (H). In these modified vesicles the Alix and Hrs signals are both detected (I). PGs in all panels are at the same magnification and the reference scale bar of 5 µm is indicated in A. doi:10.1371/journal.pone.0095104.g007
[3,4,6]. Relatively more studies have addressed the host immunosuppression mechanisms, focusing on virulence factors in the ank gene family largely shared among different taxa [43]. While an immunosuppressive function has been demonstrated for the PDV ank gene family, if and how these viral genes impact endocrine pathways or other targets has not yet been addressed [11]. Here we report experimental evidence demonstrating the role of a TnBV ank gene in the disruption of E biosynthesis and the induction of developmental arrest.
Using Laddering Interviews and Hierarchical Value Mapping to Gain Insights Into Improving Patient Experience in the Hospital: A Systematic Literature Review
Hospitals are continuously facing pressure to close the gap between patients' expectations and the quality of services provided. Now with Medicare reimbursements tied to Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) scores, institutions are attempting interventions to increase satisfaction scores. However, a standard framework to understand patient values and perceptions and subsequently translate them into reliable measures of patient satisfaction does not exist, particularly in inpatient settings. This article highlights the opportunity for qualitative customer value research to augment the information providers gain from HCAHPS scores and provide additional indicators that can be used in improving the patient experience. In this article, patient laddering interviews and hierarchical value mapping are reviewed as methodologies to understand patients' core satisfaction values during their hospital stay. A systematic literature search was performed to identify articles addressing laddering interviews and hierarchical value mapping as applied to health care. Inclusion criteria involved studies relating to health care and using laddering interviews; exclusion criteria included non-health-care studies. Only 3 studies were found eligible for this review. Our systematic review of the literature revealed only a few studies that can guide the use of laddering interviews to improve patient experience. These interviews can help compose a personalized bedside survey, which may be more meaningful than the currently widely used HCAHPS survey.
Patient Experience Versus Patient Satisfaction
The term patient experience has been used in lieu of patient satisfaction but without being well understood in health care. According to The Beryl Institute, a global leader in improving the health care patient experience, it is defined as "the sum of all interactions, shaped by an organization's culture, that influence patient perceptions across the continuum of care (1)." According to the Agency for Healthcare Research and Quality (AHRQ), patient experience is assessed by eliciting the patient's perspective on how something should happen in health care, whereas patient satisfaction is a summary of the patient's expectations about a health care encounter and whether they were met. Two patients receiving similar care may have different satisfaction levels due to different subjective expectations (2). To understand how to effectively measure patient experience, one should be familiar with the pros and cons of the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey, the predominant survey used in the United States.
Review of HCAHPS Survey
In health care, HCAHPS is the dominant survey used to capture patient experience during patients' hospital stay. The HCAHPS survey is a national, standardized, and the most widely used survey among health-care systems. The Centers for Medicare and Medicaid Services (CMS) and AHRQ piloted the survey in 2002, and it was launched in 2006. In May 2005, the National Quality Forum endorsed HCAHPS. Then, in December 2005, the Federal Office of Management and Budget gave its final approval for the survey to be implemented nationally (3).
The HCAHPS survey is administered to a random sample of hospital inpatients 48 hours to 6 weeks after discharge. A minimum of 300 eligible surveys must be submitted by the hospital for each reporting period (4). It is also offered in multiple languages, by phone or mail. There are a total of 21 core questions covering 7 composites (communication with doctors, communication with nurses, responsiveness of hospital staff, pain management, communication about medications, cleanliness of hospital, and quietness of the hospital at night). Other miscellaneous composites include discharge information (no to yes), willingness to recommend (definitely no to definitely yes), and overall hospital rating (0 to 10 rating scale) (4). Unfortunately, there are several limitations in the HCAHPS survey design and its use as a driver for quality improvement projects in inpatient settings. The HCAHPS survey is routinely sent out to patients after discharge and faces challenges of low response rates (5). Survey response rate can be an important determinant of the validity of survey results, with greater than 70% often desirable (6); however, response rates have historically been low at 32.8%, and strategies to increase response rates have been suggested (7). McFarland et al analyzed HCAHPS survey data from 934,800 patient respondents who were seen at 3907 hospitals across the country, representing more than 95% of the nation's hospitals. They studied demographic and structural factors (hospital beds) and concluded that hospital size and primary language (non-English speaking) most strongly predicted unfavorable HCAHPS scores (8). Siddiqui et al studied specialty hospitals and general medical hospitals (GMHs) and found that specialty hospitals had significantly higher overall HCAHPS patient satisfaction scores than GMHs, although more than half of this difference disappears when adjusted for survey response rate. They suggested that comparisons among health-care organizations should take into account survey response rates (7). Another drawback is that HCAHPS surveys do not provide real-time feedback to house staff, physicians, nurses, or administrators on how they can improve patient care prior to discharge. The standardized survey does not ask patients about other important factors affecting their hospital experience, for example access to information and overall comfort of the environment. Yet another limitation of the HCAHPS survey is that it cannot retrospectively identify which provider was involved in low scores for a specific survey section, making targeted quality improvement projects next to impossible to formulate. A further limitation of the HCAHPS survey is its delayed administration. The survey is sent out 48 hours to 6 weeks after patients leave the hospital, with results reaching the hospital well after patient care has ended. This renders any quality improvement efforts futile while the patient is still admitted to the hospital. Measuring responses in real time can not only identify pitfalls but also drive interventions during the inpatient stay. For example, a hospitalist nurse can round on patients in the afternoon rather than having patients mail in surveys with comments after they have been discharged. Other limitations stem from HCAHPS' basis as a customer satisfaction survey. Satisfaction surveys by nature focus on gathering a quantitative evaluation by the customer of past actions.
Even when using advanced statistical analysis techniques, satisfaction surveys do not, and are not intended to, provide a deep understanding of why certain assessments take place or what alternatives might change those assessments in the future (9). Thus, a more robust marketing research methodology beyond HCAHPS is required which takes into consideration our regional patient population segment.
Consumer Values and Means-End Theory
From a business perspective, customer value is the customer's perception of what they want to have happen in a specific use situation, with the help of a service offering, in order to accomplish a desired purpose or goal (9). Understanding customer value is often easy, but measuring it can be challenging. Means-end chain (MEC) theory facilitates the understanding of consumers' expectations, choices, and values, and of how consumers link the attributes of products and services with particular consequences satisfying their personal values (10). Reynolds and Olson (2001) proposed a MEC approach focusing on consumers' knowledge in 3 key areas: product attributes, consequences, and values. A common application of the MEC approach has been in eliciting consumer motivations and the reasons for their choices (10). In marketing, a hierarchical representation of customers' views of the service can frequently be developed. It is represented on 3 levels by attributes, consequences, and desired end-states (11)(12)(13). At the attribute level, tangible service characteristics can include "I get to see doctor on time" and "staff informed me of delays." Consequences are functional and psychological attractions like "my doctor understands me" and "makes me feel better." At the highest level, desired end-states are characterized by consumers' deep-seated values, like "good health" and "trust in doctor." As one initiates a conversation about satisfaction with a service, the interviewee will initially describe it in terms of attributes. As the interviewer probes into why he or she likes that attribute, the conversation deepens and consequences and end-states often surface. The hierarchy suggests a top-down approach to understanding patient needs. This approach is successful as it focuses on future states and is more stable (9). In marketing analysis, there are predominantly 2 forms of customer value interviews: the laddering technique and the grand tour. We explain the laddering technique below.
Laddering Interviews and Hierarchical Value Maps
Laddering is a moderately structured interviewing method that is designed specifically to understand the means-end associations that customers have toward a service or product (14). It is like a peeling process, in which you peel away the outer layers of an onion until you get to the core. This process can be tedious and time-consuming, but the benefits it provides far outweigh the costs. Gengler et al have described it as "reasons behind the reasons" (13). Beginning one attribute at a time, the interviewer asks a series of probing questions to determine the relationship between the attribute and higher-order consequences and desired end-states (the A-C-E sequence). Probing is an essential aspect of laddering interviews and helps elicit higher value states. Interviewers are advised to ask "how does that make you feel?" to elicit these higher consequences and end-states (10). After collecting all the value dimensions from the different laddering interviews, a Hierarchical Value (HV) Map is created. Reynolds and Gutman pointed out that when the sample size is between 30 and 50, correlations may be discovered through the HV map (11).
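To make the aggregation step concrete, the sketch below (in Python) shows one plausible way to tally A-C-E ladders into the link counts behind an HV map; the ladder data and the cutoff value are hypothetical illustrations, and dedicated HVM software applies additional rules for deciding which links to draw.

    from collections import Counter

    # Hypothetical ladders from patient interviews, each recorded as an
    # attribute -> consequence -> end-state (A-C-E) sequence.
    ladders = [
        ("staff informed me of delays", "my doctor understands me", "trust in doctor"),
        ("I get to see doctor on time", "makes me feel better", "good health"),
        ("staff informed me of delays", "my doctor understands me", "good health"),
    ]

    # Count how often each direct link (A->C or C->E) appears across ladders.
    links = Counter()
    for attribute, consequence, end_state in ladders:
        links[(attribute, consequence)] += 1
        links[(consequence, end_state)] += 1

    # Keep only links mentioned at least `cutoff` times; these surviving
    # links form the edges drawn in the hierarchical value map.
    cutoff = 2
    for (src, dst), n in links.items():
        if n >= cutoff:
            print(f"{src} -> {dst} (mentioned {n}x)")

With the sample data above, only the link from "staff informed me of delays" to "my doctor understands me" survives the cutoff, mirroring how HV maps surface the associations mentioned most often across interviews.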
Returning to the HCAHPS survey, we see that it is predominantly an attribute-level survey and does not seem to address higher value hierarchy states, like consequences or end-states, for patients. Health care is a unique and highly personal service industry. It distinguishes itself from others by the very fact that it is essential but not necessarily desired. Consumers choose health-care services when they are ill and often emotionally vulnerable. Another distinctive characteristic of health care is that the patient is a co-creator of the services, and an accurate description of the symptoms of illness is essential for the delivery of health services (15). Laddering interviews can be used to peel back that layer and find hidden patient values, leading to a delighted customer. An example is seen in the study by de Ruyter et al, which found empathy to be the most important attribute in health care (16). Our systematic review was meant to further study whether these techniques have been used in health care to assess patient experience.
Search Strategy and Selection of Studies
Literature search strategy. We focused on the research question of the use of hierarchical value maps (HVMs) or laddering interviews for understanding patient values. We used PubMed, Web of Science, and EBSCOhost to conduct our systematic literature search. Of note, we were not able to find any meta-analysis or systematic review based on our research question.
Study selection. The following inclusion and exclusion criteria were used to select our studies (Table 1). Non-English and unpublished studies were not excluded, to broaden the literature on this scarcely studied topic. The data collection results are summarized in Table 2.
Results
After performing a literature search (PubMed) of "hierarchical value mapping" and "patient," 24 studies resulted. The PRISMA flow diagram is shown in Figure 1. In contrast, a search for "laddering interviews" alone, excluding "patient," returned only 11 articles. A literature search of Web of Science using "laddering interviews" revealed 160 articles, whereas a literature search of EBSCO Business Source Premier revealed 76 studies. A combined search of all 3 terms on PubMed revealed zero studies. After applying the exclusion criteria to the PubMed articles, only one study used laddering interviews for patient care (17), and even that did not address direct patient care but used laddering interviews to characterize the ideal medical doctor. Applying the exclusion criteria to the EBSCO Business Source Premier search revealed one study employing laddering interviews to uncover desired qualities and behaviors of general practitioners (18). The exclusion criteria applied to the Web of Science search revealed a study by Lee and Lin, which studied HVM modelling in the "healthcare service industry" in Taiwan (19). This study is the most comprehensive we found shedding light on consumer behaviors in the health-care industry. They interviewed consumers regarding the motivations behind health-care choices and then developed an HVM. The 3 research studies are summarized in Table 3.
Discussion
Our study revealed only 3 studies exploring laddering interviews and HVMs in the context of patient experience. Unfortunately, many of our selected studies had a small sample size of respondents. Some studies suggest etiquette-based communication and sitting at the bedside may improve patient experience (20,21). Recently, providing real-time deidentified patient satisfaction results to residents, with an education and incentive system, has been suggested to help as well (22,23). Indovina et al employed real-time daily patient feedback to providers coupled with provider coaching. They used 3 provider-specific questions taken from a survey that was available on the US Department of Health and Human Services website and was not obtained via laddering interview techniques. They developed a "daily survey" and found that hospitalists who received real-time feedback had a trend toward a higher proportion of top-box HCAHPS scores and overall hospital rating, but this was not statistically significant (23). The strategies in this study to improve patient experience were only hypothesized, and a deep dive into patient experience and core values was never undertaken.
Value Proposition
With the push from volume- to value-based reimbursement models, hospitals are now motivated more than ever to achieve improvements in specific HCAHPS domains (24). Based partly on these scores, hospitals can either forgo or gain up to 1.5% of their Medicare payments for fiscal year (FY) 2015, increasing to 2% of reimbursement dollars at risk in FY 2017 (25). Estimates predict that patient satisfaction will determine 30% of the incentive payments, while improved clinical outcomes will decide 70%. It is also known that overall higher patient satisfaction scores are associated with lower 30-day risk-standardized hospital readmission rates after adjusting for quality (26). Hence, patient satisfaction scores are linked to both direct penalties and indirect readmission penalties, which have steadily increased from 1% in FY 2013 to 2% in FY 2014 and 3% from FY 2015 onward, with the CMS estimate of total penalties for FY 2017 being US$528 million (27). Since the 1990s, hospitals have recognized that customer service and provider-patient interactions are central to the pursuit of successful outcomes, and they have emphasized the measurement and reporting of patient satisfaction measures (28). Improving patient satisfaction scores has an even larger impact at the university level by facilitating more funding for further research from institutions like the Robert Wood Johnson Foundation, The Beryl Institute (Patient Experience Grant Program), and AHRQ, to mention a few.
Conclusion
Based on our systematic review of the literature, we suggest further exploring laddering interviews as a tool to understand the patient's core values that drive an optimal patient experience. A personalized bedside survey derived from the laddering technique has the potential to target specific quality improvement projects, which may encompass physicians, advanced practice professionals, nursing, nursing assistants, the dietary department, and the department of patient experience. The impetus for this study comes from the limitations of using only HCAHPS scores to make patient experience assessments, including poor response rates of HCAHPS surveys and frustration in implementing quality improvement projects in inpatient settings. We plan to conduct laddering interviews and construct HV maps to understand our patient population better in an upcoming study.
Limitations
Due to scant research in this field, as well as the involvement of overlapping marketing and advertising concepts, there may be several business journals that are not represented completely in our search protocol. The inability to link patient feedback to individual providers is a limitation of most health-care patient experience surveys, and it also affects laddering interviews.
Author Biographies
Pankaj Kumar is medical director of Hospitalists, High Point Medical Center, and Assistant Professor of Internal Medicine at Wake Forest Baptist Health. A physician leader and an experienced hospitalist, his focus is on patient safety, quality improvement, and teaching value-based reimbursement strategies to faculty, medical students, and graduate medical education. Pankaj pursued specialization in healthcare management with a Master of Business Administration (MBA) from the Haslam College of Business, University of Tennessee, and completed the Advanced Training Program at Intermountain Healthcare, Salt Lake City, Utah. Well versed in concepts of professional development, conflict management, organizational behavior, and marketing, he is committed to improving the patient experience at High Point Medical Center through studying consumer and organizational research methods, specifically probing, laddering interviews, and hierarchical value mapping.
Michele Follen, MD, PhD, MBA, is the Director of Research and Director of Cancer Prevention and Cancer Services at Kings County Hospital part of New York City Health and Hospitals in Brooklyn, New York. Her research interests are in optical technologies and research design. She chaired the MEDI study section for the NIH for two years.
Chi-Cheng Huang is the executive medical director of General Medicine and Hospital Medicine at Wake Forest Baptist Health System, and the Chief of Hospital Medicine and an Associate Professor of Internal Medicine at the Wake Forest School of Medicine. He earned an undergraduate degree in biology from Texas A&M University and graduated cum laude in 1998 from Harvard Medical School. A physician leader and manager with experience in operations, strategy, and quality improvement, Dr. Huang is a highly motivated, proactive, team-oriented physician who aims to provide high-quality, cost-effective, efficient, patient-centric care through collaboration. He has a proven talent for analyzing root causes of problems, restructuring systems, and successfully implementing solutions within complex health systems.
Amy Cathey is a senior lecturer and executive director of Graduate and Executive Education in the Haslam College of Business at University of Tennessee, Knoxville. She has a strong commitment to enhancing patient satisfaction and the patient experience through use of customer value techniques. Amy teaches marketing in the Physician Executive MBA and Executive MBA for Healthcare Leadership programs and has served as an advisor for over 70 executive MBA organizational action projects.
Associations between stuttering, comorbid conditions and executive function in children: a population-based study
The aim of this study was to investigate the relationship between executive function (EF), stuttering, and comorbidity by examining children who stutter (CWS) and children who do not stutter (CWNS) with and without comorbid conditions. Data from the National Health Interview Survey were used to examine behavioral manifestations of EF, such as inattention and self-regulation, in CWS and CWNS. The sample included 2258 CWS (girls = 638, boys = 1620) and 117,725 CWNS (girls = 57,512; boys = 60,213). EF and the presence of stuttering and comorbid conditions were based on parent report. Descriptive statistics were used to describe the distribution of stuttering and comorbidity across group and sex. Regression analyses were used to determine the effects of stuttering and comorbidity on EF, and the relationship between EF and socioemotional competence. Results point to weaker EF in CWS compared to CWNS. Having comorbid conditions was also associated with weaker EF. CWS with comorbidity showed the weakest EF compared to CWNS with and without comorbidity and CWS without comorbidity. Children with stronger EF showed higher socioemotional competence. A majority (60.32%) of CWS had at least one other comorbid condition in addition to stuttering. Boys who stutter were more likely to have comorbid conditions compared to girls who stutter. Present findings suggest that comorbidity is a common feature in CWS. Stuttering and comorbid conditions negatively impact EF.
Background
Disruptions in the fluent flow of speech are a hallmark of stuttering [1]. However, the consequences of the disorder extend beyond speech. There is a growing body of evidence pointing to deficits in cognitive and metalinguistic skills in children who stutter [2][3][4][5]. CWS have been reported to show weaker executive function (EF; namely, phonological working memory [WM], attentional skills and inhibitory control) relative to children who do not stutter [CWNS; for a review see [6][7][8][9][10][11]], with implications for fluency [12,13]. EF is the umbrella term used to describe the abilities needed to manage and allocate cognitive resources during cognitively challenging activities, such as switching between rules or tasks, controlling and focusing attention, ignoring distractions, and inhibiting impulses [11,14]. EF is fundamental for language, self-control, emotional regulation, and goal-oriented behaviors [15][16][17].
EF in typical development
EF follows a predictable developmental timeline [18], emerging in infancy as the ability to direct attention and progressing into the complex abilities required for goal-oriented behaviors in adulthood [11,[19][20][21]]. EF supports language development (e.g., attention facilitates language learning), and (phonological) WM supports novel vocabulary acquisition by allowing children to attend to, analyze and hold linguistic representations and rules over time [for a review see [22][23][24][25][26][27]]. In preschool- and school-age children, stronger WM, attention and inhibitory control are correlated with better expressive and receptive language skills [28][29][30][31]. This association may extend beyond childhood, as both children and adults with stronger EF are more successful in learning a new language [32]. The relationship between EF and language is likely bidirectional. Language may facilitate EF performance by helping children to construct representations by labeling conditions, allowing them to reflect on and use the rule structures that underlie EF tasks [33,34]. In typically developing children, steeper vocabulary growth at age 3 years predicts EF abilities at age 5 years [25]. Higher inhibitory control is associated with greater task perseverance among children, and higher EF is positively associated with vocabulary [35]. Further evidence for the relationship between language and EF comes from children's self-talk during EF tasks. Four- and 10-year-olds who use self-talk during the Tower of London task (a commonly used measure of EF) showed faster performance and required a smaller number of moves to completion [36][37][38].
EF is thought to be foundational to academic performance and success [for a review see [55][56][57]]. Children must sustain attention, attend to important features of lessons, avoid distractions, and hold information in memory in the classroom [58]. Perhaps not surprisingly, weaker EF is associated with lower academic progress and lower teacher scores for working hard at school and learning skills [18, 35, 45-47, 49, 59]. Reading and writing skills are also subserved by EF, requiring phonological awareness and the ability to hold, manipulate, and integrate visual, auditory and linguistic information in WM [11,16]. Children with lower self-regulation and attentional problems show poorer reading and writing abilities [52,53,60,61].
EF components, while core to the development of self-regulation, socioemotional competence, and academic achievement, are also crucial for fluency [62,63]. Typically developing children and adults with higher WM capacity produce more utterances and lower rates of disfluencies (e.g., part-word repetitions, revisions) during spontaneous speech and reading compared to their peers with lower WM capacity [63][64][65]. Conditions of divided attention, where participants perform concurrent tasks, result in a higher frequency of repetitions and interjections compared to non-divided attention (e.g., speech only) tasks [66]. Similarly, adults and children with lower inhibitory control show higher rates of disfluencies (e.g., revisions) during the production of sentences [67].
In general, measuring EF in young children has proved difficult [68,69]. The majority of assessments are adaptations of tests for adults; as such, children, particularly those in preschool or younger, may lack the linguistic and motoric proficiency required for these tasks, resulting in floor effects [for a review see 70,71]. Further, the ecological validity of these assessments, i.e., whether they are able to capture executive functioning in real-world situations, has been challenged [72][73][74]. The use of validated and normed parent surveys and self-reports, such as the Behavior Rating Inventory of Executive Function (BRIEF), which measures the behavioral expression of EF, provides a solution to some of these challenges [75]. Children's behavior at home or school provides settings for observing EF capacity, and there is accumulating evidence that parent and teacher ratings of everyday, real-world behaviors in these environments provide ecologically valid assessments in children [70,71]. EF manifests in everyday behaviors such as getting along with others (e.g., inhibitory control/socioemotional regulation), completing tasks (attention/self-regulation), and academic achievement (WM/attention) in both typically developing and clinical pediatric populations [71,76,77]. Deficits in EF are correlated with behaviors such as learning difficulty, inattentive behavior, poor task completion, and slower academic progress [43][44][45][46][47]54]. Accordingly, questions on the BRIEF such as "Has trouble finishing tasks (chores, homework, etc.)", "Has trouble concentrating on tasks, schoolwork, etc.", "Gets out of control more than friends", and "Has trouble getting used to new situations (classes, groups, friends, etc.)", rated on a Likert scale ("N" if the behavior is never a problem, "S" if the behavior is sometimes a problem, and "O" if the behavior is often a problem), offer multiple perspectives on a child's EF. Other parent surveys and self-reports, such as the Child Behavior Checklist [CBCL; 78] and the Strengths and Difficulties Questionnaire [SDQ; 79], also offer insights into behaviors regulated by EF, including socioemotional competence. The CBCL includes parent and teacher ratings (0 = Not true, 1 = Somewhat or sometimes true, and 2 = Very true or often true) on questions assessing challenges in socioemotional development such as "Worrying", "Unhappy, sad, or depressed", "Doesn't get along with other children", and "Doesn't know how to have fun, acts like a little adult".
EF and stuttering
Both parent reports and cognitive assessments have been used to evaluate EF in CWS, and they suggest EF components are depressed in this population [6; for a review see 80]. WM underpins the ability to store and manipulate relevant information during complex tasks and is proposed to be critical for fluency [64,[81][82][83]]. Children and adults who stutter show lower performance (more errors, slower reaction times) on WM tasks (e.g., nonword repetition [NWR] and digit span tasks) compared to CWNS [e.g., [84][85][86][87][88][89][90][91][92]]. However, WM deficits may be less evident in CWS during less complex tasks (e.g., 2- vs. 5-syllable NWR tasks), pointing to a compromised system unable to accommodate increased demands [7,89,[93][94][95][96]]. Research suggests a correlation between WM capacity, stuttering severity, and recovery [8,97]. Close to stuttering onset, CWS who eventually recover show stronger WM compared to CWS who do not recover [8]. Additionally, CWS with lower WM capacity (indexed by higher error rates on NWR) show more severe stuttering compared to CWS with higher WM [97].
Executive attention oversees available resources for cognitive processes, including speech production [98,99]. Both direct and indirect measurements suggest greater difficulty in managing attention for CWS compared to CWNS [for a review see 80]. Parent and teacher reports point to lower attentional flexibility and sustained attention in CWS [100][101][102]. These reports are consistent with findings of slower response times compared to CWNS, and a negative correlation between accuracy and speed in CWS, using direct measures of attention (e.g., Dimensional Change Card Sort, Posner Test of Covert Attention Shift) which require target selection and shifting attention toward different cues [9,103,104]. Weaker attention control is also correlated with a higher frequency of stuttering in CWS [105,106]. Similarly, in adults who stutter, divided attention, i.e., managing concurrent tasks (e.g., speech and finger tapping), results in higher rates of stuttering [107; however, see 108]. Attentional training (using flanker tasks) has been reported to reduce stuttering severity in CWS [109]. Notably, the link between attention regulation and fluency may not be specific to stuttering. In the Felsenfeld, van Beijsterveldt, and Boomsma [102] study, both CWS and CWNS with higher rates of typical disfluencies were more likely to have attentional issues (based on parent report) compared to CWNS with lower rates of typical disfluencies. Attentional control may also have implications for recovery. Parents report shorter attention spans in both CWS who recovered and CWNS compared to CWS with chronic stuttering [110], which could signal faster processing speeds or lower levels of perseveration in those who recover.
Inhibitory control underpins self-regulation and the ability to suppress interfering stimuli [62,111,112]. There has been growing interest in the development of inhibitory control in CWS, but findings have been contradictory [for a review see 6]. Some studies using direct measures of inhibition (e.g., Go/NoGo tasks) report lower accuracy and slower reaction time in preschool- and school-age CWS compared to CWNS [9,10,113,114]. However, others have failed to find differences (e.g., in the number of correct inhibitions) between CWS and CWNS using similar tasks [115]. Findings based on parent reports have been similarly varied. While some report lower inhibitory control and self-regulation in CWS relative to CWNS [116,117], others have found similar [85,[118][119][120][121]] or stronger inhibitory control [122,123] in CWS relative to CWNS. Markedly, weaker inhibitory control in CWS is associated with more severe stuttering and chronicity [105,[124][125][126]]. It is plausible that CWS with stronger inhibitory control may have a greater ability to suppress overt expressions of incorrect speech programs, resulting in lower rates of stuttering or a higher probability of recovery [127].
EF in other developmental disorders and children with comorbid conditions
Deficits in EF are frequently reported in speech-language and neurodevelopmental disorders such as attention deficit hyperactivity disorder (ADHD) and autism spectrum disorder [ASD; for a review see 24,[128][129][130][131]]. In preschool- and school-age children, specific language impairment (SLI) is associated with weaker EF [WM, attention and inhibitory control; 130,132]. Children with ADHD show lower performance (reflected by lower accuracy and slower response time) on tasks requiring WM, attention and inhibitory control compared to typically developing children [128,133]. The degree of EF deficits may vary across disorders. For example, parent ratings of children with reading disability suggest higher EF than for children with ADHD or ASD [77].
Comorbidity is commonly reported in neurodevelopmental disorders, with potential implications for EF development. Children with comorbid conditions show more profound EF deficits compared to children without comorbidity [129,134,135]. For example, children with multiple diagnoses of ADHD and anxiety or conduct disorders show slower completion time, higher error rates and more perseveration on EF tasks (e.g., Wisconsin Card Sorting, Finger Windows) which necessitate WM, attention and inhibitory control, relative to children with ADHD without comorbidity [136,137]. These findings are consistent with parent reports [e.g., Behavior Rating Inventory of Executive Function (BRIEF); 75] of lower EF in children with comorbidity compared to children without comorbidity [134]. It is noteworthy that chronic health conditions are also associated with impaired EF. For example, children with medical conditions such as diabetes and sickle cell anemia show significant impairments in attention and EF tasks compared to children without those conditions [138].
The prevalence of comorbid conditions, such as learning disabilities and developmental delay, is higher for CWS relative to CWNS [139,140]. In clinical cohorts, concomitant language, speech, and behavioral disorders (e.g., expressive language, receptive language, articulation, phonology, and ADHD) are commonly reported with stuttering [141,142]. Prior studies also suggest higher rates of socioemotional difficulties, psychological distress and anxiety in CWS compared to CWNS [126,[143][144][145][146][147][148]]. In a study of 2,628 CWS, a majority (62.8%) had comorbid disorders [149]. The most commonly reported comorbidities in CWS were learning (15.2%), reading (8.2%), attention deficit disorder (ADD; 5.9%) and behavioral disorders [2.4%; 149]. Medical diseases, such as diabetes, asthma, and sickle cell anemia, have also been found at higher rates in CWS compared to CWNS [149][150][151]. Although CWS commonly show symptoms of other disorders, the intervening role of comorbidity in EF has not received as much attention. It is plausible that, similar to children with other developmental disorders, CWS with comorbidity would show weaker EF compared to those without comorbidity.
Present study
Findings related to EF in CWS have been equivocal [see 6,80]. Variability across studies may be a function of the tasks employed. CWS may perform within norms or equivalently to CWNS on less complex tasks (e.g., 2-string forward digit span) but show lower performance on more complex EF tasks (e.g., Dimensional Change Card Sort, backward digit span). In other words, deficits in EF (as a function of impairment or developmental timeline) may not be evident unless the system is sufficiently taxed; for example, with tasks that involve EF domains which have not fully developed (attentional control in 3-year-olds) or that necessitate manipulation or transformation of information (e.g., Backward Digit Span). Findings from a study examining performance accuracy across multiple EF tasks in 602 typically developing preschool children between 3 and 6 years may shed some light on the equivocal reports in CWS [152]. Carlson [152] found that performance was dependent on task complexity, whereby outcomes (i.e., behavioral accuracy) were similar for tasks with equivalent levels of difficulty regardless of the task design (e.g., requiring a motor or verbal response). For example, 4-year-olds show comparable accuracy on two tasks with equivalent complexity levels which tap into different EF domains: Whisper (inhibition: children must refrain from shouting out names of cartoon characters and instead whisper them), and Motor Sequencing [WM: imitate a sequence of pressing the keyboard from left to right with the index finger as fast as possible before the experimenter says "Stop"; 152]. However, these same 4-year-olds showed poorer performance on the more complex Day/Night task, where children must suppress the prepotent response, recall the correct answer, and generate a new response which conflicts with the dominant one (say "night" for the sun picture, and "day" for the moon picture). Tasks which tap into multiple EF domains (e.g., Dimensional Change Card Sort and Backward Digit Span, which require both WM and inhibitory control) were found to be more difficult [152]. Collectively, these findings suggest that comparing across studies utilizing disparate tasks will likely result in mixed findings. Studies which employ less demanding tasks may lack the sensitivity to detect EF differences between CWS and CWNS.
Notably, a study by Ntourou, Anderson and Wagovich [153] reported better sensitivity for detecting differences in EF between CWS and CWNS using an indirect measure, i.e., the BRIEF parent report [75]. CWS received lower parent ratings for WM, inhibitory control, and attentional control compared to CWNS [153]. Further, the likelihood of CWS meeting the clinically significant criteria for EF difficulties was 2.5 to 7 times higher than for CWNS. CWS also received particularly low ratings on questions related to behaviors involving a combination of WM, inhibitory control/self-regulation and attention: "Has trouble finishing tasks such as games, puzzles, pretend play activities", "Reacts more strongly to situations than other children", and "Resists change of routine, food, places, etc.". In contrast, a direct behavioral measure, Head-Toes-Knees-Shoulders (HTKS), which also involves WM, inhibitory control and attention, failed to detect differences between CWS and CWNS. Findings from this study point to the validity and sensitivity of behavioral manifestations for detecting EF deficits in CWS.
The aim of this study was to investigate the relationship between EF, stuttering, and comorbidity by examining CWS and CWNS with and without comorbid conditions. To do this, we examined behaviors (such as inattention, self-regulation including emotional and social regulation, and task completion) underpinned by or closely associated with EF using population-based data. Based on previous findings in CWS and CWNS, we hypothesize that: (1) CWS will show weaker EF compared to CWNS, (2) children with comorbid conditions will show weaker EF compared to children without comorbid conditions, and (3) children with stronger EF will also show higher socioemotional competence compared to children with weaker EF.
Sample
Data were accessed from the National Health Interview Survey (NHIS) for the years 2006-2018. The NHIS is a nationally administered cross-sectional survey conducted by the Centers for Disease Control and Prevention (CDC) to monitor the health of the U.S., including trends in illness and disabilities [154]. The survey has been administered annually since 1957, providing a nationally representative sample of households in all 50 states and the District of Columbia. For each household, data were collected from a randomly selected sample adult and child. Information about the child was collected from an adult, typically the parent or guardian. Data were collected face-to-face by trained interviewers who read the survey questions to interviewees. Some segments of the population were excluded, including U.S. citizens not residing in the country, active duty military personnel, incarcerated inmates, and long-term care facility patients. A total of 119,983 children (girls = 58,150; boys = 61,833) were sampled between 2006 and 2018.
Identification of CWS and CWNS
CWS were identified by a positive parent response, "Yes", to the question "During the past 12 months, has [SC] had any of the following conditions: Stuttering or stammering". Other possible responses were: "No", "Refused", "Not ascertained" or "Don't know". CWNS were identified by a "No" response to "Stuttering or stammering".
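As an illustration only, this classification could be implemented as below; the data frame and the column name stutter_response are hypothetical stand-ins for the actual NHIS variable, and responses other than "Yes"/"No" are left unclassified, mirroring the groupings described above.

    import pandas as pd

    # Hypothetical NHIS extract; `stutter_response` stands in for the actual
    # survey variable behind "Stuttering or stammering".
    df = pd.DataFrame({
        "child_id": [1, 2, 3, 4],
        "stutter_response": ["Yes", "No", "Don't know", "No"],
    })

    # "Yes" -> CWS, "No" -> CWNS; "Refused", "Not ascertained" and
    # "Don't know" map to NaN and fall outside both groups.
    df["group"] = df["stutter_response"].map({"Yes": "CWS", "No": "CWNS"})
    print(df)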
Socioemotional competence
To determine whether socioemotional competence was also correlated with EF, as previously reported in typically developing children, responses to the following questions were included in the analysis: (1) "Many worries/often seems worried, past 6 m", (2) "Unhappy/depressed/tearful, past 6 m", and (3) "Gets along better w/adults than children/youth, past 6 m" (see Table 2). Responses that did not provide estimates of socioemotional competence, i.e., "Refused", "Not ascertained" or "Don't know", were excluded from the analysis. Other responses were scored: 3 = "Not true", 2 = "Somewhat true", 1 = "Certainly true". A composite score with a maximum of 9 (high socioemotional competence) and a minimum of 3 (low) was possible.
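A minimal sketch of this composite scoring, assuming responses arrive as the labeled categories; the function name is ours, and excluded responses simply yield no composite for that child.

    # Map survey responses to scores; excluded responses have no score.
    SCORES = {"Not true": 3, "Somewhat true": 2, "Certainly true": 1}

    def socioemotional_composite(responses):
        # Sum the scores over the three items; return None if any item was
        # refused, not ascertained, or unknown (excluded from the analysis).
        scores = [SCORES.get(r) for r in responses]
        if any(s is None for s in scores):
            return None
        return sum(scores)  # ranges from 3 (low) to 9 (high)

    # One child's responses to the worries / unhappy / gets-along items.
    print(socioemotional_composite(["Not true", "Somewhat true", "Not true"]))  # 8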
Data analyses
To combine the NHIS data from 2006 to 2018, we adjusted the weights and strata according to the NHIS guidelines. Descriptive statistics based on the sample population, using SPSS version 25 [156], were used to describe the distribution of stuttering and comorbidity across group and sex. Subsequent regression analyses to test the three hypotheses were conducted with Mplus 8.0 [157], accounting for the complex sampling design of the NHIS so that results are representative of the US population. For hypotheses 1 and 2 the dependent variable was EF, and for hypothesis 3 the dependent variable was socioemotional competence. The predictors included stuttering status, comorbid status (0 without comorbidity, 1 with comorbidity), and sex (0 if male, 1 if female).
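Written out, the fullest EF model (Model 4 in the Results) plausibly takes the form below; the 0/1 coding for stuttering is an assumption inferred from the sign of the reported coefficients, and the simple error term abstracts away the complex-survey estimation actually performed in Mplus:

    \mathrm{EF}_i = \beta_0 + \beta_1\,\mathrm{Stutter}_i + \beta_2\,\mathrm{Comorbid}_i + \beta_3\,\mathrm{Female}_i + \varepsilon_i, \qquad \mathrm{Stutter}_i,\ \mathrm{Comorbid}_i,\ \mathrm{Female}_i \in \{0,1\}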
Prevalence of stuttering and comorbid conditions
A total of 2258 CWS (girls = 638, boys = 1620) and 117,725 CWNS (girls = 57,512; boys = 60,213) aged between 3 and 17 years were identified in the sample. The overall prevalence of stuttering was 1.88%, with a male-to-female ratio of 2.54:1 (Table 3). There was a higher prevalence of stuttering (4.19%) and a higher male-to-female ratio (2.86:1) for children with comorbid conditions relative to children without comorbid conditions (1.02%; 2.14:1; Table 3). A majority (60.32%) of CWS had at least one other comorbid condition in addition to stuttering, compared to CWNS, where less than a third (26.44%) had one or more conditions.
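As a quick check, the headline prevalence and sex ratio follow directly from the reported counts:

    \text{prevalence} = \frac{2258}{2258 + 117{,}725} = \frac{2258}{119{,}983} \approx 1.88\%, \qquad \text{male-to-female ratio} = \frac{1620}{638} \approx 2.54 : 1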
For both CWS and CWNS, ADHD, asthma and autism were the most prevalent comorbid conditions (Table 4). Across both groups, rates of comorbidity were higher for males compared to females.

Prediction of EF by stuttering and comorbidity (Hypotheses 1 and 2)

Table 5 shows the mean EF across stuttering status (CWS vs CWNS), comorbidity status (with or without) and sex. EF was lower in CWS compared to CWNS, in children with comorbid conditions relative to children without comorbidity, and in boys compared to girls. Table 6 summarizes the results of the regression analysis with EF as the dependent variable. As shown in Model 1, stuttering was a significant predictor of EF (B = −1.195, p < .001). EF was significantly lower for CWS compared to CWNS, supporting Hypothesis 1 (CWS will show weaker EF compared to CWNS).

As shown in Model 2, comorbidity was also a significant predictor of EF (B = −0.950, p < .001). EF for children with comorbidity was significantly lower than for children without comorbidity, supporting Hypothesis 2 (children with comorbidity will show weaker EF compared to children without comorbidity).
In Model 3, stuttering and comorbidity remained significant when entered concurrently. Given the potential differences in EF between boys and girls, sex was included in Model 4. Females had slightly higher EF than males (B = 0.201, p < .001). In summary, Hypotheses 1 (CWS < CWNS) and 2 (with comorbidity < without comorbidity) were confirmed (see Table 5).

Prediction of socioemotional competence by EF (Hypothesis 3)

Table 7 shows the mean socioemotional competence scores across stuttering status, comorbidity status, and sex. Socioemotional competence was lower for CWS compared to CWNS, for children with comorbidity compared to children without comorbidity, and for girls compared to boys. Table 8 summarizes the results of the regression analysis with socioemotional competence as the dependent variable. As shown in Model 1, children with higher EF had significantly higher socioemotional competence, supporting Hypothesis 3 (children with stronger EF will show higher socioemotional competence than children with weaker EF). When EF increased by 1, socioemotional competence increased by 0.453 (β = 0.512, p < .001).

To control for other relevant factors, sex, stuttering status, and comorbidity status were included individually (Models 2-4) and concurrently (Model 5) in the analyses. In Model 2, sex had a significant (B = −0.201, p < .010) but small effect on socioemotional competence; scores were slightly higher for males than females. Stuttering (B = −0.694, p < .001) and comorbidity (B = −0.476, p < .001) status had significant negative effects on socioemotional competence. The socioemotional competence score was lower for CWS compared to CWNS (Model 3) and lower for children with comorbidity compared to children without comorbidity (Model 4). In Model 5, when all predictors were entered in a single step, EF remained statistically significant. This further confirmed Hypothesis 3, i.e., children with stronger EF will show higher socioemotional competence. Stuttering status remained a significant predictor (B = −0.11, p = .042), although the effect was smaller than in Model 3. Sex was still significant (B = .041, p < .001), but with a larger effect than in Model 2. However, comorbidity status was no longer significant (B = −0.019, p = .18) after the inclusion of the other variables in the model.
Discussion
The aim of this study was to investigate the relationship between EF, stuttering, and comorbidity. To the best of our knowledge, this is the first study to examine EF in CWS with and without comorbidity on a large scale. Our findings point to a critical association between stuttering, comorbidity, and EF in both CWS and CWNS. First, weaker EF was correlated with stuttering. Second, having comorbid conditions was also associated with weaker EF. Notably, CWS with comorbidity (CWS-WC) showed the weakest EF among all groups of children. Third, higher socioemotional competence was associated with stronger EF and the absence of stuttering. Our study also confirmed expected epidemiological trends on a large scale. We present evidence for a higher prevalence of stuttering and a higher male-to-female ratio in children with comorbidity.
Prevalence of stuttering
The overall prevalence of stuttering was consistent with past reports [for a review see 1, [158][159][160]. Closer inspection of the data indicates higher rates of stuttering in children with comorbidity, particularly boys, compared to children without comorbid conditions.
Comorbidity
A majority of CWS had comorbid conditions in the present study, consistent with previous studies [e.g., [161][162][163]]. Similar to past studies, ADHD and asthma were two of the most frequently reported comorbid conditions in CWS [149,164,165]. Interestingly, ADHD has been identified as a risk factor for stuttering [166]. Several explanations have been offered for the high rates of comorbidity with other neurodevelopmental disorders in stuttering. First, stuttering and other neurodevelopmental disorders are thought to share a core deficit or similar risk factors [e.g., ADHD; 167], and as such, CWS could be at higher risk for developing other disorders and vice versa [168]. Second, stuttering may represent one outcome along a continuum of (common or overlapping) etiologies and disorders, with variability across severity, timing, and symptoms [169]; children with comorbidity may represent the more severe end of the continuum. Alternatively, stuttering may be a distinct disorder that negatively impacts development, amplifying susceptibility to other disorders [170].
The high rate of asthma in CWS in the present study is in agreement with past reports [143,149,151,171]. In fact, another atopic disease, hay fever, has been reported to correlate with an earlier onset of stuttering and chronicity [172]. The inflammatory response associated with atopic diseases is thought to affect neurocircuitry, including circuits involved in speech [172,173]. Markedly, adults with asthma show atypical gray [e.g., increased gray matter volume in the right superior temporal gyrus; 174] and white [e.g., lower white matter coherence in the inferior frontal gyrus; 175] matter in regions involved in speech production and reported to be affected in stuttering [176]. Although the mechanism of causality is unclear, the relationship between atopic diseases and stuttering suggests that research on the impact of childhood health outcomes on stuttering is warranted.
Overall, the higher rates of comorbid conditions may be a corollary of symptoms that manifest more severely in CWS, reaching observable or clinical levels. In general, screenings and treatment across multiple conditions may be necessary in a majority of CWS.
Sex differences
Present findings also point to sex as a significant variable in susceptibility to stuttering and comorbid conditions. Overall, there was a higher male-to-female ratio of stuttering in this sample, a finding in line with the sexually dimorphic nature of this disorder. This sex bias has been attributed to increased vulnerability among males, i.e., a lower "stuttering threshold" and/or fewer contributing factors required for developing stuttering, compared to females, where greater loading is required [177, p. 21]. Another proposed explanation is that differences in cognitive maturation and development between the sexes might result in more severe manifestation of symptoms in males. According to this theory, females are equally at risk for stuttering; however, symptoms manifest less severely or below clinical levels [178]. In the current study, the male-to-female ratio was higher for CWS-WC relative to CWS without comorbidity (CWS-NC). This greater sex bias for CWS-WC compared to CWS-NC suggests increased vulnerability for males who stutter. It is worth mentioning that the preponderance of affected males is not limited to stuttering. Other disorders, such as autism and ADHD, show similar trends of greater male susceptibility [179,180]. It has also been suggested, however, that sex differences are due to discrepancies in diagnosis. For example, ADHD is more likely to be diagnosed in boys [181], and it is unclear whether this is rooted in differences in ADHD presentation [i.e., boys may present in a manner such that diagnosis is more likely; 182] or actual differences in prevalence [for evidence of similar presentation between the sexes, see for example 183]. It is beyond the scope of the present paper to determine the mechanisms underlying this bias, specifically whether they are rooted in differences in prevalence or differences in diagnosis. We suggest this as a direction for future research; understanding the combination of these factors would not only inform how stuttering and comorbid disorders manifest, but also translate into optimal treatment for each sex.
Predictors of EF
Stuttering, comorbidity status, and sex were found to predict EF scores. Consistent with our hypothesis, CWS showed weaker EF compared to CWNS, although the magnitude of the difference was relatively small. Specifically, CWS received lower parent ratings for statements addressing behaviors (see "EF" section in Methods) that necessitate holistic EF, WM, attention, and inhibitory control. Taken together, findings from the current and past studies suggest that weaker EF may be a feature of stuttering [e.g., 10,85,102].
There is accumulating evidence that EF is mediated by a wide network of circuitry, with the (pre)frontal cortices and basal ganglia playing key roles [for overview see [184][185][186]]. The (pre)frontal cortex is involved in manipulating and transforming information held in WM [187, involving Brodmann areas [BA] 44-47, 188, 189]; inhibiting prepotent behavioral and neural responses, and activating representations in subcortical regions [190,191]; and top-down control of attention, i.e., biasing attention toward relevant information and sustaining attention [192][193][194][195]. EF behaviors localized in the (pre)frontal regions are modulated by activity in the basal ganglia, which select and enable executive programs [184,[196][197][198][199]]. These same regions, the (pre)frontal cortex and basal ganglia, have been found to be aberrant in stuttering [200,201; for overview see 202]. It is highly plausible that EF deficits in CWS are related to these structural and functional abnormalities.
In typically developing children, EF components experience protracted development from infancy through late childhood and into early adulthood [for a review see 69,203]. Although many EF components are present in infancy, they grow exponentially in early childhood [16,26,55,97,[204][205][206][207]]. Children show limited ability to manipulate or transform representations in WM until around age 2 years [208]. Before age 4 years, children perform below chance on inhibitory control tasks [e.g., Grass/Snow or Less is More; 152,209]. The ability to sustain and direct attention to relevant stimuli is limited until about age 5 years [210,211]. The presentation of neurodevelopmental disorders and stressors in early life has a particularly profound impact on EF development [212,213]. The developmental timing of stuttering, with onset typically around 3 years of age [214], may have devastating effects on EF during this critical period of rapid growth. The presence of stuttering may delay, reduce or plateau EF development. A longitudinal study which maps EF growth would be needed to determine specific trajectories in CWS.
Previous studies have primarily focused on EF differences between CWS without comorbidity and typically developing children [85-87,89]. The present study extended this focus to CWS and CWNS with comorbidity. Although the magnitude of difference between groups was small, our findings of weaker EF in CWS and CWNS with comorbidity compared to their peers without comorbid conditions are consistent with prior reports of weaker EF in children with multiple conditions in other neurodevelopmental disorders (see "EF in other developmental disorders and children with comorbid conditions" section).
Nonetheless, our finding of stronger EF in CWS-NC compared to CWNS-WC was unexpected. Additionally, CWS-WC showed the weakest EF amongst all groups. These findings suggest that multiple conditions have a more robust negative effect on EF than stuttering alone, and further widen disparities between CWS and CWNS. A potential confound to understanding the effects of comorbidity is the severity of the conditions and the duration and sequence of their appearance. In the present study, it is unclear whether stuttering is the core impairment in CWS-WC, and whether conditions occurred sequentially (and if so, in which order). Moreover, the duration of overlap between conditions was not reported. It is plausible that CWS (and, for that matter, CWNS) with early onset or longer duration of multiple diagnoses would show weaker EF as a consequence of prolonged, increased burden. To gain a better understanding of the possible causal influences and directionality of stuttering, comorbidity, and EF, a longitudinal study would be necessary, mapping the sequence, timing, and duration of conditions in conjunction with EF development along varying pathways to recovery or chronicity. It should be noted that the standardized regression coefficient for comorbidity was larger than that for stuttering. This suggests that the presence of comorbid conditions may have a larger impact on EF development than stuttering alone.
The present study also found a significant association between sex and EF. When stuttering and comorbidity were controlled for, EF scores for females were higher than those for males. Nonetheless, the magnitude of difference between sexes was small. In fact, the standardized regression coefficient for stuttering was larger than that for sex. This suggests that stuttering may have greater practical importance than sex in determining EF. Nonetheless, prior research has demonstrated differences between the sexes for specific EF components during childhood [215], although differences lessen with age [216] and there is no evidence of systematic advantage across the lifespan [for a review see 217]. In general, typically developing girls outperform boys on inhibitory control and attention [217]. Girls are also less impulsive during childhood and show better WM [217], although differences are not observed on tasks of spatial WM [218,219]. Additionally, within-sex variability is likely greater than between-sex variability [217]. The current study addressed holistic EF measured through parent report of behavior; prior research indicates that sex differences are sometimes linked to EF task type, such that changing task features changes results in turn [217]. As such, findings of sex differences may be related to the EF measurement in the current study (i.e., parent report of behaviors) and should be interpreted with caution.
Predictors of socioemotional competence
EF was a significant predictor of socioemotional competence, confirming our hypothesis. The standardized regression coefficient for EF was larger than that for sex or comorbidity status, pointing to the crucial contribution of EF to socioemotional development. Stronger EF was correlated with better socioemotional competence, a finding in line with the general consensus in the field. Social interactions involve EF skills, including the ability to remember social norms (WM), suppress socially inappropriate behaviors (inhibitory control), and direct and sustain attention on interactions [14,18,220]. Accordingly, children with stronger EF would be expected to have better socioemotional functioning and more prosocial behaviors. Socioemotional competence is a key predictor of social and academic success, and challenges with socioemotional functioning in early childhood have consequences for long-term social and academic success and mental health [221-223]. Early socioemotional competence in kindergarten is correlated with a lower probability of mental health issues in adolescence, and a higher probability of graduating from high school, attending college, and being employed in adulthood [222,224]. Conversely, lower socioemotional competence in preschool is linked to higher internalizing (e.g., depressed mood, anxiety, social withdrawal) and externalizing (e.g., aggression, hyperactivity) symptoms in adolescence [225]. It is worth noting that these same challenges in social, academic, and mental health outcomes are reported in those who stutter [226-228], and multifactorial models of stuttering cite emotion as a factor in the emergence and chronicity of stuttering [229,230]. However, the cross-sectional design of the current study does not allow for the examination of EF changes and related socioemotional competence over time in CWS. Moreover, other factors found to impact socioemotional status in children, such as socioeconomic status/household income, language minority status, and parents' mental health [224], were not examined in this study. A longitudinal study which encompasses the aforementioned factors would be needed to provide a complete picture of socioemotional functioning in CWS.
Sex was also found to be a significant predictor of socioemotional competence. When EF, stuttering, and comorbidity were controlled for, girls were found to have lower socioemotional competence than boys, although the magnitude of difference was small. This suggests that the effects of sex may be less clinically significant than EF or comorbidity in the development of socioemotional competence. Disparities in sex-related findings between EF and socioemotional competence may be related to differences in social-evaluative concerns, perceptions of socioemotional competence between sexes, and the measures of socioemotional competence used in the present study. Girls have been found to show heightened socio-evaluative concerns compared to boys, as well as higher levels of depression related to these concerns [231]. Further, girls who show lower socioemotional competence exhibit higher externalizing symptoms [232]. Traditionally, perceptions of socioemotional competence may be impacted by the expected norms for sexes, where externalizing behaviors such as aggression are judged favorably in boys (i.e., aggressive boys are seen as more socially competent than less aggressive boys) but not girls [for overview see 233]. These differences may intersect with the specific items used in the current study, that is, higher levels of depression, worrying, and externalizing behaviors among girls may have disproportionate impact on parent ratings on the related survey items.
Stuttering was also a significant predictor of socioemotional competence, although the magnitude of effect was smaller than for EF or sex. This finding was not surprising in light of reports of social and emotional difficulties in those who stutter [147,234]. School-aged CWS between 7 and 12 years old, particularly girls, are six times more likely to have social anxiety disorder, and seven times more likely to have generalized anxiety disorder, compared to typically developing children [235]. An overwhelming majority of CWS experience peer victimization, difficulty in establishing friendships, negative self-perceptions, shame, and lower self-confidence, with consequences for their socioemotional functioning [236-240]. Collectively, these findings point to the burden of stuttering on socioemotional functioning, particularly for girls who stutter.
Theoretical implications for EF in stuttering
EF components feature prominently in some causal theories of stuttering, such as the EXPLAN model, the Covert Repair Hypothesis, and the Vicious Circle Hypothesis [13,108,241-243]. In the EXPLAN model, the phonological loop and WM are involved in accessing phonological information in memory, and lags between linguistic planning and motor execution are thought to produce disfluencies [12,13,244]. It is conceivable that deficits in phonological WM (as reflected by lower accuracy and slower responses in WM tasks in CWS) could result in errors in activation or ordering of linguistic material, and result in linguistic planning delays [12,245]. The Covert Repair Hypothesis proposes that disfluencies are the product of covert detection and correction of prearticulatory errors, which interfere with ongoing articulation, and that higher rates of disfluencies are due to multiple or excessive attempts at repairs [13]. First, weaker attention control, as reported in CWS [e.g., 9,10,113,114], may result in excessive attention on prearticulatory errors or an inability to shift attention away from repaired segments, whereby numerous repair attempts are made, contributing to high rates of disfluencies. Second, weaker inhibitory control would also prevent suppression of excessive corrections of speech plans, yielding high rates of disfluencies [127]. Similarly, the Vicious Circle Hypothesis posits that heightened monitoring and focus on speech errors, along with a lower threshold for repairs, underpin stuttering [108]. Reports of weaker attention control and lower flexibility in CWS [e.g., 100-102] could result in abnormal allocation of resources or an inability to redirect attention away from error monitoring. These theories posit a link between stuttering frequency and EF development: CWS with weaker EF would be predicted to show a higher frequency of stuttering. These frameworks could also be extended to stuttering prognosis. For children who are experiencing developmental delays, including in EF domains, stuttering may resolve as the cognitive system matures and catches up. Delays in phonological access may decrease as WM capacity increases [EXPLAN; 12]. As inhibitory control strengthens, attempts at repairs may decline [Covert Repair Hypothesis; 13], and stronger attention control could reduce excessive monitoring of speech errors [Vicious Circle Hypothesis; 243]. WM models offer a unified framework for integrating the EF components affected in stuttering. The unity/diversity theoretical model of EF proposes that underlying components (WM, attention, inhibition) are correlated but dissociable [112]. EF holistically results from the interaction of these distinct domains, with each responsible for complementary control [112,246]. Baddeley and Hitch's [247] three-factor model and, more recently, Baddeley's [98] four-factor model conceptualize WM as a storage system for verbal/auditory information (i.e., the phonological loop) and visuo-spatial information (i.e., the visuospatial sketchpad). The phonological loop and visuospatial sketchpad are overseen by an attention controller (i.e., the central executive), and an episodic buffer integrates material from the phonological and visuospatial subsystems [98,248,249]. There is robust evidence which suggests that the WM system (including attention and inhibitory control) is crucial for EF behavior, and that deficits within this system as a whole or within each domain are correlated with behavioral issues [43,250; for a review see 251].
Thus, the development of EF may be related to specific behaviors (e.g., self-regulation) in CWS. In other words, CWS with behavioral challenges, including those related to self- and socioemotional regulation and attention, may have weaker EF. Nonetheless, how EF maps onto chronicity is unclear. It is plausible that CWS with EF deficits, reflected by behavioral challenges, that do not resolve with age may have a higher risk for chronicity.
Limitations
The strengths of the present study are the large sample sizes and the avoidance of potential selection bias of subjects (e.g., recruitment via clinical referrals). Nonetheless, some limitations, including the reliance on parent report, need to be addressed. It is highly probable that stuttering was not formally diagnosed in some children and that others were misidentified. Further, parents' memory of their child's development, including stuttering, may be inaccurate. The heterogeneity of stuttering and the variability of symptoms over time may also inflate the risk of misidentification, particularly for children with mild stuttering or those who may be experiencing periods of increased fluency at the time of the survey interview. EF was also measured using parent reports. Although previous studies have utilized behavior as a proxy for EF in CWS [e.g., 100,102,116] and CWNS [e.g., 70,71,75], and found higher sensitivity for detecting differences between CWS and CWNS [153], more research is needed to determine how parent reports map onto outcomes in standardized EF tests for CWS. Stuttering was not operationally defined in the survey; thus, CWNS with higher rates of typical disfluencies may have been misidentified as stuttering. The presence of comorbidity was also based on parent reports, and as such may be disproportionately (over- or under-) identified. Although parents were asked whether "a doctor or health professional ever told" them that their child had specific conditions, it is unclear if these reports were based on a formal diagnosis. Additionally, the direction of effects or causality between factors cannot be determined in the current study. The current study included children with a broad range of comorbid conditions. Future studies may benefit from examining the impact of specific conditions on EF development.
Conclusion
Findings from the present study point to the validity and sensitivity of parent reports on real-world behaviors as a means to measure EF in CWS. Nonetheless, it is still unknown whether EF is malleable in CWS, and if so, what the opportunities are for remediation, such as targeted training and/or authentic activities that support EF development [cf. musical training, mindfulness; 252]. Managing two languages concurrently is thought to enhance EF components in typically developing bilinguals [for a review see 253]. Bilinguals show higher performance on tasks requiring WM, attention, and inhibitory control compared to their monolingual peers [253-255; however, see 256]. For example, 4-year-old bilinguals outperform their age-matched monolingual peers in their capacity to focus and switch attention, demonstrating accuracy equivalent to 5-year-old monolinguals during the Dimensional Change Card Sort [257]. It is unclear if this bilingual enhancement is affected in stuttering. Understanding how stuttering interacts with bilingualism could offer insight into the development of EF in CWS.
Current findings support the presence of subtypes in CWS based on EF and comorbidity, i.e., CWS with stronger EF and without comorbid conditions, and CWS with weaker EF and with comorbidity. These subtypes may have relevance for chronicity. Some children who are experiencing developmental delays, including in cognitive development, may experience periods of stuttering until EF deficits resolve with the maturation of the cognitive system. It is possible that stronger EF, albeit weaker than in CWNS, and the absence of comorbidity in CWS reflect more subtle deficits which could eventually resolve or attenuate. If so, the probability of recovery would be higher for this group. Conversely, CWS with weaker EF and comorbid conditions may be a subgroup with greater developmental vulnerability and increased risk for chronicity. Weaker EF and comorbidity may simply signal a higher degree of impairment, surpassing the ability of the cognitive system to compensate. More fine-grained research is needed to disentangle the relationship between EF, comorbidity, and stuttering prognosis. Present findings also have implications for clinical practice. Deficits in EF and high rates of comorbidity in CWS underscore the need for multi-dimensional, multi-domain approaches to the diagnosis and treatment of stuttering. Such an approach would better address the complexity of stuttering and the variability across individuals and sexes across a wide spectrum of symptoms, leading to improved outcomes.
Risk Assessment Methodology to Support Shutdown Plant Decision
Nowadays, one of the most important safety decisions in the Brazilian Oil and Gas industry is whether it is necessary to shut down a plant because a specific failure or required maintenance in the protection system influences the risk level. Most of the time, experienced operators make decisions based on their background instead of carrying out a risk analysis to support the decision. Therefore, in many cases, refinery plants operate at a catastrophic risk level due to subjective decisions. In order to improve operator decisions, a specific methodology was established to apply risk assessment using PRA (Preliminary Risk Analysis), LOPA (Layer of Protection Analysis), and FTA (Fault Tree Analysis) in order to check the risk level or the availability of the layers of protection. As the first step, the Preliminary Risk Analysis is carried out in order to qualify the risk and, mainly, to define the consequence severity. The second step is to carry out the LOPA in order to find the failure probability with all layers of protection in place and without the layer of protection which requires maintenance or has failed. In addition, when it is necessary to check the availability of contingency systems, an FTA is carried out. In the first case, it is possible to substitute the unavailable layer of protection with another in order to keep the risk at an acceptable level. In the second case, it is necessary to check whether the contingency system is available and to assess whether the consequence gets worse or stays at the same level. In both cases, the final risk is assessed and compared with the previous one defined in the PRA. If the risk is unacceptable, the final decision is to shut down the plant. A refinery case study is presented as an instance of such a methodology.
Introduction
Nowadays, despite many risk assessment methodologies being applied in enterprises' projects in order to mitigate risk, in many cases the layers of protection fail during operation, and many plants in Brazil work at an intolerable risk level for a period of time.
Mostly, the decision not to shut down plants is based on employees' background. In other cases, as in the Brazilian Oil and Gas industry, a risk analysis tool such as PRA (Preliminary Risk Analysis) is applied in order to check qualitatively whether the risk level is catastrophic and whether the implemented recommendations are enough to control and maintain the risk at acceptable levels.
The motivation for this research was to develop the best risk analysis approach to support the plant shutdown decision whenever a layer of protection is taken out of service due to a maintenance action or even a failure. Consequently, a risk analysis must be carried out to check whether the risk stays below the intolerable level in such circumstances. In fact, we concluded that PRA alone is not enough to assess the risk level whenever a layer of protection is out of service. Therefore, the best approach is a combination of risk analysis methods, namely PRA, LOPA, and FTA, as will be demonstrated.
In the nuclear industry, all unsafe conditions which bring unacceptable risk to the systems are known, and so are the conditions under which the nuclear plant is not allowed to run. Actually, depending on the situation, the plant stops by itself in a safe condition due to the BPCS (Basic Process Control System).
Unfortunately, that is not the case in the Brazilian Oil and Gas industry, which depends on specialists' analysis, and this study proposes a methodology to assess risk in the case of failures of layers of protection or required maintenance. Indeed, the main objective is to support the decision to shut down plants to avoid operating at an unacceptable risk level. Basically, the main steps of the methodology are: 1) carry out a PRA of the system with the layer of protection to define the risk qualitatively; 2) carry out a LOPA analysis to find out the probability of the accident without the layer of protection; 3) for contingency systems, carry out an FTA analysis to find out the availability without the component in question; 4) check whether the risk without the layer of protection or contingency system is acceptable; 5) if the risk is unacceptable, propose preventive actions, a new layer of protection, or a new contingency system to maintain the risk at an acceptable level; 6) if it is not possible to maintain the risk at an acceptable level, shut down the plant.
The next sections provide explanations of the risk analysis methodologies (PRA, LOPA, and FTA) and of the risk assessment methodology to support the plant shutdown decision.
Preliminary Risk Analysis
PRA comes from a military industry application, as a review technique applied to check missile launch systems. In that case, 4 of 72 intercontinental Atlas missiles had been destroyed at high cost.
Nowadays, PRA is applied in many industries at the project conception stage [1], but it can also be applied to operational activities along the enterprise lifecycle.
So, no matter the application phase, the main objective is to support decisions in order to avoid accidents by eliminating unsafe conditions.
In most cases in Brazil, that kind of analysis has a specific focus on environmental or safety issues. This is good in terms of faster problem solving in operational areas, but in the case of projects, such single-focus applications may increase the project cost associated with too many recommendations or, in some cases, lead to fewer preventive actions than necessary, permitting unsafe conditions and environmental impacts.
In order to save money and time and to integrate problem solutions, the integrated PRA was implemented in the Brazilian Oil industry. Early applications focused on safety or environmental issues separately, but nowadays it is used to assess hazards integrating social, safety, and environmental issues [2]. Figure 1 shows a PRA matrix as an instance.
Based on the risk matrix, it is possible to assess each hazard in the PRA worksheet, which comprises cause, consequence, risk, detection, and recommendation columns.
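As an illustration of how such a PRA worksheet row could be assessed programmatically, the sketch below encodes a qualitative risk matrix as a lookup table. The category labels and cell assignments are hypothetical (the actual matrix is the one in Figure 1); only the combination of severity category III with frequency category A being rated "moderate" is taken from the furnace example discussed later.

```python
# Minimal sketch of a qualitative PRA risk-matrix lookup.
# Frequency categories A (remote) to C (frequent) and severity
# categories I (minor) to IV (catastrophic); cell ratings are
# hypothetical illustration values, not the matrix of Figure 1.
RISK_MATRIX = {
    ("A", "I"): "tolerable",     ("A", "II"): "tolerable",
    ("A", "III"): "moderate",    ("A", "IV"): "moderate",
    ("B", "I"): "tolerable",     ("B", "II"): "moderate",
    ("B", "III"): "moderate",    ("B", "IV"): "intolerable",
    ("C", "I"): "moderate",      ("C", "II"): "intolerable",
    ("C", "III"): "intolerable", ("C", "IV"): "intolerable",
}

def assess_hazard(cause, consequence, frequency_cat, severity_cat):
    """Return one PRA worksheet row: cause, consequence, risk, recommendation."""
    risk = RISK_MATRIX[(frequency_cat, severity_cat)]
    recommendation = ("no action required" if risk == "tolerable"
                      else "add layer of protection / review design")
    return {"cause": cause, "consequence": consequence,
            "risk": risk, "recommendation": recommendation}

row = assess_hazard("excess of gas in furnace", "furnace explosion", "A", "III")
print(row["risk"])  # -> "moderate"
```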
The preliminary risk analysis is one of the most widespread risk analysis tools in the Brazilian Oil and Gas industry because of some advantages: simple application and understanding; it supplies direct information about the hazards involved in the process along the enterprise lifecycle to other studies; and it supports other qualitative and quantitative analyses with qualitative information.
Despite the advantages, such a risk analysis methodology requires some care to avoid a superficial risk understanding and an underestimation of the risks involved in the process. Therefore, the PRA drawbacks are: it may simplify risk understanding and underestimate risk, and it may overlook some of the hazards involved. Even with such drawbacks, it is a very good methodology to assess the risk involved in a process and to quantify risk qualitatively, but in most cases, due to risk criticality, it is necessary to implement other risk analysis methodologies, such as effect and consequence analysis, to investigate the hazard consequences.
Layer of Protection Analysis
The risk analysis tool called LOPA (Layer of Protection Analysis) is a special form of event tree analysis that is optimized for determining the frequency of an unwanted event given the layers of protection in place [3]. As such, it is used to define the accident probability in terms of the initiating event and the failures of all layers of protection.
In the late 1990s, international standards for control systems in computer-controlled facilities emerged [4]. The task of complying with these standards in a consistent manner led to the introduction of Layer of Protection Analysis (LOPA) for the determination of Safety Integrity Levels (SILs) for computer-operated production facilities. This was conceived and promoted by the Center for Chemical Process Safety (CCPS).
A layer of protection is any equipment which avoids an accident by itself, that is, without human intervention [5]. Some authors consider human action a layer of protection, and in many cases it does avoid accidents, but in the most serious cases, when a catastrophic event could be triggered, more than one layer of protection beyond human intervention is designed in to guarantee an acceptable risk level.
The remarkable point when a safety system is designed is how reliable each layer of protection is required to be and how many of them are necessary. In practice, engineers design the system based on their background in safety systems. Therefore, a specific analysis like LOPA is required to check whether the risk is acceptable or not. In this case, LOPA provides the frequency or probability of the accident, and combining it with the consequences in terms of deaths results in a risk value. Thus, it is possible to check whether the risk is tolerable or not based on the risk criteria.
If it is not, it is necessary to improve the design and propose more layers of protection or increase the reliability of the existing ones. In order to build up the safety system, a SIL (Safety Integrity Level) analysis is carried out based on specific international procedures (IEC 61508). That methodology consists of defining how reliable the SIF (Safety Instrumented Function) must be to reduce the risk to the acceptable region.
Indeed, no matter which methodology is applied, the main idea is to certify that the system is reliable enough to be acceptable in terms of risk [6].
According to the LOPA methodology, it is required to know the layer of protection probabilities and the frequency (or probability) of the initiating event. In fact, for the accident to occur, the initiating event must take place and all layers of protection must fail. Its frequency is obtained by multiplying the initiating event frequency by all the layer probabilities, as shown in the equation below (IE = initiating event, Ln = layer of protection n, F = frequency, P = probability):

F(accident) = F(IE) × P(L1) × P(L2) × ... × P(Ln)
It is also possible to define the probability of the accident occurring when the triggering event probability is defined, as shown in the equation below:

P(accident) = P(IE) × P(L1) × P(L2) × ... × P(Ln)
Afterwards, the frequency or probability of the accident is combined with the consequences, resulting in a risk value that is compared with the acceptable risk values.
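A minimal sketch of this calculation is given below. The initiating event frequency, the layer PFDs (probabilities of failure on demand), and the expected number of deaths are hypothetical illustration values, not taken from the paper's case data.

```python
# Minimal sketch of the LOPA calculation described above.

def lopa_accident_frequency(f_initiating_event, layer_pfds):
    """F(accident) = F(IE) x P(L1) x ... x P(Ln), per year."""
    f = f_initiating_event
    for pfd in layer_pfds:
        f *= pfd
    return f

def risk_per_year(f_accident, expected_deaths):
    """Combine accident frequency with consequence (deaths) into risk."""
    return f_accident * expected_deaths

f_ie = 1.0           # initiating events per year (hypothetical)
pfds = [0.1, 0.01]   # PFD of each layer of protection (hypothetical)
f_acc = lopa_accident_frequency(f_ie, pfds)      # 1e-3 per year
print(risk_per_year(f_acc, expected_deaths=10))  # 1e-2 fatalities/year
```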
For clarity, some additional comments about layers of protection are in order. There are several types of layers of protection, such as: operator intervention; basic process control system (BPCS); mechanical equipment integrity; physical relief devices; and external risk reduction facilities.
Operator intervention is one of the most doubtful layers of protection and is not considered an effective layer of protection by all analysts.
Notwithstanding that fact, an employee who is prepared to act preventively may avoid an accident if the intervention is carried out correctly and on time. In many cases, the operator must see visual alarms, hear audible ones, or read some measurement on the operational equipment controls. Such alarms are not considered a layer of protection because they do not avoid the accident by themselves, but they are designed to support the employee's decision and alert them to unwanted, unsafe process conditions.
Even with alarms and layers of protection in place, a low human error probability is necessary, and to achieve that it is necessary to train employees for emergency situations.
Fault Tree Analysis
In the 1960s, the US Air Force applied FTA (Fault Tree Analysis) for the first time in studies of a launch control system; the Boeing Company then recognized the value of this tool and led a team to apply the methodology in commercial aircraft design projects. In 1966, Boeing developed a simulation program called BACSIM for the evaluation of multiphase Fault Tree Analysis [7].
The FTA methodology, like LOPA and ETA (Event Tree Analysis), is an appropriate approach to assess combined events which trigger an unwanted top event and may cause an accident. Despite the similar characteristics, Fault Tree Analysis makes it possible to assess many combinations of events which trigger the top event's causes. In some cases, a fault tree is the input for an event probability calculation in a LOPA or ETA diagram. This approach is known as hybrid risk analysis and requires a high level of information about the hazard assessed.
The main objective of FTA is to define the probability of the top event occurring and to show the system's vulnerabilities through the minimal cut sets. In other words, it assesses the event combinations which trigger the top event. The mathematical concept applied is Boolean algebra, which considers whether two or more events must occur simultaneously or not to trigger the top event. In the first case, if the top event depends on the occurrence of several events to be triggered, its probability results from the multiplication of the event probabilities. In the second case, if only one event is enough to trigger the top event, the top event probability results from the sum of the event probabilities minus their joint probability. Those two cases are represented in the equations below:

AND gate: P(top) = P(E1) × P(E2)
OR gate: P(top) = P(E1) + P(E2) − P(E1) × P(E2)
For these two cases, there are basically two gates which represent each situation, "AND" in the first case and "OR" in the second, as shown in Figure 2; furthermore, there are top event and basic event symbols.
Those symbols are the simplest representations; there are other gate types more appropriate for other kinds of combinations among events, for instance at least two out of five events (K/N voting). In fact, FTA is the inverse of the Reliability Block Diagram methodology, which places blocks in series or parallel configurations depending on their impact on the system assessed. The FTA is built up from the top to the base, considering the event combinations, depending on the information available. In some cases, specialists' opinions about event probabilities are used to complete the FTA, or the probabilities are calculated from historical data. An example of an FTA is a tank explosion comprising six basic events and gates, as represented in Figure 3.
The top event occurs if events 0 and 1 occur, and events 2 and 3 occur, and also event 4 or 5 occurs. In other words, the BPCS must fail and the operator must not close the manual valve; in addition, the flow of product to the tank must increase and the retention valve must fail. Moreover, the bypass valve or the relief valve has to fail.
As mentioned before, event probability values are necessary, and they may be discrete values or continuous probabilities. In the case of continuous probabilities, a PFD is assigned to each event; for exponentially distributed failure times (as used later in the case study), it is given by the equation below:

PFD(t) = 1 − e^(−λt)
By simulating the FTA in this way, after one year there is a 3.1% probability that the tank explodes. This permits managing the risk, that is, acting on the basic events in order to avoid the top event. A remarkable point to stress is that no event is allowed to be repeated in the FTA; in some cases, this makes the fault tree configuration complicated. Unfortunately, in the Brazilian Oil and Gas industry, in most cases, the FTA is not included in preventive analysis like other risk analysis tools; on the other hand, it may be used to supply information about event probabilities, as will be shown in Section 6.
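A minimal sketch of this fault tree evaluation is given below, using the gate logic stated above. The failure rates λ of the six basic events are hypothetical illustration values, so the resulting top-event probability is not meant to reproduce the 3.1% figure from the paper.

```python
import math

# Minimal sketch of the tank-explosion fault tree of Figure 3, using
# the stated gate logic: TOP = (E0 AND E1) AND (E2 AND E3) AND (E4 OR E5).
# With exponential failure times, PFD(t) = 1 - exp(-lambda * t).

def pfd(rate_per_year, t_years):
    return 1.0 - math.exp(-rate_per_year * t_years)

def and_gate(*probs):
    out = 1.0
    for p in probs:
        out *= p
    return out

def or_gate(p1, p2):
    # Inclusion-exclusion; reduces to p1 + p2 for rare events.
    return p1 + p2 - p1 * p2

t = 1.0                                   # mission time: one year
rates = [0.5, 0.8, 0.7, 0.6, 0.3, 0.2]    # lambda of E0..E5 (hypothetical)
p = [pfd(r, t) for r in rates]
p_top = and_gate(p[0], p[1], p[2], p[3], or_gate(p[4], p[5]))
print(f"P(tank explosion after 1 year) = {p_top:.3f}")
```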
Preventive Risk Analysis Method
Nowadays, the usual methodology applied to assess risk in case of a layer of protection shutdown in the Brazilian Oil and Gas industry is PRA. As mentioned before, it is a good risk analysis tool because employees are familiar with it and it is easy to implement. On the other hand, it is not possible to know quantitatively whether the risk is under control or not after the preventive actions are implemented.
The second remarkable point is that in some cases the consequences are clear and in others they are not, but even when unclear, it is possible to check information in historical accident data or risk analysis reports. The real problem is estimating the probability of the unwanted event, which depends on the initiating event combined with layer of protection failures. Because of that, in most cases the analysts are conservative in their decisions and overestimate the risk. In this case, the plant is shut down to avoid a catastrophic accident when in fact it would not be necessary. The opposite also happens, and the system operates at an unacceptable risk level while a layer of protection is unavailable.
In order to calculate the probability of the unwanted event occurring with and without a layer of protection, the layer of protection (LOPA) methodology is proposed.
With the probability of the unwanted event, it is possible to find the risk level and check whether it is acceptable or not. The proposed preventive methodology to support the decision when a layer of protection is unavailable due to maintenance or failure is based on the following steps: 1) carry out a PRA of the system with the layer of protection to define the risk qualitatively; 2) carry out a LOPA analysis to find out the probability of the accident without the layer of protection; 3) check whether the risk without the layer of protection is acceptable; 4) if the risk is unacceptable, propose preventive actions or a new layer of protection to reduce the risk to the acceptable region; 5) if it is not possible to reduce the risk to an acceptable condition, shut down the plant.
Based on those five steps, it is possible to take a better decision when a layer of protection fails or preventive maintenance is needed on a layer of protection. Figure 4 shows the risk analysis methodology to support the decision whether to shut down the plant or not; a minimal sketch of this decision flow is given below.
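The sketch below wires the LOPA calculation into the five-step decision flow. The tolerable individual risk threshold, the input frequencies, and the PFD values are placeholders for illustration; in practice they would come from the PRA, the LOPA study, and the company's risk criteria.

```python
# Minimal sketch of the five-step decision flow of Figure 4.

TOLERABLE_INDIVIDUAL_RISK = 1e-4  # fatalities/year (assumed criterion)

def shutdown_decision(f_ie, remaining_layer_pfds, expected_deaths,
                      extra_layer_pfd=None):
    # Step 2: LOPA frequency without the unavailable layer.
    f_acc = f_ie
    for p in remaining_layer_pfds:
        f_acc *= p
    # Step 3: check the resulting risk against the criterion.
    risk = f_acc * expected_deaths
    if risk <= TOLERABLE_INDIVIDUAL_RISK:
        return "continue operation"
    # Step 4: try a compensating layer of protection.
    if extra_layer_pfd is not None and \
            risk * extra_layer_pfd <= TOLERABLE_INDIVIDUAL_RISK:
        return "continue with additional layer of protection"
    # Step 5: the risk cannot be kept acceptable.
    return "shut down plant"

print(shutdown_decision(f_ie=1.0, remaining_layer_pfds=[0.1, 0.01],
                        expected_deaths=10, extra_layer_pfd=0.01))
```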
Actually, there are two approaches to compare the risk without layers of protection with the tolerable risk. The first is to calculate the frequency of the accident without the layers of protection and combine it with the consequence based on the risk matrix. The second is to compare the final risk with the individual risk criterion (ALARP) in cases where the expected number of deaths is estimated by a consequence and effect analysis.

In the first case, the first step is to carry out the PRA based on the qualitative risk matrix and define the risk. Then, the probability of the unwanted event without the layer of protection is determined using the LOPA methodology, and the new risk is assessed on the risk matrix. In the second case, the frequency determined in the LOPA is multiplied by the expected number of deaths estimated in the consequence and effect analysis, and the resulting risk is compared with the tolerable individual risk values.

The remarkable point is that whenever decisions are taken based on the risk matrix alone, it is possible to consider the risk tolerable and not shut down the plant; when a LOPA is carried out, the frequency is calculated rather than estimated qualitatively, so the risk takes a more realistic value.

A simple example: when an excess of gas occurs in a furnace, there is an unsafe condition, and to avoid a furnace explosion, layers of protection such as human action (P(f1) = 0.1), a manual valve (P(f2) = 0.01), and the BPCS (P(f3)) are triggered. The incident (excess of gas in the furnace) has a given frequency per year. Thus, the frequency of explosion is

f(furnace explosion) = f(excess of gas) × P(f1) × P(f2) × P(f3)

If such an accident happens, at least ten deaths are expected in the plant, so based on the risk matrix in Figure 5 the risk is moderate (severity category III and frequency category A). Using the second criterion with an explosion frequency of 1 × 10^-4 per year, Individual Risk = 10 × 1 × 10^-4 = 1 × 10^-3, which is in the unacceptable region, as shown in Figure 6. This shows that more than one risk criterion must be taken into account whenever possible in order to take a more reliable decision.

Whenever the risk is evaluated and reaches an acceptable level based on the risk matrix (Figure 5) or the individual risk criterion (Figure 6), the activity can be carried out, and it is not necessary to add a layer of protection or to shut down the plant and stop production. The main objective of the methodology is to prevent the plant or operational activities from running at an unacceptable risk level.

In addition to the layers of protection, the contingency systems also influence the risk level, because if those systems are under preventive maintenance or fail, the consequences of an accident would be worse than expected. In other words, the consequence without the contingency system worsens the risk level. Therefore, whenever there is maintenance or a shutdown of a contingency system (sprinklers, fire system pumps, and chemical showers), it is necessary to assess whether the worsened consequence changes the risk level. Figure 7 summarizes the steps used to assess risk in the case of maintenance or failure of a contingency system.
Similar to the layer of protection case, whenever the risk is evaluated and reaches an acceptable level based on the risk matrix (Figure 5) or the individual risk criterion (Figure 6), the activity can be carried out, and it is not necessary to add an additional layer of protection (contingency) or to shut down the plant and stop production. Such methodology has the main objective of avoiding the plant or operational activities running at an unacceptable risk level.
Fire Protection Pumps System Case Study
An example of the application of this methodology was the preventive maintenance of a fire pump system in a refinery.
That contingency system provides water to combat fire; if it fails or is in maintenance when a fire occurs, the consequence will be worse. In other words, based on the matrix in Figure 1, the consequence goes from critical to catastrophic. Aware of this fact, the maintenance team keeps the system available during the maintenance service and takes out only one pump at a time. The fire protection pump system comprises five pumps; if the electric system fails, three pumps stop. At least one pump is required to keep the fire pump system available. In order to determine the fire pump system availability and the probability of failure without one pump, a Dynamic Fault Tree Analysis was applied and modeled as shown in Figure 8.
Dynamic Fault Tree Analysis is a quantitative risk methodology applied to combinations of events that cause an unwanted top event, which in this case is the fire pump system being unavailable. For the top event to occur, a failure of the electric energy supply and failures of the two other pumps (D and E) are necessary; pump E is an active redundancy of pump D. The pump failure rate is 0.5 per year and the electric system failure rate is 1 per year. In its static form, the Dynamic Fault Tree probability of failure is described by the equation below (FES = failure of the electric supply; PD, PE = failures of pumps D and E):

P(Fire Pump System Out) = P(FES) × P(PD) × P(PE)

Considering 2 hours to reestablish the electric energy system after a shutdown and 8 hours of repair for each pump, the simulation in Figure 9 shows that the system has 100% availability for up to 5 years despite pump failures.
If one of the pumps (pump D) is under maintenance service and is out for 1 h (the maintenance service duration) in the 11th month of the 4th year, it is necessary to check the fire pump system availability and the probability of failure. Figure 10 represents the fire pump system with pump D in maintenance.
In this case, exponential functions were used to represent the failure PDFs over time for both pumps, as in the previous simulation, and also for the electric system. Thus, the Dynamic Fault Tree probability of failure is described by the equation below:

P(Fire Pump System Out) = P(FES) × P(PE)
In terms of the system failure probability, the situation is practically the same with or without pump D. Since the maintenance action on pump D is performed in the 11th month of the 4th year and takes only one hour, it also makes no difference to availability: the system keeps 100% availability just as it does with pump D, as shown in Figure 11, and if an accident occurs, the consequence will not be worse than expected.
The final conclusion is that maintenance on pump D is allowed, because the whole fire pump system retains 100% availability during the 1 h maintenance service and the probability of failure is similar with or without pump D (0.06).
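As a rough cross-check of this result, the sketch below runs a simplified Monte Carlo availability simulation of the same system, as a stand-in for the Dynamic Fault Tree tool used by the authors; pump E is treated as an always-running redundancy rather than a standby spare. The rates and repair times are those given in the text. Consistent with Figure 9, the estimated probability that the system is ever unavailable within 5 years comes out essentially zero.

```python
import random

# Monte Carlo sketch of the fire-pump system: the system is "out" only
# while the electric supply AND pump D AND pump E are all down at once.
# Rates: 0.5/yr per pump, 1/yr electric; repairs: 8 h pump, 2 h electric.

HOURS_PER_YEAR = 8760.0

def down_intervals(rate_per_year, repair_h, horizon_h, rng):
    """Sample (start, end) down-time intervals of one component."""
    lam = rate_per_year / HOURS_PER_YEAR
    t, out = 0.0, []
    while True:
        t += rng.expovariate(lam)          # time to next failure
        if t >= horizon_h:
            return out
        out.append((t, min(t + repair_h, horizon_h)))
        t += repair_h                       # repaired, back in service

def ever_all_down(a, b, c):
    """True if the three components are ever down simultaneously."""
    for (s1, e1) in a:
        for (s2, e2) in b:
            for (s3, e3) in c:
                if max(s1, s2, s3) < min(e1, e2, e3):
                    return True
    return False

rng = random.Random(42)
horizon = 5 * HOURS_PER_YEAR
n, hits = 20000, 0
for _ in range(n):
    elec = down_intervals(1.0, 2.0, horizon, rng)
    pump_d = down_intervals(0.5, 8.0, horizon, rng)
    pump_e = down_intervals(0.5, 8.0, horizon, rng)
    if ever_all_down(elec, pump_d, pump_e):
        hits += 1
print(f"P(system ever unavailable in 5 years) ~ {hits / n:.4f}")
```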
Conclusions
The proposed preventive Risk Analysis Methodology will provide information for employees to make better decisions with respect to unsafe conditions when a layer of protection or contingency system fails or is out of operation due to a maintenance service. The huge challenge nowadays in the Brazilian Oil and Gas industry is to achieve safe behavior and to have employees internalize preventive safety values when applying such a methodology. Despite some difficulties in the early application cases, given that this methodology and risk analysis tools like PRA and LOPA are not yet spread throughout the whole workforce, most employees recognize that it is a feasible methodology and that it helps to keep the process under control. Whenever such a methodology is applied, it is required to formalize the analysis using forms and reports in order to supply future analyses, since employees who work on shifts often do not have the conditions to carry out a complete risk analysis. The first version of the methodology is being applied, and improvements will be carried out over time, mostly linked to quantitative methods and to using the individual risk criterion more than the risk matrix.
It is expected that such analyses will be carried out in the following years, providing historical data on the situations in which operating without a layer of protection is allowed.
The exponential PDF applied to the fire pumps was used just to demonstrate the application of the preventive Risk Assessment Methodology in the case study. In a real case, a life cycle analysis is carried out, and the most common PDFs to represent pump failure modes are the normal (bearing, shaft, and seal) or Gumbel (axle and impellers) distributions.
Figure 1. PRA matrix (author: Calixto, 2007).

Figure 3. Tank explosion FTA. E3 = retention valve at tank inlet; E4 = bypass valve failure; E5 = relief valve failure.

Figure 9. Top event failure rate.

Figure 10. The fire pump system without pump D.
Human–Vehicle Integration in the Code of Practice for Automated Driving
The advancement of SAE Level 3 automated driving systems requires best practices to guide the development process. In the past, the Code of Practice for the Design and Evaluation of ADAS served this role for SAE Level 1 and 2 systems. The challenges of Level 3 automation make it necessary to create a new Code of Practice for automated driving (CoP-AD) as part of the public-funded European project L3Pilot. It provides the developer with a comprehensive guideline on how to design and test automated driving functions, with a focus on highway driving and parking. A variety of areas such as Functional Safety, Cybersecurity, Ethics, and finally the Human–Vehicle Integration are part of it. This paper focuses on the latter, the Human Factors aspects addressed in the CoP-AD. The process of gathering the topics for this category is outlined in the body of the paper. Thorough literature reviews and workshops were part of it. A summary is given on the draft content of the CoP-AD Human–Vehicle Integration topics. This includes general Human Factors related guidelines as well as Mode Awareness, Trust, and Misuse. Driver Monitoring is highlighted as well, together with the topic of Controllability and the execution of Customer Clinics. Furthermore, the Training and Variability of Users is included. Finally, the application of the CoP-AD in the development process for Human-Vehicle Integration is illustrated.
Introduction
The European research project L3Pilot focuses on different activities with regard to automated driving. The project is split into seven subprojects, and the main objective of L3Pilot subproject 2 is to define a Code of Best Practice for Automated Driving (CoP-AD). The CoP-AD shall provide comprehensive guidelines for supporting the automotive industry and relevant stakeholders in the development of automated driving technology. Thus, the CoP is meant to provide best practice guidance that can be used by designers and engineers throughout the lifecycle of automated driving systems. The guidelines are derived from knowledge gained in the industry as well as best practices collected on this topic.
Previously, for systems up to and including SAE level 2 [1], the Code of Practice for advanced driver assistance systems, derived from the Response project [2], served as a guideline for the development of such functions. With the advent of SAE level 3 systems and above, its application is no longer appropriate. Nonetheless, the existing code of practice was analyzed in order to apply the lessons learnt and to make use of the aspects which remain appropriate for SAE level 3.
In order to define the scope of the document, a framework for the Code of Practice for Automated Driving was defined at the beginning of this project. It serves as a baseline for the work to be done for creating the CoP-AD. In the second section of this paper, the development process is outlined, culminating in the definition of the topics to be addressed, which were classified into four different categories. It also includes the applicable development phases and furthermore, the geographical regions, operational design domains, and SAE levels affected. The template on how to phrase and execute the questions that will form the checklist of aspects to consider when developing an Automated Driving Function (ADF) is also outlined and explained.
The third section shows the draft content of the Human-Vehicle Integration (HVI) category. This is one of the four main categories in the CoP-AD. It focuses on the topics related to the interaction between the vehicle and the user. This ranges across a broad area covering human factors, user experience, usability, and cognitive ergonomics. The section is divided into the areas of Guidelines for HVI, Mode Awareness, Trust and Misuse, Driver Monitoring, Controllability and Customer Clinics, and finally Driver Training and Variability of Users. The topics are explained, and examples are given on how to apply them as part of the CoP-AD.
In the final section, some general conclusions have been drawn, and further conclusions are highlighted with a focus on the HVI category. This paper is based on the L3Pilot deliverable D2.2 [3], which is a draft of the CoP-AD used to gather feedback from external partners outside of the L3Pilot consortium.
Development Process
At the beginning of the L3Pilot project, a survey was distributed to all L3Pilot partners in order to collect the requirements of all key stakeholders for the CoP-AD. This includes experts from both industry and research institutes. The relevant topics to be covered in best practices were derived using this feedback. The topics collected as part of the survey were selected based on predefined criteria during a subsequent workshop. The criteria for inclusion of a topic are listed in Table 1.

Table 1. Criteria for inclusion of topics into the Code of Practice for automated driving (CoP-AD): the topic or process poses a common challenge in the development process that requires cooperation; a wrongly applied approach for the topic or process would lead to serious consequences (e.g., malfunctions in certain traffic situations leading to non-release of the function); a frequent misapplication of an approach for the topic or process is highly likely; the topic or process has already been identified as relevant by others; and the topic or process can be described in a general way that does not lead to unreasonable limitations in the development process (company independent). An optional criterion is that the topic or process is of relevance for L3Pilot prototype vehicles and can be evaluated in this project.
With regard to the actual process of applying the CoP-AD, the decision was made to use the existing Code of Practice for Advanced Driver Assistance Systems as a baseline. Figure 1 shows the selected development phases for the CoP-AD. Compared to the Code of Practice for Advanced Driver Assistance Systems, the number of phases was reduced from six to four during the actual development. The second and fourth phase originally consisted of two separate stages, but these were condensed into the Concept Selection Phase and the Validation and Verification Phase for greater simplicity. An additional phase for the time post start of production was added to cover the entire lifecycle of the ADF. The conceptual stage consists of the Definition Phase and Concept Selection Phase, while the Design Phase and the Validation and Verification Phase constitute the series development stage. During the Definition Phase, the basic requirements are defined and based on this, the best concept is chosen in the Concept Selection Phase. The Design Phase requires the detailed design of the system. Then, it is validated and verified in the final phase before the start of production. Post start of production, further data can be gathered and improvements can be applied. This process is not necessarily linear; iterative improvements with repetitions of important steps might be possible. The process has been designed to remain abstract on purpose, so that the CoP-AD can be applied to the many different development processes in place in the industry at various companies. In order to clearly summarize the topics that were collected, a number of categories were defined to cluster them. Table 2 shows the categories finally chosen with the pertaining topics. They are based on extensive expert discussions, clustering all the available topics in a meaningful way. The last row on Human-Vehicle Integration is the key focus of this paper. The first category is quite generic and focuses on overall guidelines and recommendations, such as a minimal risk manoeuvre. The Operational Design Domain (ODD) on the Vehicle Level offers a description of the function and scenarios at the level of the vehicle. The category ODD on the Traffic System Level, including Behavioral Design, offers a description of the function on the level of the overall environment and a description of the behavior of other road users. Safeguarding Automation is about how to ensure a safe operation of the function, primarily the functional safety, but also the cybersecurity and data privacy aspects. Human-Vehicle Integration is the interaction between the driver and the vehicle's displays and control elements.
The topics within each of the categories were distributed along the development process phases in a workshop. In order to better address the topics derived from previously held expert sessions, a thorough literature review was done to back up the topics with research results and existing best practices. Based on this, the questions for the CoP-AD checklist were phrased. These questions underwent a rigorous iterative improvement process, improving overall quality and reducing the overall number of available questions to the most important ones. This enabled the deliverable D2.2 [3] to be written, which is a draft used to gather feedback from external partners outside the L3Pilot consortium. This will culminate in the deliverable D2.3, the final CoP-AD, to be presented in 2021.
In order to apply the CoP-AD appropriately, a template was defined for all questions; this can be seen in Table 3. The reference number for each question can be found in the top left cell of the table, and the development phases associated with the question have been marked in the top right. In the body of the table, the main question is on the left, supported where applicable by sub-questions on the right. Only the main question needs to be answered directly with yes or no. Ideally, independent evaluators (e.g., individuals from other departments or external sources such as research institutes) who have formal training or experience in the subject matter of the topics are also involved in the application of the CoP-AD. For example, for the Human-Vehicle Integration topic, the evaluator should have experience in human factors, usability engineering, or cognitive ergonomics.
Following the CoP means that all of the questions should be answered positively, or that the issue raised by an item has been solved in another way. The sub-questions serve as an elaboration. The main question is phrased in a way that an answer of yes always means that the question has been addressed sufficiently. However, even if a no is given as an answer, this may still be appropriate, as there might be good reasons why something could not be done or answered, or is simply not applicable in a given case, as long as the underlying problem is solved and documented. For some of the items, accepted pass/fail criteria are available (such as the number of participants that need to pass a controllability confirmation test); others rely on norms (e.g., legibility of displays) or on expert assessments if these kinds of thresholds are not available. In a further step, the questions may be transferred to an Excel file or another software tool for easy application and editing; a minimal sketch of such a representation is given below.
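As an illustration of such a transfer, the sketch below shows one possible in-memory representation of a checklist question following the template of Table 3. The field names and the example item are hypothetical, not taken from the L3Pilot deliverable.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ChecklistItem:
    reference: str                        # e.g. "HVI-07" (hypothetical ID)
    phases: List[str]                     # development phases it applies to
    main_question: str                    # answered strictly yes/no
    sub_questions: List[str] = field(default_factory=list)
    answer: Optional[bool] = None
    justification: str = ""               # documents why a "no" is acceptable

    def is_resolved(self) -> bool:
        """Passed if answered yes, or no with a documented justification."""
        if self.answer is True:
            return True
        return self.answer is False and bool(self.justification.strip())

item = ChecklistItem(
    reference="HVI-07",
    phases=["Design", "Validation and Verification"],
    main_question="Are unintentional (de)activations of the ADF prevented?",
    sub_questions=["Is a deliberate action required to hand over control?"],
)
item.answer = True
print(item.is_resolved())  # -> True
```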
The CoP-AD was scoped to cover motorway and parking scenarios for SAE level 3 and level 4 functions. Although only EU markets are currently in scope, it is assumed that the CoP-AD may also be applied to non-EU regions, as well as urban or rural traffic scenarios, and even driverless robot taxis. This needs to be investigated in further research.
In the third section of this paper, the HVI category is explained in detail. This also includes examples of the questions asked.
Draft Content Human-Vehicle Integration
The HVI category comprises all factors related to the interaction between the vehicle and the user. This ranges across a broad area covering human factors, user experience, usability, and cognitive ergonomics. The introduction of automated driving systems that allow fallback-ready users to disengage from driving and engage in non-driving-related tasks introduces a range of potential human factors problems that must be considered in the development process. First, the transitions from automated driving to manual driving must be supported so that users are capable of taking over the driving task in a safe way in case of system limits and malfunctions. Furthermore, the possibility of different automated driving modes being available within the same vehicle, each requiring different levels of responsibility from the user, creates the need to communicate the active driving mode unambiguously. Thus, the design of the Human-Machine Interface (HMI) is a central element in the design process to ensure proper mode awareness and controllable transitions to manual driving. Secondly, the "availability" of the driver to react to requests to intervene needs to be ensured, which is mainly a function of the non-driving-related tasks carried out during the automated ride. Thus, the design of the ADF should take into account the foreseeable non-driving-related tasks that are likely to be carried out by users during the automated ride. Thirdly, whether the ADF will be used in accordance with the intended usage, or whether users will misuse it (possibly because of overtrust in the ADF), will depend on the training and information users receive.
Display and control concepts, i.e., the HMI, must be developed in a way that they are easily and safely operated by the user of an ADF. The HVI is about the harmonious interaction between the user and the vehicle in a broader sense, whereas the HMI is more specifically about the hardware and software interface between them. In order to streamline the various aspects related to HVI, this category is divided into five different topics. The first topic covers the general guidelines on how to design the HMI. This includes the acceptance of the ADF, as well as the usability and the user experience-related aspects. The Mode Awareness, Trust, and Misuse topic is primarily about the driver's awareness of the ADF's current driving mode. This also relates to the users' trust in the ADF and their potential for misuse. Driver monitoring is about assessing the user's state when operating an ADF, which is a topic closely related to the users' mental models and their workload. An important aspect of this is the impact of non-driving-related tasks (in the following referred to as secondary tasks) carried out while driving with a highly automated function. The Controllability and Customer Clinics topic refers to the question of an ADF's controllability from the user's perspective on the one hand and how to conduct a study with participants to test the controllability and other properties of the ADF on the other. Driver Training and Variability of Users is the final topic. It covers the area of user training required for an ADF. Furthermore, it also relates to the variability of users to be taken into account. Together, these topics, comprising 39 main questions, form a comprehensive overview of the overall category of HVI. All the main questions from this and all other categories are available in [3].
Guidelines for Human-Machine Interface
Guidelines for the ADF's HMI are prominently addressed as a topic in the CoP-AD. Following appropriate guidelines is key to producing a well-executed user experience and usability, which in turn will create a much higher level of underlying safety in the ADF [4]. On a generic level, this topic is about using HMI design guidelines to define, assess, and validate an HMI concept. They should be followed during the whole development process of the HMI for an ADF. There are various HMI guidelines available (e.g., [5,6]), and the guidelines used during the ADF development should be selected carefully to ensure they are suitable for SAE level 3 systems. Guidelines adapted to HMIs for conditionally automated vehicles were presented by Naujoks et al. [7] and validated in empirical studies [8,9]. The HMI should be standardized where possible following industry standards that are consistent with the user's mental models [10,11]. This will minimize the time required to familiarize oneself with the HMI, therefore improving the experience of first-time users. Still, guidelines may differ for certain demographics, as different groups of people may prefer different communication methods such as symbols or color coding. Table 4 shows an example question from the Guidelines for HMI topic. The question aims to determine whether unintentional activations and deactivations of the ADF are prevented or not. Unintentional deactivation of an ADF by the user is an event that needs to be avoided. The driver may be focusing on a secondary task and will not be ready to take over control of the driving task if necessary. The HMI concept should be designed so that it is not possible for the driver to inadvertently initiate a transfer of control. At the same time, it is important to prevent unintentional activations of the ADF. Unexpected longitudinal or lateral input from the ADF may have a detrimental effect on the user's trust in the ADF. Furthermore, the visual interface shall be designed to be easy to read and interpret [12]. This item focuses on the importance of having a clear strategy for the visual HMI. Guidelines and standards need to be followed to ensure that the visual feedback is easy and intuitive to understand. Icons can be designed to be interpreted quickly if standard symbols and colors are used where possible. Where icons cannot be used, text messages shall be applied. However, it is important that the text can be understood in short glances, so that the driver is not forced to remove the eyes from the road for extended periods of time [6,13,14]. Finally, it is important to cluster relevant HMI elements in similar locations so that the driver can intuitively understand where an HMI should appear [5,14,15].
The HMI shall be designed to portray the urgency of the message to be conveyed [11,12,16]. During the use of an ADF, the user may be subject to many types of HMI feedback with various levels of urgency. It is important that the driver understands which HMI elements are of high priority and are conveying urgent feedback to the driver [17]. Equally, it is important that the driver understands that other messages are provided primarily for informational purposes and therefore do not require immediate action. Assessing the user acceptance is also a key point. Customer clinics, heuristic expert assessments, and various other user trials can be carried out to gain both subjective and objective data on user acceptance.
Mode Awareness, Trust, and Misuse
This topic addresses the correct understanding of the role shared between the driver and the ADF, concerning the active mode, as well as the correct usage of and the trust in the ADF.
An example question is given in Table 5. This is about ensuring the drivers fully understand their responsibilities and the function's capabilities during each of the defined ADF modes. They may be informed by several means, such as in-product advertisements and written explanations in the owner's manual [18]. Drivers may get explicit information from the in-vehicle HMI, before, during, and after activation of the ADF itself. They may of course also learn by experience [19]. Additionally, a simple and intuitive HMI can improve the driver's situational awareness and help them to take the correct actions when necessary. A related sub-question asks whether a process is defined on how the user will be informed about any new potential functionality of the ADF based on software updates.
All possible automated driving modes shall be explicitly defined in terms of how the driver should acknowledge them. The goal of this item is to ensure that the possible ADF modes are clearly defined from a user's perspective. It is important that a user is aware of the possible automated driving modes of the ADF to avoid any misunderstanding.
It is key to know whether the HMI modalities to communicate the relevant active (automated) driving modes are described. This item focuses on how the active automated driving modes are communicated to both the driver and the other road users, in terms of modalities (visual, auditory, haptic, etc.).
All reasonably foreseeable mistakes and misuse cases of the ADF in relation to the HMI shall be described. The purpose of this question is to ensure that possible driver mistakes, failures and misuses have been addressed in the best possible way, in order to be able to define countermeasures for them [2,20].
Communicating the automated driving modes to the driver in an appropriate and clear way shall be investigated and confirmed. For an ADF, a clear communication of the mode is crucial. This question focuses on the HMI to communicate the ADF modes, the consideration of a permanent display of the modes, how to communicate the mode changes, and how well these HMI elements are recognized by both the driver and other road users. A test procedure to assess whether basic mode indicators are capable of informing the driver about relevant modes and transitions has been proposed by Naujoks et al. [21]. Additional information regarding this topic is provided by JAMA [22], Albers et al. [23], and Schömig et al. [24].
A multimodal HMI to improve driver alertness and minimize the time to get back in the loop should be investigated. However, it should also be ensured that the HMI is no more intrusive than necessary. Therefore, it is necessary to find a balance between the effectiveness of the HMI and the level of annoyance that it may cause the users [25]. Speech is another possibility to communicate a take-over request. The impact of the HMI on relevant driver indicators such as eyes-on-road time should be investigated [26].
Information shall be provided to the driver about an ADF-initiated minimum risk manoeuver [27]. A minimum risk manoeuver typically happens if the driver fails to appropriately take over the controls, or if the function does not have enough time to make a proper take-over request (for example, due to a sudden unexpected situation). This item aims to consider how to inform the driver in the event that the function has initiated the minimum risk manoeuver in order to provide the driver with the necessary information, such as what is going on, why, and what action the driver should take.
The communication to the driver, of the driver's responsibilities in each defined automated driving mode should be investigated and confirmed. It shall be considered how and to what extent the operational design domain information will be displayed to the driver. The driver awareness of automated driving modes shall be investigated as well.
Driver expectations regarding the ADF's features need to be considered. It is crucial to confirm whether user expectations are met. This is a broad subject that would need to be narrowed down to precise specifications, and this question is there to make sure that this process will be considered. For example, in terms of HVI, the balance between the amount of information and its conciseness or simplicity should be investigated.
The driver's trust in the ADF is an important aspect to consider [28]. It is necessary that the users trust the function, in order for them to feel comfortable using it. On the other hand, it is necessary to avoid over-trust, as this may lead to unintended misuse of the function [29]. Again, a good balance should be targeted in order to ensure the correct amount of trust. The appropriate usage of the ADF should be assessed and confirmed, encouraging the intended use and preventing misuse.
Long-term effects of the ADF on the users shall be investigated. Typically, the main risks of long-term effects are skill degradation and building over-trust in the function [30]. The impact of the HMI on driver workload and other aspects over long journeys shall be investigated as well.
Driver Monitoring
This topic addresses the correct application of driver monitoring, specifically the identification and classification of the driver's status and the recognition of the actions made inside the vehicle. Monitoring a driver's attention is a crucial topic, especially when discussing automated driving [31]. Since driving is a complex phenomenon, involving the performance of various tasks (including simultaneous quick and accurate decision making), fatigue, workload, and distraction drastically increase human response time, which may result in an inability to drive correctly or to respond properly to a take-over request [32]. Table 6 shows an example item for this topic. The question assesses whether all relevant secondary tasks are considered when defining the driver monitoring requirements. This item addresses which secondary tasks are allowed during automated driving. The idea is to consider what is currently available and what will become available in the future. In addition, one sub-question focuses on metrics that shall be taken into account when a driver monitoring function is present within the vehicle. Moreover, the possibility to add additional apps or secondary tasks to the HMI in the future shall be considered as well. A further important question is whether the HMI is connected with the driver monitoring function. It is essential to provide crucial information on the driver's state directly to the driver, as an impairment may compromise the safety of the situation. Thus, unsafe driver states such as drowsiness need to be communicated effectively [33].
Furthermore, it should be taken into account whether it is possible to mirror the user's devices on the HMI [34,35]. If it is legally allowed, then it is important to consider how to prompt the driver to take back control of the vehicle while their device is being mirrored. For example, this could be done by overlaying a take-over request on the user's device. This way, the driver can be taken back into the control loop in an effective manner. Device-pairing offers further benefits; for instance, the larger in-vehicle screens may be used as opposed to the relatively small smartphone screens. Due to the use of dedicated controls and displays, driver distraction is also minimized. The impact of typical secondary tasks on take-over time and quality should be identified as well. It is useful to measure the impact of secondary tasks on the take-over request.
After the start of production, data may be gathered to assess the types of secondary tasks, the amount of time users spend doing them, and their impact on driving behavior, traffic safety, etc. This is related to measuring the long-term effects of secondary tasks on driver behavior.
Controllability and Customer Clinics
SAE level 3 automated driving will still require the driver to take over the driving task in case of system failures and malfunctions. Thus, it has to be ensured that drivers are able to control transitions to manual or assisted driving and avoid safety-critical consequences. Driver-initiated transitions should also be considered from this perspective. This topic is one of the key elements in the existing Code of Practice for Advanced Driver Assistance Systems [2]. Table 7 shows an example question for this topic. It is about the suitability of testing environments for controllability. In the verification phase, controllability assessments should be carried out in suitable test environments, ranging from laboratory to test tracks, etc. When these controllability assessments are carried out on test tracks or on public roads, precautions regarding the safety of participants and other road users should be taken. During the definition phase, it shall be ensured that user needs regarding controllability are taken into account. For example, the design of the HMI should consider the transition from automated driving to lower levels of automation with respect to function failures and system limits as well as driver-initiated transitions. Relevant and applicable guidelines for the design of the HMI should be considered in the design phase in order to ensure that they are in line with generally accepted standards and best practices in view of the targeted user population [7,36,37].
Limitations of the human driver should be taken into account. Careful consideration of the driver's sensory and motor limitations (e.g., inability to move freely) needs to be applied. The concept selection should thus consider topics such as color-blindness, general vision, sensory-motor, and hearing impairments.
The development should also account for a clear and understandable description of the ADF and its limits. Most importantly, the driver must be informed about function limits that will trigger requests to intervene [38]. These should be described in the user manual and other available multimedia-based information, together with a description of the expected reaction. It also comprises the selection of a transition-of-control concept. Furthermore, it shall be tested whether the vehicle is controllable in the case of a malfunction or when overruling or switching off the function.
The behavior of the ADF should not lead to uncontrollable situations from the perspective of other road users. The design should also consider the limitations and perception of other traffic participants that are not equipped with an ADF. The automated vehicle's behavior shall be designed in a way that it is controllable for these traffic participants and does not exceed the motion ranges of drivers who are driving manually in non-emergency situations.
Even in the early design phase, a preliminary assessment of the controllability can be carried out, which is normally based on expert assessments. A suitable prototype should be used that allows for an assessment of function limits and failures, but also normal driver-initiated transitions [39,40]. The final controllability verification can be based on different evaluation methods such as expert assessments, controllability verification tests, or customer clinics [40].
A suitable post-production evaluation strategy should be implemented that assesses the impact of the ADF on possible negative behavioral adaptations such as skill degradation and misuse. This way, the ADF is adequately evaluated from a human factors perspective after the start of production.
Driver Training and Variability of Users
This topic covers the training required for ADF users and the variability of these users, which needs to be considered. The training aspect is about the issue of providing users with the appropriate knowledge and skills to operate an ADF. As there is a huge variability of users, different age groups, gender, cultural backgrounds, and different levels of previous experience need to be considered. Both topics are combined here, as they share various aspects. Table 8 shows an example question for this topic, asking if the information that the user needs to operate the ADF is available to create a training course. Creating the user training for the ADF requires a specification of the ADF's operation to serve as a baseline. Due to the complexity of ADFs generally, a user training course may be required or at least recommended. Ideally, this is unnecessary due to a well-executed intuitive system design. The training methods shall be defined in more detail to produce a course that could use one or many of the following mediums: a training course provided by the dealer, user manuals integrated within the vehicle, online material for home training, or the use of digital assistants. A reasonable combination of training methods shall be considered taking individual learning preferences into account [20,41,42]. There may be huge differences between user groups. The questions in the CoP-AD target the difference between countries and geographical regions. Infrastructural differences with regard to roads and traffic control functions as well as driver behavior in general have a huge impact on the design of ADFs, and so these differences need to be handled appropriately. An ADF designed for only a specific country or geographical region without taking into account the local infrastructure and the requirements of its user groups must be avoided. Another factor to be taken into account is elderly drivers. As physical abilities degrade with age, driving becomes more cumbersome. Therefore, during the definition of ADFs, the physical impairments of elderly drivers should be addressed. There is also a significant variability in users' physical dimensions and anthropometry. Size and strength differences between genders can play a role, and so the ADF shall be designed to be operated by a variety of different users, including those with non-age-related disabilities.
There shall also be a representative test sample for user studies. Depending on the exact user study to be conducted, this may range from age, gender, and socio-cultural background to test candidates with previous experience with ADFs or technology in general. The test participants in a sample should be selected accordingly.
A solid mix of customer education and information shall be made available to the users post start of production. Developers need to ensure that there is enough information available for the users of an ADF to properly operate it. There should be sufficient training material available inside the vehicle to provide users with the required knowledge to operate the ADF safely on the road. To reduce the likelihood of people over-estimating the possibilities offered by the ADF, the marketing shall support user information and training with realistic information regarding its abilities.
Conclusions
The introductory part gave an overview on the development process applied to finalize the draft of the CoP-AD. This comprises all the different main categories such as the ODD Vehicle Level, the ODD Traffic System and Behavioral Design, as well as Safeguarding Automation. The draft results of the CoP-AD presented here with a focus on the HVI category offer the first insight on how the interaction between the driver of the vehicle and the automated driving system shall be part of a standardized development process. Whereas the first topic focuses on available guidelines in general, the other topics concentrate more specifically on matters of interest for designing an appropriate interaction between the driver and the vehicle equipped with an automated driving system. Mode awareness, including the aspects of trust and misuse, is a cornerstone of making people aware of the automated system's abilities, improving trust and at the same time preventing misuse. Driver monitoring plays a major role when taking into account the state of the driver and its importance for the safe operation of the automated driving function. Controllability and customer clinics actually focus on two distinct but interrelated topics. Ensuring the controllability of a system is key, especially in the case of minimum risk manoeuvers. This shall be tested in user studies, which in turn serve as a primary method to test many of the guidelines and assumptions mentioned in this text. Driver training again emphasizes the importance of giving drivers the education they need, and in a medium that they can consume and learn from most effectively. In addition, the variability of users is taken into account, including the cultural and infrastructural differences between different cultures and geographical regions.
It must be emphasized that the proposed CoP-AD is based on current best practices, research, and applicable norms. Many of the published studies have been conducted using driving simulators or proving grounds; however, as automated vehicles have not been deployed, final proof that the proposed CoP-AD will be able to eliminate all possible design issues is not yet possible. The current publication is meant to stimulate the ongoing discussion in the technical and scientific community to further improve and converge current research and evaluation practice. It should also be noted that the current paper lays out a draft version of the CoP-AD that will be further refined based on available feedback. This includes not only the HVI but also the other categories mentioned in this paper. The final CoP-AD needs to be available in an easy-to-use way, preferably as some kind of software application, either Excel-based or standalone. During the development process of an ADF, the questions presented here as examples, and those being part of the final document, will guide the engineers from the concept phase up to the time post start of production.
The scope of this document is currently on highway driving and parking, primarily on SAE Level 3 and to a certain extent on SAE Level 4, for the European regions. Further work is required to see if it may be applied to other regions outside of the EU as well. Of particular interest are the USA and China. The CoP-AD shall also be applicable to automated driving systems that operate within cities or in rural areas; otherwise, future iterations will have to be adapted to cover these areas as well. This is also true for applications regarding robot taxis, ranging from geo-fenced SAE Level 4 up to SAE Level 5 systems. Until then, the CoP-AD will serve as an important guideline for the development of automated driving functions.
|
2020-06-04T09:05:44.925Z
|
2020-05-27T00:00:00.000
|
{
"year": 2020,
"sha1": "a04e1b3f71c823fe06fff3f41978c9c7f35c8d57",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2078-2489/11/6/284/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7250c4a916a6b5315d45576f417e41b48e37a21b",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
11275439
|
pes2o/s2orc
|
v3-fos-license
|
Stagnation-point Flow of the Walters' B' Fluid with Slip
The steady two-dimensional stagnation point flow of a non-Newtonian Walters' B' fluid with slip is studied. The fluid impinges on the wall either orthogonally or obliquely. A finite difference technique is employed to obtain solutions.
1. Introduction. Some rheologically complex fluids such as polymer solutions, blood, paints, and certain oils cannot be adequately described by the Navier-Stokes theory. For this reason, several theories of non-Newtonian fluids were developed. One important and useful model which has been used to describe the non-Newtonian behavior exhibited by certain fluids is the Walters' B' fluid [16]. The equations of motion of non-Newtonian fluids are highly nonlinear and one order higher than the Navier-Stokes equations. Due to the complexity of these equations, finding accurate solutions is not easy.
One class of flows which has received considerable attention is stagnation-point flow. In a stagnation-point flow of a Newtonian fluid, a rigid wall occupies the entire x-axis, the fluid domain is y > 0, and the flow impinges on the wall either orthogonally [6, 7] or obliquely [4, 14, 15]. In a study of a Newtonian fluid impinging on a flat rigid wall obliquely, Dorrepaal [4] found that the slope of the dividing streamline at the wall divided by its slope at infinity is independent of the angle of incidence. Beard and Walters [2] used boundary-layer equations to study two-dimensional flow near a stagnation point of a viscoelastic fluid. Rajagopal et al. [11] have studied the Falkner-Skan flows of an incompressible second grade fluid. Dorrepaal et al. [5] investigated the behavior of a viscoelastic fluid impinging on a flat rigid wall at an arbitrary angle of incidence. Labropulu et al. [9] studied the oblique flow of a second grade fluid impinging on a porous wall with suction or blowing.
In a recent paper, Wang [17] studied stagnation-point flows with slip. This problem appears in some applications where a thin film of light oil is attached to the plate or when the plate is coated with special coatings such as a thick monolayer of hydrophobic octadecyltrichlorosilane [3]. Also, wall slip can occur if the working fluid contains concentrated suspensions [13].
When the molecular mean free path length of the fluid is comparable to the system's characteristic length, then rarefaction effects must be considered. The Knudsen number Kn, defined as the ratio of the molecular mean free path to the characteristic length of the system, is the parameter used to classify fluids that deviate from continuum behavior. If Kn > 10, it is free molecular flow; if 0.1 < Kn < 10, it is transition flow; if 0.01 < Kn < 0.1, it is slip flow; and if Kn < 0.01, it is viscous flow (see Wang [17], Kogan [8]). Flows in the slip-flow region have been modeled using the Navier-Stokes equations with the traditional no-slip condition replaced by the slip condition u_t = A_p ∂u_t/∂n, (1.1) where u_t is the tangential velocity component, n is the direction normal to the plate, and A_p is a coefficient close to 2(mean free path)/√π (see Sharipov and Seleznev [12]). This condition was first proposed by Navier [10] nearly two hundred years ago.
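The regime thresholds above translate directly into a simple classification rule. The following minimal sketch encodes them; the helper name and the example values are illustrative, not from the paper.

```python
# Knudsen-number flow-regime classification using the thresholds quoted
# above (Wang [17], Kogan [8]); the function name and sample values are ours.
def flow_regime(kn: float) -> str:
    if kn > 10.0:
        return "free molecular flow"
    if kn > 0.1:
        return "transition flow"
    if kn > 0.01:
        return "slip flow"
    return "viscous (continuum) flow"

for kn in (20.0, 1.0, 0.05, 0.001):
    print(f"Kn = {kn:g}: {flow_regime(kn)}")
```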
In the present study, we follow Wang [17] and investigate the behavior of the Walters' B' fluid impinging on a rigid wall with slip. The fluid impinges on the wall either orthogonally or obliquely. In particular, we study the effects of the slip condition and the effects of viscoelasticity of the fluid.
2. Flow equations.
The two-dimensional flow of a viscous incompressible non-Newtonian Walters' B' fluid, neglecting thermal effects and body forces, is governed by the equations of motion given by Beard and Walters [2], in which p is the pressure, ν = µ/ρ is the kinematic viscosity, and α is the viscoelasticity of the fluid. The star on a variable indicates its dimensional form. We nondimensionalize these equations using a scaling in which β has the units of inverse time. The flow equations in nondimensional form are (2.4) and (2.5). Introducing the streamfunction ψ through u = ∂ψ/∂y and v = −∂ψ/∂x (2.6) reduces them to a single equation (2.7) for ψ. Having obtained a solution of (2.7), the velocity components are given by (2.6) and the pressure can be found by integrating (2.4) and (2.5).
The shear stress component τ12 is given by (2.8).

3. Orthogonal flow. We assume that the infinite plate is at y = 0 and that the fluid occupies the entire upper half-plane y > 0. Furthermore, we assume that the streamfunction far from the wall is given by ψ = xy (see Hiemenz [7]). The nondimensional boundary conditions are then given by the wall and far-field conditions together with the slip condition (1.1), which in nondimensional form involves the parameter γ = A_p√(β/ν) representing the ratio of slip to viscous effects.
Writing ψ = xF(y), where the prime denotes differentiation with respect to y, substitution into (2.7) yields equation (3.4); integration of (3.4) once with respect to y and use of the condition at infinity yields (3.6). The above system with γ = 0 has been solved by many authors for various values of We (see Beard and Walters [2], Ariel [1]). When We = 0, the system has been solved numerically by Wang [17] for various values of γ. Using the shooting method with the finite difference technique described by Ariel [1], we find that F′′(0) = 1.23259 when We = 0 and γ = 0. Numerical values of F′′(0) for different values of We and γ are shown in Table 3.1. Figure 3.1 shows the profiles of F′ for γ = 0 and various values of We. Figure 3.2 depicts the profiles of F′ for γ = 1 and various values of We. Figure 3.3 shows the profiles of F for We = 0.2 and various values of γ, and Figure 3.4 depicts the corresponding profiles of F′. We observe that as the elasticity of the fluid increases, the velocity near the wall increases, and as the slip parameter γ increases the velocity near the wall increases as well.
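In the Newtonian limit (We = 0) the system reduces to the classical Hiemenz problem F′′′ + FF′′ − (F′)² + 1 = 0 with F(0) = 0, F′(0) = γF′′(0), and F′(∞) = 1. The sketch below solves this limit with SciPy's collocation solver as an illustrative cross-check; it is not the authors' finite-difference shooting code, and the domain truncation y_inf is our assumption. For γ = 0 it reproduces F′′(0) ≈ 1.23259.

```python
# Illustrative sketch: orthogonal stagnation-point flow with slip in the
# Newtonian limit (We = 0), i.e. the Hiemenz problem with the Navier slip
# condition F'(0) = gamma * F''(0). Not the authors' Walters' B' solver.
import numpy as np
from scipy.integrate import solve_bvp

gamma = 0.0   # slip parameter
y_inf = 10.0  # truncation of the semi-infinite domain (our assumption)

def rhs(y, Y):
    # Y = [F, F', F'']; Hiemenz ODE: F''' = (F')^2 - F*F'' - 1
    F, Fp, Fpp = Y
    return np.vstack([Fp, Fpp, Fp**2 - F * Fpp - 1.0])

def bc(Y0, Yinf):
    # F(0) = 0, F'(0) = gamma*F''(0), F'(inf) = 1
    return np.array([Y0[0], Y0[1] - gamma * Y0[2], Yinf[1] - 1.0])

y = np.linspace(0.0, y_inf, 200)
Y_guess = np.vstack([y, np.ones_like(y), np.zeros_like(y)])
sol = solve_bvp(rhs, bc, y, Y_guess, tol=1e-8)
print(f"F''(0) = {sol.sol(0.0)[2]:.5f}")  # ~1.23259 for gamma = 0
```

Varying gamma in this sketch shows the slip-induced increase of the near-wall velocity noted above.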
For large y, we find that F(y) ∼ y + C, where the numerical values of C are shown in Table 3.2 for various values of We and γ.
The numerical results are in good agreement with those of Wang [17] if We = 0 and with those of Ariel [1] if γ = 0. The Maclaurin series expansion for F(y) can be written in terms of s = F′′(0), whose values are given in Table 3.1.
4. Oblique flow.
Following Stuart [14], we assume that the streamfunction far from the wall is given by ψ(x, y) ∼ ky² + xy, (4.1) where k is a constant. The dividing streamline, which runs from the wall to infinity, is defined by ψ(x, y) = 0, and its slope at infinity is −1/k. Equation (4.1) suggests that ψ(x, y) has the form ψ(x, y) = xF(y) + G(y), with boundary conditions for F(y) and G(y) following from the slip and far-field conditions.
Substituting this form into (2.7), we obtain an equation which contains terms of O(x) and O(1). The terms of O(x) yield an ordinary differential equation for F(y) and the terms of O(1) yield an equation for G(y).
After one integration, the boundary-value problem for F(y) is the same as in the previous section, where numerical solutions were obtained for various values of We and γ. The boundary-value problem for G(y) is given by (4.5); integrating (4.5) once with respect to y and using the conditions at infinity yields an equation involving the constant C, whose values are given in Table 3.2.
Letting G′(y) = 2kH(y), we obtain equation (4.8) for H(y), with boundary conditions (4.9). Equation (4.8) with boundary conditions (4.9) is solved numerically using the same numerical technique as in the previous section. The numerical values of λ = H′(0) are given in Table 4.1 for various values of We and γ; these values are in good agreement with those obtained by Wang [17] for We = 0.
Figure 4.1 shows the profiles of H for γ = 1 and various values of We. Figure 4.2 depicts the profiles of H for We = 0.2 and various values of γ. It can be observed that as the slip parameter γ increases, the values of H near the wall decrease. The Maclaurin series for G(y) begins G(y) = 2kγλy + kλy² + ···, (4.10), with a cubic term whose coefficient involves k, s, λ, C, We, and γ.
Table 3.1. Numerical values of F′′(0) for various values of We and γ.
Table 3.2. Numerical values of C for various values of We and γ.
Table 4.1. Numerical values of H′(0) for various values of We and γ.
|
2015-03-20T15:25:33.000Z
|
2004-01-01T00:00:00.000
|
{
"year": 2004,
"sha1": "23b4026e45ca88b89af4cb32e4d70d90517e894d",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/ijmms/2004/126967.pdf",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "5488b2e03479712bdd1db1f99b7f96af777acd95",
"s2fieldsofstudy": [
"Mathematics",
"Engineering"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
}
|
25265199
|
pes2o/s2orc
|
v3-fos-license
|
Integrating Human and Ecosystem Health Through Ecosystem Services Frameworks
The pace and scale of environmental change is undermining the conditions for human health. Yet the environment and human health remain poorly integrated within research, policy and practice. The ecosystem services (ES) approach provides a way of promoting integration via the frameworks used to represent relationships between environment and society in simple visual forms. To assess this potential, we undertook a scoping review of ES frameworks and assessed how each represented seven key dimensions, including ecosystem and human health. Of the 84 ES frameworks identified, the majority did not include human health (62%) or include feedback mechanisms between ecosystems and human health (75%). While ecosystem drivers of human health are included in some ES frameworks, more comprehensive frameworks are required to drive forward research and policy on environmental change and human health.
Examples of the ES frameworks reviewed, with the purpose and depiction of each:

- The relationship between natural processes and human needs. Purpose: To highlight the importance of nature conservation for human well-being. Depiction: Links natural processes and components with human needs and activities through a two-way functional process, which emphasises their respective influence on each other.
- Types of natural and human-made capital stocks, good and service flows, and their interdependence. Purpose: To encourage maintenance of a total natural capital stock at or above the current level, which is needed to achieve 'strong sustainability'. Depiction: Illustrates the interdependence between natural capital (non-renewable and renewable), human capital and manufactured capital, with economic and ecosystem goods and service flows.
- Purpose: To support the analysis and operationalization of the sustainable livelihoods approach. Depiction: Consists of several key features for livelihoods analysis: context, livelihood resources, institutional processes, livelihood strategies, and livelihood outcomes.
- The economy as a part of its life-supporting environment. Purpose: To encourage maintenance of biodiversity at a level that ensures ecosystem resilience, in order to provide for human consumption and existence; also to stress the importance of an interdisciplinary approach to biodiversity. Depiction: The economic system sits within the ecological system, and receives ecosystem services, natural resources and energy from that ecological system. The economic system expels degraded energy, resources and pollution.
- Framework for integrated assessment and valuation of ecosystem functions, goods and services. Purpose: To provide a standardised ecosystem services framework in order to improve assessment of goods and services. Depiction: Illustrates the relationship between ecosystem structure and process, ecosystem functions, ecosystem goods and services, different values to society, and decision-making processes.
- Purpose: To help show how sustainability standards can be derived, and to enable policy makers to determine and reduce the 'sustainability gap' and thus move towards environmental sustainability. Depiction: Illustrates the relationship between influences (e.g. social, economic etc.) and natural capital (both its elements and functions), with 'functions for people', which in turn affects human welfare.
- Depiction: Consists of an ecological and a social subsystem, which affect 'human actors', who in turn affect the subsystems through four types of institution (resource-harvest, hazard-reduction, resource-conservation and ecological-externality producing).
- Purpose: To help stakeholders to implement effective on-the-ground management that will lead to ecological resilience and safeguarding of ES. Depiction: Consists of three project phases (assessment, planning and management) in relation to spatial scale, status of the socio-ecological system and level of stakeholder collaboration. Three types of assessment are depicted: social, biophysical and valuation.
- Purpose: To support a diagnostic approach for evaluating stability and change in ES using principles of computational thinking and services-orientated architecture. Depiction: Links ecosystem services with consumer and provider behaviour, through four pathways: service viability, service execution context, service interactions and service outcomes.
- Multi-scale relationship between governance and ecosystem services. Purpose: To provide guidance for matching the scales of ecosystem governance to those of ecosystem dynamics, and to aid integration of social and ecological disciplines. Depiction: Depicts multi-scale linkages (local, regional, global) between governance and ecosystems, with emphasis on local feedbacks between local ecosystems and services, local governance, and community and individual wellbeing.
- Possible spatial relationships between service production areas and service benefit areas. Purpose: Could be used to inform where management interventions should be concentrated. Depiction: Depicts four scenarios of service production/benefit areas, including service provision and benefit occurring in the same area, and services providing benefits omni-directionally to the surrounding landscape.
- Purpose: To demonstrate how natural capital fits within the ecosystem services framework and how it can be evaluated through integrated valuation and process-based models; the overall context of the research relates to soil natural capital. Depiction: Depicts the relationship between ecosystem natural capital, ecosystem services and valuation, and regional decision-making, with particular reference to valuation/modelling tools. Three spatial scales (regional, national and international) are depicted as interacting, in terms of information transfer.
- Purpose: To categorise ES (through a participatory process) to provide structure for the quantification of management priorities.
- Purpose: To build transdisciplinary knowledge of social-ecological systems and contribute to the development and testing of theory within these disciplines. Depiction: Consists of a social template (human behaviour and human outcomes) and a biophysical template (community structure and ecosystem function), linked together through ecosystem services and by pulse and press events ('press' referring to extensive, pervasive, and subtle change, and 'pulse' referring to sudden events). The systems are influenced by external drivers such as climate and globalisation.
- Purpose: To aid the assessment of the effects of environmental change on ecosystem services provision. Depiction: Based on a Driver-Pressure-State-Impact-Response (DPSIR) framework, and thus comprised of each of these components. The 'state' component includes the supporting system as well as ecosystem services beneficiaries and providers. A stepwise implementation strategy for the conceptual framework is also shown.
- Efficiency framework for an ecosystem services approach to sustainability. Purpose: To provide a conceptual basis for assessing the components of social-ecological systems and the links between them, based around magnitudes and efficiencies of conversion between states. Depiction: Consists of three broad sub-systems: ecosystem functions, ecosystem services, and social development and well-being. These systems interact with each other (e.g. through impacts, consumption, and trade-offs), representing a transfer of state (e.g. from ecosystem functions to ecosystem services). Within each subsystem feedback loops and mechanisms (e.g. governance, incentives etc.) are depicted.
- The DPSIR framework and the ecosystem services and societal benefits set within an overall framework of The Ecosystem Approach. Purpose: To integrate the DPSIR framework with ecosystem services and societal benefits to help support decision-making in environmental management (with particular reference to the marine environment). Depiction: Consists of two frameworks sitting within an overall ecological approach: (i) the DPSIR (Drivers-Pressures-State Change-Impact-Response) approach, which can 'protect the natural system & benefits for society', and (ii) ES and social benefits, which consists of biota-ecological structure, physico-chemical, and biota-ecological functioning, which interact and deliver benefits for society.
- Purpose: To support the assessment of ecological management goals and interlinkages with poverty reduction and sustainable livelihoods, in the context of wetland management. Depiction: Consists of several components, central being the ecosystem settings (comprised of 'capitals' and 'ecological character'), which sit within institutions and freedoms. This is linked to livelihood outcomes, via livelihood strategies. The ecosystem settings are also affected by the vulnerability context (drivers of change).
- Purpose: To aid the transition between conceptual frameworks/theory to practical integration of ES into decision-making, through translating a conceptual framework into a practical toolkit. Depiction: Consists of four key components: the resource, the resource users, public infrastructure, and public infrastructure providers. The components are linked by four steps: Step 1, ES supply and demand assessment; Step 2, future estuary roles identification; Step 3, enterprise opportunity identification; Step 4, enterprise risk assessment. Estuarine example used but easily generalised. Adapted from a social
- Framework for characterizing ES that might be affected by management or planning. Purpose: To facilitate the incorporation of diverse values associated with ecological and socioecological change, in order to aid decision-making, management and planning. Depiction: Five steps to aid ES management and planning are illustrated: 1, obtain consent; 2, determine the decision context; 3, determine the socio-ecological context; 4, determine the ES, benefits and values; and 5, influence diagrams and scenarios.
- Firm-level ecosystem service valuation framework. Purpose: To incentivise firms to incorporate environmental considerations into project development or management decisions, thus encouraging more sustainable management and development. Depiction: Depicts the relationship between four assessment/valuation stages to be undertaken by a firm: (i) determine lifecycle inventory, (ii) assess ecosystem functions, (iii) perform functional substitutability and (iv) determine economic value. This is followed by decision analysis. Adapted from Comello & Lepech.
- Purpose: The EPPS framework aims to provide an improved approach for ES assessment, and for linking ecosystem services and potentials to management practice; this extended version adds additional aspects such as benefits/values, beneficiaries and management, as well as spatial and temporal aspects. Depiction: Consists of five inter-related pillars (properties, potentials, services, benefits/values and users/beneficiaries). These sit within three overlapping levels: physical, intermediate and socio-economic. The use and management of ES impact upon the five pillars. The temporal aspect of assessment (time scale, driving forces, changes and scenarios) is illustrated through a number of processes, and spatial aspects (spatial scales, dimensions, patterns) are also depicted. The type of evidence/assessment (factual/valuation) is also shown.
- Purpose: To provide guidance for a participatory/deliberative approach to ecosystem services valuation through involvement of different stakeholder groups. Depiction: Comprised of three main stages within the overall 'policy formation and assessment process': 1) set the scene, 2) deepen understanding and 3) articulate values. This leads to decision-implementation. Set within a marine/coastal context, but the framework is not itself habitat specific.
- The direct and indirect (cultural) pathways from biodiversity to human health. Purpose: To illustrate the link between biodiversity change and human cultural values, well-being, and health, which may be important for biodiversity conservation and public health. Depiction: Illustrates the direct effects of biodiversity on human health (e.g. the regulation of the emergence and transmission of disease and pollution control) and also the indirect effects of biodiversity on human health via cultural pathways: biodiversity loss affects the provision of cultural goods, which reduces their value and, consequently, negatively impacts upon human well-being and health.
- A multi-scale conceptual framework on nature, the productive base of societies and human well-being. Purpose: To address the mis-matches and the multiple spatial scales of ES provision, to improve the understanding and assessment of key inter-linkages between nature and human well-being. Depiction: Consists of an interlinking social and ecological system at multiple scales. Both systems feed into the productive base and are affected by institutions and governance. The productive base contributes to human well-being, via human, productive and natural capital, the latter via ES provision. Human well-being also affects the productive base and nature's systems.
- Purpose: To highlight the difference between the full potential of ecosystems to provide final services and the current use of it, and thus provide information for more sustainable land management.
- Framework to analyse and quantify ecosystem service flows. Purpose: To aid the analysis of the spatial connections between ES provisioning and benefiting areas, which is often lacking in ES assessments. Depiction: Consists of three overlapping circles. In the centre is P, representing ES provisioning areas. This sits within a larger circle, F, the flow area where ES can be potentially delivered from P. Two smaller ES 'benefiting' areas are also shown, which are spatial units in which ES are needed or readily used/consumed.
|
2017-08-02T21:36:59.323Z
|
2015-09-24T00:00:00.000
|
{
"year": 2015,
"sha1": "84f0ce3be279676963e7b7f1ba237073121b4529",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10393-015-1041-4.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "3fb794ccb1b9c85e8db7e10f973aec79d4cf0d9f",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Business",
"Medicine"
]
}
|
29639229
|
pes2o/s2orc
|
v3-fos-license
|
Frontometaphyseal dysplasia
Frontometaphyseal dysplasia is a disorder involving abnormalities in skeletal development and other health problems. It is a member of a group of related conditions called otopalatodigital spectrum disorders, which also includes otopalatodigital syndrome type 1, otopalatodigital syndrome type 2, and Melnick-Needles syndrome. In general, these disorders involve hearing loss caused by malformations in the tiny bones in the ears (ossicles), problems in the development of the roof of the mouth (palate), and skeletal abnormalities involving the fingers and/or toes (digits).
Frontometaphyseal dysplasia is distinguished from the other otopalatodigital spectrum disorders by the presence of joint deformities called contractures that restrict the movement of certain joints. People with frontometaphyseal dysplasia may also have bowed limbs, an abnormal curvature of the spine (scoliosis), and abnormalities of the fingers and hands.
Characteristic facial features may include prominent brow ridges; wide-set and downward-slanting eyes; a very small lower jaw and chin (micrognathia); and small, missing or misaligned teeth. Some affected individuals have hearing loss.
In addition to skeletal abnormalities, individuals with frontometaphyseal dysplasia may have obstruction of the ducts between the kidneys and bladder (ureters), heart defects, or constrictions in the passages leading from the windpipe to the lungs (the bronchi) that can cause problems with breathing.
Males with frontometaphyseal dysplasia generally have more severe signs and symptoms of the disorder than do females, who may show only the characteristic facial features.
Frequency
Frontometaphyseal dysplasia is a rare disorder; only a few dozen cases have been reported worldwide.
Causes
Mutations in the FLNA gene cause frontometaphyseal dysplasia.
The FLNA gene provides instructions for producing the protein filamin A, which helps build the network of protein filaments (cytoskeleton) that gives structure to cells and allows them to change shape and move. Filamin A binds to another protein called actin, and helps the actin to form the branching network of filaments that make up the cytoskeleton. Filamin A also links actin to many other proteins to perform various functions within the cell.
A small number of mutations in the FLNA gene have been identified in people with frontometaphyseal dysplasia. These mutations are described as "gain-of-function" because they appear to enhance the activity of the filamin A protein or give the protein a new, atypical function. Researchers believe that the mutations may change the way the filamin A protein helps regulate processes involved in skeletal development, but it is not known how changes in the protein relate to the specific signs and symptoms of frontometaphyseal dysplasia.
Inheritance Pattern
This condition is inherited in an X-linked dominant pattern. The gene associated with this condition is located on the X chromosome, which is one of the two sex chromosomes. In females (who have two X chromosomes), a mutation in one of the two copies of the gene in each cell is sufficient to cause the disorder. In males (who have only one X chromosome), a mutation in the only copy of the gene in each cell causes the disorder. In most cases, males experience more severe symptoms of the disorder than females. A characteristic of X-linked inheritance is that fathers cannot pass X-linked traits to their sons.
|
2019-08-17T04:46:31.449Z
|
2020-02-10T00:00:00.000
|
{
"year": 2020,
"sha1": "2359553a8ea709ed463a57e45022a08a31d2e5cf",
"oa_license": "CCBY",
"oa_url": "https://www.qeios.com/read/BXY92E/pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "a73a07b55ea317ce8b275c7b09672aaa88ff95cb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
253112460
|
pes2o/s2orc
|
v3-fos-license
|
A multi-network comparative analysis of whole-transcriptome and translatome reveals the effect of high-fat diet on APP/PS1 mice and the intervention with Chinese medicine
Different studies on the effects of high-fat diet (HFD) on Alzheimer’s disease (AD) pathology have reported conflicting findings. Our previous studies showed HFD could moderate neuroinflammation and had no significant effect on amyloid-β levels or contextual memory on AD mice. To gain more insights into the involvement of HFD, we performed the whole-transcriptome sequencing and ribosome footprints profiling. Combined with competitive endogenous RNA analysis, the transcriptional regulation mechanism of HFD on AD mice was systematically revealed from RNA level. Mmu-miR-450b-3p and mmu-miR-6540-3p might be involved in regulating the expression of Th and Ddc expression. MiR-551b-5p regulated the expression of a variety of genes including Slc18a2 and Igfbp3. The upregulation of Pcsk9 expression in HFD intervention on AD mice might be closely related to the increase of cholesterol in brain tissues, while Huanglian Jiedu Decoction significantly downregulated the expression of Pcsk9. Our data showed the close connection between the alterations of transcriptome and translatome under the effect of HFD, which emphasized the roles of translational and transcriptional regulation were relatively independent. The profiled molecular responses in current study might be valuable resources for advanced understanding of the mechanisms underlying the effect of HFD on AD.
Introduction
Alzheimer's disease (AD) is a progressive neurodegenerative disease related to aging, characterized by the pathological hallmarks of extracellular accumulation of amyloid-β (Aβ) plaques and intracellular accumulation of neurofibrillary tangles. AD is caused by the complex interaction of multiple mechanisms, and its etiology is still unclear. AD is generally considered to be related to genetic and environmental factors (1). Diet and nutrition display potential for non-pharmacological AD prevention. However, different studies on the effect of high-fat diet (HFD) on AD pathology in AD models have reported conflicting conclusions. For instance, HFD feeding induced Aβ accumulation and cognitive decline in APP/PSEN1 mice, and systemic inflammation and obesity could be reversed by a low-fat diet (2). Aβ and HFD had a synergic effect, leading to the impairment of endoplasmic reticulum and mitochondrial functions, altered glial reactivity status, and inhibition of insulin receptor signaling. These metabolic alterations would favor neuronal malfunction and eventually neuronal death by apoptosis, hence causing cognitive impairment (3). However, other studies found that HFD might promote better cognitive function by improving blood-brain barrier function and attenuating brain atrophy in AD, although it did not seem to affect Aβ levels (4)(5)(6). Therefore, the influence of HFD on the progression of AD remains controversial. A clear understanding of the role of HFD in AD pathology would help improve quality of life and relieve the pressure that the aging population places on the overall resources of society.
Huanglian Jiedu Decoction (HLJDD) is composed of Rhizoma coptidis, Radix scutellariae, Cortex phellodendri, and Fructus gardenia at a ratio of 3:2:2:3. HLJDD is a classic prescription for clearing away heat and toxic materials in past dynasties. Alkaloids, flavonoids, and iridoid glycosides are the main active ingredients in the prescription (7). Modern research has shown that HLJDD has many pharmacological effects, such as anti-inflammatory, antibacterial, antioxidant, lipid-lowering and hypoglycemic, antitumor, and neuroprotective activities (8). Literature research and previous experiments of our research team showed that HLJDD could reduce the central accumulation of Aβ and Tau in APP/PS1 mice, improve cognitive ability, and ameliorate the lipid and inflammatory environment in the center and periphery (9). Furthermore, HLJDD could regulate the metabolism of central neurotransmitters, amino acids, and peripheral bile acids, and relieve AD symptoms in combination with intestinal flora (10). In this study, we continued to explore the curative effect and mechanism of HLJDD on HFD-fed AD model mice.
Competing endogenous RNAs (ceRNAs) are RNAs in the complex network of transcriptional regulation in organisms, including protein-coding mRNAs, long non-coding RNAs (lncRNAs), pseudogenes, and circular RNAs (circRNAs). These RNAs harbor microRNA (miRNA) response elements (MREs); by competing to bind common miRNAs through shared MREs, they interact and regulate the expression of target gene transcripts. Thus, through miRNAs, these RNAs can interact with each other to form complex miRNA-mediated ceRNA networks. The interaction relationships indicate the possible functions of the lncRNAs and circRNAs. Significant changes in lncRNAs have also been observed in AD models, with studies reporting the upregulation of MRAK088596, MRAK081790, and MAPK10 and downregulation of BC092582, MRAK050857, and S100A8 in AD rats (16). CircRNAs have been shown to play an important role in the development of AD by affecting neurogenesis and injury, Aβ deposition, neuroinflammation, autophagy, and synaptic function through miRNA sponging. Large numbers of differentially expressed circRNAs were present in the brains of AD patients (17). The association of various human miRNAs with disease has been experimentally validated. A large set of miRNA-mRNA associations found in AD patients (18) play important roles in the regulation of Aβ precursor protein expression, lytic enzyme activity, and APP pathway-related signaling molecules. They also regulate tau protein expression and the function of tau phosphorylation-related kinases and phosphatases. One study showed that a decrease in miR-29a/b could contribute to increased BACE1 and Aβ levels in sporadic AD (19). MiRNAs also affect learning and memory processes, regulating L-LTP, excitatory glutamatergic systems, and other synaptic transport (20).
Current studies of gene expression mainly address the transcriptional level, largely ignoring translational regulation. However, translational regulation accounts for more than half of all regulation in the transfer of biological genetic information and is one of the most important forms of regulation in cells. Ribosome profiling (Ribo-seq), in which next-generation sequencing is used to identify ribosome-protected mRNA fragments, thereby revealing the positions of the full set of ribosomes engaged in translation, has emerged as a transformative technique for enabling global analyses of in vivo translation and coupled translational events (21). Ribo-seq has been widely used in different species (22)(23)(24)(25). Researchers analyzed gene expression in the cerebral cortex of two AD model mouse strains, CVN (APPSwDI/NOS2−/−) and Tg2576 (APPSw), by tandem RNA-seq and Ribo-seq; the AD model mice had similar levels of transcriptome regulation, but differences in translatome regulation (26).
Previously, we detected that long-term HFD intervention altered the levels of cholesterol and polyunsaturated fatty acids in the brain tissue of APP/PS1 mice and influenced the secretion of peripheral bile acids (10). Translational regulation is considered to play a vital role in gene expression, but whether HFD functions through regulation at the translational level is still unclear. The mechanism linking HFD to the regulation of the transcriptome and translatome in APP/PS1 mice has not yet been systematically elucidated. To analyze the overall effects of HFD on AD mice, whole-transcriptome sequencing (mRNA-seq, lncRNA-seq, circRNA-seq, and miRNA-seq) and Ribo-seq were used. In addition, the associations between transcriptional and translational levels corresponding to this phenotype further screened out some known target genes and new functional genes, followed by functional interaction prediction analysis. In summary, our analysis could reveal distinct roles of translational and transcriptional regulation in HFD intervention on AD mice. This study aimed to provide a new direction for the treatment of AD through the joint analysis of transcriptome and translatome.
Materials and methods
Animals and treatment
APP/PS1 mice were randomly allocated into 3 groups: one was fed a normal chow diet (the AD group), one was fed a HFD diet (the AD_HFD group), and another was fed the HFD diet plus the powder of HLJDD (the H_H group). The HLJDD powder was prepared in our laboratory as previously described (7). Our research followed a preventive protocol, and the gavage dose was 344 mg/kg/d (HLJDD) for 3 months. Animal weights were recorded every week.
Morris water maze test
The Morris water maze (MWM) test was performed to assess spatial memory as previously described, with slight modification (27). Mice participated in a navigation test for four consecutive days. Four sequential training trials began by placing the animals facing the wall of the pool, with the drop position changed for each trial. If the mouse found the platform before the 90 s cut-off, it was allowed to stay on the platform for 10 s and was then returned to its home cage. Otherwise, we placed the mouse on the platform and allowed it to stay there for 20 s. Each mouse was trained from different directions, and the training was repeated for all mice over the following 4 days. In the probe trial, we removed the platform from the pool, and the test time was 60 s. Escape latencies, time spent or distance traveled in the target quadrant, and platform-crossing times were recorded and analyzed using the analysis management system (Beijing Zhongshi Kechuang Co., Ltd.).
Brain sample collection
After the MWM test, all mice rested for 4 days under normal conditions. After anesthesia with 10% chloral hydrate, serum was collected from the heart, followed by removal of brain tissue on a sterile table, rinsing with pre-cooled RNase-free saline at 4 °C, and blotting dry. The samples were then placed in labeled 1.5 mL RNase-free EP tubes, rapidly frozen in liquid nitrogen for 30 min, and stored at −80 °C until use. Whole brains were ground in liquid nitrogen.
Western blot assay
For Western blot (WB) analysis, brain tissues were lysed in pre-cooled RIPA buffer with the protease inhibitor PMSF (Amresco), and protein concentrations were determined using a BCA protein assay kit. Protein samples were separated on 12% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) gels and transferred onto NC membranes. The membranes were blocked in 5% non-fat milk for 30 min at room temperature and incubated with primary antibodies overnight at 4 °C. Membranes were then washed and incubated with HRP-conjugated goat anti-rabbit and HRP-conjugated goat anti-mouse (1:10,000) secondary antibodies for 40 min at room temperature, followed by development using ECL detection. The obtained bands were scanned and analyzed using ImageJ software, and band density was assessed using TotalLab Quant V11.5 (Newcastle upon Tyne, United Kingdom).
RT-PCR
Total RNA was extracted from brain tissue using TRIzol reagent (ELK Biotechnology, China) according to the manufacturer's instructions. RNA concentrations were equalized and converted to cDNA using the EntiLink TM 1st Strand cDNA Synthesis Kit (ELK Biotechnology, China). Gene expression was measured using a StepOne TM Real-Time PCR system. The sequences of primers used in these experiments were listed in the Supplementary material.
RNA-seq and Ribo-seq
The experimental procedure and data analysis were listed in the Supplementary material.
Statistical analysis
The results are expressed as the mean ± standard error of the mean (SEM). The significance of differences among groups was assessed by Student's t-test for two groups and one-way ANOVA for more than two groups, followed by LSD and Games-Howell post-hoc tests. Statistical calculations were performed using SPSS 20 software. Differences were considered statistically significant at P < 0.05.
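As a minimal sketch of the comparisons described above (not the authors' SPSS workflow), the same tests can be reproduced with SciPy; the group arrays below are hypothetical stand-ins for the measured values.

```python
# Minimal sketch of the described statistics, assuming one 1-D array of
# measurements per group (hypothetical data; SPSS was used in the study).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
nor, ad, ad_hfd, h_h = (rng.normal(m, 1.0, size=8) for m in (5.0, 7.0, 6.0, 6.0))

t_stat, p_ttest = stats.ttest_ind(nor, ad)              # two-group comparison
f_stat, p_anova = stats.f_oneway(nor, ad, ad_hfd, h_h)  # more than two groups

# Post-hoc comparisons (LSD, Games-Howell) would follow a significant ANOVA;
# they are available in packages such as statsmodels or pingouin.
print(f"t-test P = {p_ttest:.3g}; ANOVA P = {p_anova:.3g} (threshold 0.05)")
```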
Results
Evaluation of high-fat diet intervention in the APP/PS1 mice
After 3 months of HFD administration, AD_HFD mice gained weight rapidly and markedly compared to normal chow diet mice (Figure 1A). HFD significantly accelerated the percentage of weight gain, and HLJDD could slow down the pace of weight gain caused by HFD. In the MWM test, the sequential changes in the average escape latency during spatial acquisition training are shown in Figure 1B.
With increasing training, the escape latency of each group gradually shortened. AD mice displayed longer average escape latencies compared to Nor mice on days 2, 3, and 4. AD_HFD and H_H mice did not differ significantly from AD mice. In the spatial probe test, the number of platform crossings in AD mice was lower than in the Nor group. Meanwhile, the percent distance and time spent in the target quadrant were significantly lower than those in the Nor group (P < 0.05). The number of platform crossings and the percent distance and time spent in the target quadrant in the AD_HFD group were higher than those in the AD group, indicating a tendency toward improvement of the cognitive impairment with the HFD intervention. There was no significant difference between the H_H group and the AD group (Figure 1C). The trajectory map of AD mice was disorganized and purposeless (Figure 1D). WB analysis revealed that the levels of Aβ42 were increased in the AD group compared to the Nor group and decreased in the AD_HFD and H_H groups compared with the AD group. The levels of PPAR-γ were decreased in AD mice compared to the Nor group, and increased in the AD_HFD and H_H groups compared with the AD group (Figure 1E). The mRNA expression of several proinflammatory cytokines, IL-1β, IL-6, TNF-α, MCP-1, IL-12A, IL-12B, and IFN-γ, was upregulated in the AD group compared to the Nor group. The mRNA levels of proinflammatory cytokines were slightly reduced in response to the HFD intervention. Inflammatory cytokine levels were also detected by enzyme-linked immunosorbent assay (ELISA). The expression of TNF-α and IL-1β was decreased in the AD_HFD and H_H groups compared to the Nor group, and IL-1β levels were significantly reduced (P < 0.001) (Figure 1F). Taken together, the HFD intervention might relieve inflammation in AD model mice.
Overview data of mRNA-seq and Ribo-seq
In the mRNA sequencing profiling, 18490 genes were detected; in the Ribo sequencing profiling, 17433 genes were detected. The gene expression levels for both the transcriptome and the translatome showed similar, approximately normal distributions. The distribution of expression abundance among the samples is shown in Supplementary Figures 1A,B. The peaks of the samples were generally consistent, indicating little difference in the overall expression of genes at the transcriptional and translational levels among the samples. The heat maps are shown in Figures 2A,B. The Pearson correlation coefficient (R) between Ribo-seq abundance and mRNA abundance was calculated, and scatter plots (Figure 2C) were drawn to analyze the correlation between the translational and transcriptional levels. The R-values of the Nor, AD, AD_HFD, and H_H groups were 0.62, 0.63, 0.65, and 0.70, indicating a moderate correlation between mRNA abundance and Ribo-seq abundance in all four groups. Principal component analysis (PCA) of mRNA-seq and Ribo-seq is shown in Supplementary Figures 1C,D. The ribosome-protected fragment (RPF) length distribution peaked at 28 nt (Supplementary Figure 1E). The protein-coding sequences (CDS) of mRNAs contained the majority of RPFs in the four groups, with average distribution ratios of 89.26, 88.03, 88.02, and 87.95% for the Nor, AD, AD_HFD, and H_H groups, respectively. The 5′ UTR and 3′ UTR distribution ratios were each less than 3% (Supplementary Figure 1F). These data demonstrate the reproducibility and reliability of this analysis. The identification and quantification information for the transcriptome and translatome is given in Supplementary Tables 2 and 3.
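The transcriptome-translatome correlation reported above is a straightforward per-gene comparison; a minimal sketch, assuming two pandas Series of gene-level abundances (hypothetical TPM values), could look as follows.

```python
# Sketch of the Ribo-seq vs. mRNA-seq correlation check, assuming two
# pandas Series of per-gene abundances indexed by gene ID (hypothetical).
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

def ribo_mrna_correlation(mrna: pd.Series, ribo: pd.Series) -> float:
    """Pearson R between log-transformed mRNA and Ribo-seq abundances."""
    common = mrna.index.intersection(ribo.index)
    x = np.log2(mrna.loc[common] + 1)  # pseudocount to handle zeros
    y = np.log2(ribo.loc[common] + 1)
    r, _ = pearsonr(x, y)
    return r
```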
Differential transcriptome analysis
Analysis of differentially expressed genes in Alzheimer's disease mice with high-fat diet
Based on the HISAT2 alignment results, we reconstructed the transcripts using StringTie and calculated the expression of all genes in each sample. Using the read-count data of gene expression levels of each sample, we analyzed the differences between groups using DESeq2 software, with P < 0.05 and |log2FC| ≥ 0.585 as the criteria for significant differentially expressed genes (DEGs). 87 genes were up-regulated and 125 genes were down-regulated in the AD group compared to the Nor group. Compared to the AD group, 116 and 120 genes were up- and down-regulated in the AD_HFD group, respectively. 30 genes were significantly differentially expressed in both the AD and AD_HFD groups compared to the Nor group; 13 of them were reduced and 16 were significantly increased in AD mice. In addition, Gpr151 was reduced in the AD group but increased in the AD_HFD group. The volcano plots and Venn diagrams of differentially expressed genes between the groups are shown in Figures 3A,B.
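Once DESeq2 has produced a results table, the thresholds above reduce to a simple filter; note that 0.585 ≈ log2(1.5), i.e. a 1.5-fold-change cutoff. A minimal sketch with pandas, assuming DESeq2's standard column names:

```python
# Apply the DEG thresholds (P < 0.05 and |log2FC| >= 0.585) to a
# DESeq2-style results table; column names follow DESeq2 conventions.
import pandas as pd

def call_degs(res: pd.DataFrame, p_col: str = "pvalue",
              lfc_col: str = "log2FoldChange"):
    """Return (up-regulated, down-regulated) gene tables."""
    sig = (res[p_col] < 0.05) & (res[lfc_col].abs() >= 0.585)
    up = res[sig & (res[lfc_col] > 0)]
    down = res[sig & (res[lfc_col] < 0)]
    return up, down
```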
KEGG analysis (Nor vs. AD group) identified significantly enriched pathways including apoptosis, the MAPK signaling pathway, neuroactive ligand-receptor interaction, purine metabolism, dopaminergic synapse, serotonergic synapse, etc. The AD and AD_HFD groups were significantly enriched in the KEGG pathways of dopaminergic synapse, synaptic vesicle cycle, cholinergic synapse, estrogen signaling pathway, neuroactive ligand-receptor interaction, MAPK signaling pathway, TNF signaling pathway, galactose metabolism, serotonergic synapse, and starch and sucrose metabolism (Figures 3D,F). The DEGs among the Nor, AD, and AD_HFD groups found by KEGG enrichment analysis were focused on the regulation of a variety of synapses, including dopaminergic, cholinergic, and serotonergic synapses, which indicated that the expression levels of neurotransmitter-regulating genes were altered in the brain tissue of AD mice and that the HFD intervention also had a substantial effect on these genes.
(Figure 1 caption: The expression of inflammatory cytokines detected by RT-PCR and ELISA in brain tissue. All results are expressed as the mean ± SEM; #P < 0.05, ##P < 0.01, ###P < 0.001 compared to the Nor group; *P < 0.05, **P < 0.01, ***P < 0.001 compared to the AD group.)
The results of the GO enrichment analysis are shown in Figures 3C,E. The differential genes in the brain tissue of normal and AD mice were mainly enriched in behavioral, nervous system, neurotransmitter, immune, and chemotactic terms in the biological process ontology. The results suggest that neurotransmitter metabolism and inflammatory responses were disturbed in the brain tissue of AD mice compared to normal mice, and that the HFD intervention could affect neurotransmitter metabolism and inflammatory responses in AD mice.
Dusp1, Gpr151, Th, Ddc, and Npas4 were upregulated, while Ccl21b and Slc1a1 were downregulated in the AD_HFD group compared with the AD group. Dual-specificity phosphatase (DUSP) plays an important immunomodulatory function through the DUSP-MAPK phosphatase pathway (28). DUSP1 plays an important negative regulatory role in the inflammatory immune response of macrophages induced by Toll-like receptor ligand stimulation (29). Increased Dusp1 expression in the HFD group suggested a close association with partial remission of inflammation in brain tissue. Tyrosine hydroxylase (TH) is a catecholamine rate-limiting enzyme that catalyzes the conversion of tyrosine to dihydroxyphenylalanine and regulates the production of the neurotransmitters dopamine, noradrenaline, and epinephrine.
(Figure 2 caption: Overview of genes identified by transcriptome and translatome. Heatmap of cluster analysis of DEGs in the transcriptome (A) and translatome (B). N-1, N-2, and N-3 represent the Nor group biological replicates; AD-1, AD-2, and AD-3 the AD group; AD_H-1, AD_H-2, and AD_H-3 the AD_HFD group. Blue represents the lowest and red the highest values.)
Aromatic L-amino acid decarboxylase (DDC) catalyzes the conversion of DOPA to dopamine. Both genes were more highly expressed under HFD, indicating that the HFD intervention strengthens dopamine neurotransmitter synthesis in the brain tissue of AD mice. Neuronal PAS domain protein 4 (NPAS4) mRNA was upregulated in the AD_HFD group, suggesting that HFD could enhance the regulation of glutamatergic and GABAergic synapses. In addition, GPR151 was associated with pineal synaptic function and nicotinic uptake.
GSEA analysis
To further understand the effect of HFD in AD, GSEA was performed. GSEA of KEGG pathways revealed that 94 of the 324 gene sets were upregulated in the AD group compared to the Nor group, including glycosphingolipid biosynthesis-ganglio series; 9 gene sets were downregulated in the AD group, including the pathways of folate biosynthesis, antigen processing and presentation, phenylalanine metabolism, tryptophan metabolism, alanine metabolism, ascorbate and aldarate metabolism, DNA replication, GPI-anchor biosynthesis, and carbon metabolism. 170 gene sets were upregulated in the AD_HFD group compared to the AD group, while 154 gene sets were downregulated. The results of the GSEA with P < 0.05 are shown in Figures 3G,H and Supplementary Figure 2.
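For readers reproducing this step in Python, a hedged sketch of preranked GSEA using the gseapy package follows; the ranked-list file and the mouse KEGG gene-set file are hypothetical inputs, and result-column names may differ between gseapy versions.

```python
# Hedged sketch of preranked GSEA with gseapy (not the authors' exact tool);
# "ad_vs_nor.rnk" and "kegg_mouse.gmt" are hypothetical input files.
import gseapy

res = gseapy.prerank(
    rnk="ad_vs_nor.rnk",         # two columns: gene, ranking score (e.g. log2FC)
    gene_sets="kegg_mouse.gmt",  # KEGG gene sets in .gmt format
    permutation_num=1000,
    seed=42,
    outdir=None,                 # keep results in memory only
)
# Gene sets at nominal P < 0.05, matching the threshold used in the text
sig = res.res2d[res.res2d["NOM p-val"] < 0.05]
print(sig[["Term", "NES", "NOM p-val"]].head())
```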
Differential genes in AD patients were mainly enriched in immune and metabolic pathways (30). A meta-analysis found that the levels of acetylcholine and GABA were significantly lower and the levels of glycine slightly higher in the cerebrospinal fluid of AD patients; meanwhile, anaerobic glycolysis, the pentose phosphate pathway, and the tricarboxylic acid cycle pathway were enhanced (31). Methionine, tryptophan, tyrosine, and purine metabolic pathways were altered in mild cognitive impairment (MCI) and AD patients (32). Phenylalanine, tyrosine, and tryptophan levels were reduced in the serum of AD patients (33). Analysis of mRNA expression levels showed the downregulation of multiple metabolic pathways in AD mice, including phenylalanine, tryptophan, and alanine metabolism. The effect of HFD on metabolism in AD mice was more extensive, with 18 of the 23 significantly upregulated gene sets being related to metabolic pathways, mainly involving the metabolism of amino acids and carbohydrates. Compared to normal mice, AD mice had abnormalities in lipid and amino acid metabolism. The metabolic pathways of phenylalanine, tryptophan, and alanine were down-regulated in the AD group but significantly up-regulated in the AD_HFD group, suggesting that HFD could regulate amino acid metabolism in the brain tissue of AD mice. The AD_HFD group also modulated the ascorbate and aldarate metabolism pathway. In addition to substance metabolism, HFD up-regulated the PPAR signaling pathway and the neuroactive ligand-receptor interaction pathway.
Differential gene expression trend analysis
A total of 8 patterns of gene trends among the Nor, AD, and AD_HFD groups were plotted (Figure 3I), with profile 2 being significant and containing 72 genes. The trend analysis suggested that the HFD intervention could back-regulate the profile 2 genes, which might be associated with moderating the AD pathological process. The profile 2 genes were significantly enriched in KEGG pathways such as cholinergic synapse, dopaminergic synapse, MAPK signaling pathway, synaptic vesicle cycle, purine metabolism, serotonergic synapse, etc. The genes involved were Slc6a3, Chrnb4, Fos, Hspa1b, Igfbp3, Gm45837, Dusp1, Slc18a2, Hspa1a, Itk, Gucy2c, Wnt9b, and Chrna6.
Slc6a3 encodes the dopamine transporter, and carriers of its variants show reduced cognitive performance and are at greater risk of developing dementia (34). In mouse models, activation of the endogenous Nlrp3 promoter was driven only by the dopaminergic neuron-specific Slc6a3 promoter. Dopaminergic neurons can accumulate NLRP3 inflammasome activators such as reactive oxygen species, dopamine metabolites, and misfolded proteins along with organismal aging. Activation of NLRP3 could induce inflammation and aggravate cognitive impairment during normal aging and neuropathological processes (35). Heat shock proteins (HSPs) protect cells from oxidative stress, and HSP70 inhibits tau protein aggregation (36), making it a candidate for treating AD types with aging-related conditions (37). The expression of mRNA encoding HSP70 was increased in AD patients (38,39), and APMAP levels were reduced. Nevertheless, HSPA1A and CD-M6PR levels, which control Aβ production, were increased (40). Proteomic studies found that HSPA1A levels in cerebrospinal fluid extracellular vesicles could monitor the course of AD (41). HSPA1B was associated with non-cognitive alterations in AD, and the HSPA1B gene showed a significant association with non-cognitive symptoms of AD (42). ITK regulates the signaling network downstream of T cell receptor signaling and influences the differentiation of effector T cells. Itk could promote autoimmunity and central nervous system (CNS) inflammation (43). Suppression or deletion of Itk resulted in a decrease in Tr1 and TH17 cells and an increase in Treg cells (44).
Analysis of differentially expressed genes in Alzheimer's disease mice combined with Huanglian Jiedu Decoction and high-fat diet
In comparison with the AD_HFD group, 95 genes were up-regulated and 108 genes were down-regulated after HLJDD administration (Supplementary Figure 3A). 52 genes were significantly changed among the AD, AD_HFD, and H_H groups, of which 26 were decreased and 26 increased in the AD_HFD group, while in the H_H group these gene levels were back-regulated. A total of 27 genes were significantly altered among the Nor, AD_HFD, and H_H groups, of which 17 were reduced and 9 increased in the AD_HFD group, while the H_H group significantly reversed the changes of these genes.
The AD_HFD and H_H groups were significantly enriched in neuroactive ligand-receptor interaction, tyrosine metabolism, folate biosynthesis, galactose metabolism, Th1 and Th2 cell differentiation, etc. GO terms were mainly enriched in nervous system, behavior, and neurotransmitter categories (Supplementary Figures 3B,C).
GSEA of the KEGG pathways revealed that 136 of the 324 gene sets were upregulated in the H_H group compared to the AD_HFD group, involving the oxidative phosphorylation pathway. The GSEA of genes that changed between the two groups is shown in Supplementary Figure 3D. HLJDD up-regulated carbohydrate digestion and absorption, the phospholipase D signaling pathway, the longevity regulating pathway, and axon regeneration, and down-regulated tyrosine metabolism, neuroactive ligand-receptor interaction, Th17 cell differentiation, the IL-17 signaling pathway, cholesterol metabolism, and the MAPK signaling pathway. The transcriptional level also indicated that HLJDD could inhibit the inflammatory response and regulate lipid metabolism, in addition to suggesting a regulatory effect on neuronal regeneration and neurotransmitter metabolism.
Analysis of genes related to cholesterol metabolism in different intervention methods
Using the transcriptome sequencing data as a benchmark, genes related to cholesterol transport, cholesterol biosynthesis, the low-density lipoprotein receptor (LDLR) gene family, and bile acid biosynthesis, transport, secretion, and metabolism were screened (Figure 4). Low-expression genes were filtered out. The expression levels of these gene sets were compared between groups and, combined with the previous quantitative results, the effect of HFD on gene expression in the brain tissue of APP/PS1 mice was further analyzed.
Analysis of the screened genes revealed that Pcsk9, a cholesterol transport-related gene, was significantly decreased in the H_H group. PCSK9 promotes low-density lipoprotein (LDL) degradation. The upregulation of Pcsk9 expression in the AD_HFD group might be closely related to the increase in cholesterol in brain tissues, while HLJDD significantly downregulated Pcsk9 expression. The expression of Slc10a4, a bile acid transport-related gene, was significantly reduced in the AD group, significantly increased in the AD_HFD group, and significantly reduced in the H_H group. SLC10A4 belongs to the family of sodium/bile acid cotransport proteins, which are activated by proteases to transport bile acids (45) and may be involved in the transport of bile acids in brain tissue. SLC10A4 was significantly reduced in brain tissue at lesions with highly phosphorylated tau protein, suggesting its close association with AD pathology (46). CYP27A1 regulates the synthesis of primary bile acids in the alternative pathway, and the results of our previous experiments on serum bile acids in mice also showed that HFD intervention increased the level of CDCA produced by the alternative pathway, once again confirming that HFD intervention can cause a significant increase in bile acid synthesis in AD mice.
Structural analysis of transcripts
The main variant type of single nucleotide polymorphism (SNP) was the non-synonymous SNV, and most SNPs were located in intronic regions. The SNP mutation types were transitions (80.22%) and transversions (19.78%). A->G accounted for the largest proportion of transitions, and G->T for the largest proportion of transversions. In the analysis of alternative splicing, skipped exons accounted for the largest share in all four groups (Supplementary Figure 4).
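The transition/transversion percentages above come from a simple allele-pair tally; a minimal sketch, assuming a list of (ref, alt) single-base substitutions as hypothetical input:

```python
# Minimal transition/transversion tally over (ref, alt) allele pairs.
TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

def ti_tv_fractions(snps):
    """Return the fraction of transitions and transversions among SNPs."""
    ti = sum((ref, alt) in TRANSITIONS for ref, alt in snps)
    n = len(snps)
    return ti / n, (n - ti) / n

# Hypothetical example: two transitions, one transversion
print(ti_tv_fractions([("A", "G"), ("C", "T"), ("G", "T")]))  # ~(0.667, 0.333)
```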
Differential translatome analysis
Differential translation gene (DTG) analysis between groups was performed using edgeR software. Compared to the Nor group, 336 DTGs were significantly up-regulated and 881 DTGs down-regulated in the AD group; 603 and 194 DTGs were significantly up- and down-regulated in the AD_HFD group compared to the AD group, respectively; and HLJDD administration (H_H) resulted in 851 and 292 DTGs being significantly up- and down-regulated, respectively.
A total of 382 differentially translated genes were co-varied among the Nor, AD, and AD_HFD groups, of which 59 were up-regulated and 323 down-regulated in the AD group, while the HFD intervention significantly back-regulated the changes in translated gene expression of the AD group. A total of 201 differentially translated genes were co-varied among the AD, AD_HFD, and H_H groups; 90 translated genes were down-regulated and 111 up-regulated in the AD_HFD group. Except for Zbtb16 and Tmem121b, all of these genes were significantly modulated by HLJDD.
(Figure 4 caption: Heatmap of cholesterol and bile acid-related genes in the transcriptome. Blue represents the lowest and orange the highest values.)
Joint analysis of transcription and translation
Analysis of differentially expressed genes and differential translation genes
There were 212 DEGs and 1217 DTGs between the Nor and AD groups, and 25 genes changed at both levels. The combined transcriptional and translational analysis revealed that Sgk1, Myo1f, Oip5, and Cst7 were up-regulated at both levels; Iqschfp, Gm45837, Itga2b, Alb, Npas4, Fos, Ccn1, and Dusp1 were down-regulated at both levels; Npy, Ptchd4, Clcc1, Thbs4, and Cdh12 were up-regulated in the transcriptome and down-regulated in the translatome; and Grid2ip, Gucy2c, Th, Eva1a, Ngb, Slc10a4, Hs3st3b1, and Hspb1 were down-regulated in the transcriptome and up-regulated in the translatome. Homodirectional genes were enriched in learning, memory, cognition, regulation of cell death, response to lipid, response to cAMP, negative regulation of the p38 MAPK cascade, negative regulation of microglial cell activation, nervous system development, and regulation of the neuroinflammatory response. The pathways of homodirectional genes were significantly enriched in fluid shear stress and atherosclerosis and the MAPK signaling pathway. Gene ontology-biological process (GO-BP) terms of opposite genes were enriched in neuron development, negative regulation of response to oxidative stress, neuron differentiation, and regulation of the cellular response to oxidative stress. The pathways of opposite genes were enriched in tyrosine metabolism, the VEGF signaling pathway, regulation of lipolysis in adipocytes, the adipocytokine signaling pathway, and dopaminergic synapse.
There were 236 DEGs and 797 DTGs between the AD and AD_HFD groups, and 19 genes changed at both levels. The combined transcriptional and translational analysis revealed that Ecm1, Reep4, and Cmtm3 were up-regulated at both levels; Gbp5, H1f3, and H1f4 were down-regulated at both levels; Sspo, Hoxb5, and Ccm2 were up-regulated in the transcriptome and down-regulated in the translatome; and Slc1a1, Glt8d2, Serinc2, Cd34, C1ra, Thbs4, Lct, Gm45208, Ltf, and Cnpy1 were down-regulated in the transcriptome and up-regulated in the translatome. The GO-BP terms of homodirectional genes related to cellular process and positive regulation of biological process; H1f3 and H1f4 are closely associated with histone modification. The pathways of homodirectional genes were significantly enriched in the nucleotide-binding oligomerization domain (NOD)-like receptor signaling pathway. The GO-BP terms of opposite genes related to metabolic process, cellular process, biological regulation, and developmental process; Ccm2, Cd34, Thbs4, and Slc1a1 are closely associated with blood vessel development. Opposite genes were enriched in phagosome, galactose metabolism, carbohydrate digestion and absorption, and synaptic vesicle cycle.
There were 203 DEGs and 1143 DTGs between the AD_HFD and H_H groups, and 18 genes changed at both levels. The combined transcriptional and translational analysis revealed that Alms1, Lcmt2, Ryr3, and Ppp1r10 were up-regulated at both levels; Zfp968, Ccn1, Npas4, Fos, Dusp1, and Mpeg1 were down-regulated at both levels; C1ra, Mpp4, and Thbs4 were up-regulated in the transcriptome and down-regulated in the translatome; and Otof, Abl2, Glra1, Hoxb5, and Nrap were down-regulated in the transcriptome and up-regulated in the translatome. The GO-BP terms of homodirectional genes were enriched in response to endogenous stimulus, learning, positive regulation of ceramide biosynthetic process, regulation of ceramide biosynthetic process, regulation of metabolic process, response to lipid, and cognition. The pathways of homodirectional genes were significantly enriched in the MAPK signaling pathway, Th1 and Th2 cell differentiation, the IL-17 signaling pathway, the TNF signaling pathway, and dopaminergic synapse. The GO-BP terms of opposite genes were enriched in endothelial cell-cell adhesion, negative regulation of transmission of nerve impulse, and behavior. Their pathways were significantly enriched in the ErbB signaling pathway and ECM-receptor interaction (Figure 2D).
Some of the genes were regulated differently in the transcriptome and translatome. These outcomes suggest that the regulation of translation has a relatively independent role in regulating gene expression compared to the regulation of transcription, and that translational regulation might sometimes completely reverse the effects of transcriptional regulation.
Analysis of differentially expressed genes and DTEGs
Using the Ribo-seq and mRNA-seq data from the same samples, the translation efficiency (TE) of each gene was calculated. TE showed a very weak correlation with transcript abundance in all four groups. 63 genes between the Nor and AD groups were significantly different in both TE and transcription and showed opposite trends ("opposite" genes). These opposite genes were mainly enriched in aging, neurotransmitter loading into synaptic vesicle, response to endogenous stimulus, and dopamine metabolic process terms. The pathways were significantly enriched in cocaine addiction, amphetamine addiction, alcoholism, tyrosine metabolism, dopaminergic synapse, and caffeine metabolism.
A total of 62 genes between the AD and AD_HFD groups were opposite. These genes were mainly enriched in neurotransmitter loading into synaptic vesicle, aminergic neurotransmitter loading into synaptic vesicle, response to nicotine, neurotransmitter transport, and regulation of neurotransmitter levels in the biological process ontology. The pathways were significantly enriched in neuroactive ligand-receptor interaction, dopaminergic synapse, synaptic vesicle cycle, alcoholism, and tyrosine metabolism.
Zfp968 and Ccn1 were significantly different at both levels between the AD_HFD and H_H groups and showed the same tendency. 60 genes between the AD_HFD and H_H groups were opposite. These genes were mainly enriched in skeletal system morphogenesis, embryonic skeletal system morphogenesis, and neuropeptide signaling pathway terms. The pathways were significantly enriched in neuroactive ligand-receptor interaction, galactose metabolism, and carbohydrate digestion and absorption.
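The TE calculation underlying this section is a per-gene ratio of ribosome-footprint to mRNA abundance; a minimal sketch, assuming TPM-normalized tables and a hypothetical expression floor:

```python
# Minimal sketch of the translation-efficiency (TE) calculation, assuming
# TPM-normalized per-gene abundances from Ribo-seq and mRNA-seq.
import numpy as np
import pandas as pd

def translation_efficiency(ribo_tpm: pd.Series, mrna_tpm: pd.Series,
                           min_tpm: float = 1.0) -> pd.Series:
    """TE = ribosome footprint abundance / mRNA abundance, in log2 space."""
    expressed = (ribo_tpm >= min_tpm) & (mrna_tpm >= min_tpm)  # hypothetical floor
    return np.log2(ribo_tpm[expressed] / mrna_tpm[expressed])
```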
Differential expression analysis of non-coding RNAs
The number of lncRNA transcripts reconstructed using StringTie was 7777. A total of 24717 circRNAs were identified in the brain tissue samples, including 713 known circRNAs and 24004 newly predicted circRNAs. A total of 1516 miRNAs were identified in the mouse brain tissue samples. The length distribution obtained by miRNA sequencing of all samples had a single peak at 22 bp. LncRNAs, circRNAs, and miRNAs with P < 0.05 and |log2FC| ≥ 0.585 were screened as significantly differentially expressed. According to these criteria, 114 and 137 differentially expressed lncRNAs (dif-lncRNAs), 154 and 159 differentially expressed circRNAs (dif-circRNAs), and 11 and 5 differentially expressed miRNAs (dif-miRNAs) were up- and down-regulated, respectively, in the AD group compared to the Nor group. There were 126 and 138 dif-lncRNAs, 193 and 202 dif-circRNAs, and 17 and 4 dif-miRNAs up- and down-regulated in the AD_HFD group compared to the AD group, and 150 and 129 dif-lncRNAs, 174 and 207 dif-circRNAs, and 7 and 29 dif-miRNAs up- and down-regulated in the H_H group compared to the AD_HFD group (Figures 5A-C).
Long non-coding RNA analysis
Because of the complex origin of lncRNAs and the large variation among lncRNAs produced by different transcripts of the same gene, lncRNAs were analyzed at the transcript level. The coding ability of new transcripts was predicted by CPC2 and CNCI software (Supplementary Figure 5A). The intersection of the non-coding predictions was taken as a reliable prediction of the outcome; 734 transcripts with no coding ability were predicted. We then performed de novo lncRNA prediction (Supplementary Figure 5C).
Long non-coding RNA-mRNA association analysis
Long non-coding RNAs are involved in the regulation of many post-transcriptional processes, similar to small RNAs such as miRNAs and snoRNAs. These regulations are often associated with complementary base pairing. A fraction of antisense lncRNAs might regulate gene silencing, transcription, and mRNA stability by binding to mRNAs of the sense strand. To reveal the interactions between antisense lncRNAs and mRNAs, we used RNAplex (47) to predict complementary binding between antisense lncRNAs and mRNAs.
We predicted antisense effects to obtain 3718 lncRNA-mRNA target gene pairs and cis effects to obtain 14398 lncRNA-mRNA target gene pairs. The pathways significantly enriched in KEGG for the antisense effects were pentose and glucuronate interconversions, the MAPK signaling pathway, the apelin signaling pathway, and metabolic pathways (Figure 5D). The pathways significantly enriched in KEGG for the cis effects were oxidative phosphorylation, metabolic pathways, Alzheimer's disease, Parkinson's disease, and the mTOR signaling pathway. Cis-acting lncRNA-mRNA pairs might be more involved in the AD processes studied here (Figure 5E).
Circular RNA analysis
Trend analysis was used to observe the tendencies of circRNA variation among the Nor, AD, and AD_HFD groups. There were 820 dif-circRNAs across the three groups, with significance in profile 2 and profile 5 (Figures 6A,B). These genes showed a back-regulated trend, suggesting that such circRNA source genes might be involved in the influence of the HFD intervention on AD. KEGG enrichment analysis revealed that the pathways significantly enriched in the genes of profile 2 were glutamatergic synapse, synaptic vesicle cycle, Rap1 signaling pathway, alanine metabolism, propanoate metabolism, and GABAergic synapse (Figure 6C). Pathways significantly enriched in profile 5 were the Rap1 signaling pathway, cholinergic synapse, cAMP signaling pathway, long-term depression, long-term potentiation, and RAS signaling pathway (Figure 6D). The enrichment circle diagrams of the GO enrichment analysis are shown in Figures 6E,F.
Competing endogenous RNA analysis
Screening of mRNAs, lncRNAs, miRNAs, and circRNAs in the AD and AD_HFD groups yielded 164, 225, 50, and 309 differential genes, respectively. The miRNA-target gene pairs were predicted and screened for target gene pairs with a Spearman's correlation coefficient less than or equal to 0.5, combined with ceRNA pairs showing positive expression correlation (Pearson's correlation coefficient greater than 0.7) to obtain potential ceRNA pairs; ceRNA pairs with a P-value less than 0.05 in a hypergeometric distribution test were retained as the final ceRNA pairs.
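The hypergeometric step asks whether two RNAs share more miRNAs than expected by chance; a minimal sketch with SciPy, where the miRNA sets and population size are hypothetical:

```python
# Hypergeometric test for shared-miRNA overlap of a candidate ceRNA pair.
from scipy.stats import hypergeom

def cerna_pair_pvalue(mirnas_a: set, mirnas_b: set, n_total_mirnas: int) -> float:
    """P(overlap >= observed) for two RNAs competing for shared miRNAs."""
    shared = len(mirnas_a & mirnas_b)
    # hypergeom.sf(k-1, M, n, N): population M, successes n, draws N
    return hypergeom.sf(shared - 1, n_total_mirnas,
                        len(mirnas_a), len(mirnas_b))

# Keep pairs with P < 0.05, after the correlation filters described above.
print(cerna_pair_pvalue({"miR-1", "miR-2"}, {"miR-2", "miR-3"}, 1516))
```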
Discussion
Among all the ceRNA relationship networks, mmu-miR-450b-3p and mmu-miR-6540-3p regulated the expression of Th and Ddc, respectively. Both are closely related to the regulation of catecholamine neurotransmitters. A variety of lncRNAs and circRNAs were also involved in the regulation of their gene expression, and the specific mechanisms need further validation. Insulin-like growth factor binding proteins (IGFBPs) are a family of proteins with high affinity for insulin-like growth factor (IGF). IGF-1 and IGFBP-3 are associated with oxidative stress and longevity (48). IGF-1 is thought to be a typical neuronal pro-survival factor in various brain injuries, promoting the clearance of Aβ and suppressing inflammatory responses. It can also affect cognitive performance by regulating synaptic plasticity, synaptic density, and neurotransmission (49).
In addition to regulating IGF activity, IGFBP3 can also independently regulate cell growth and survival. IGFBP3 can bind and regulate retinoid X receptor α and upregulate pro-apoptotic signaling pathways such as TNFα and TGFβ (50). Current experimental studies and epidemiological findings on its relevance to AD are controversial, with some studies suggesting that higher serum total IGF-I levels and higher total IGF-I/IGFBP-3 ratios were associated with less cognitive decline (51). Low serum levels of IGF-1 and IGFBP-3 in male individuals were associated with AD (52). IGFBP-3 inhibited Aβ42-induced apoptosis, and long-term exposure to Aβ42 could induce IGFBP-3 hypermethylation (53). In contrast, one study suggested that Aβ42 upregulates the expression of IGFBP3 (54), and increased IGFBP3 expression was seen in senile plaques and neurofibrillary tangles (55). Aβ could activate calcium-regulated phosphatases in astrocytes, causing the release of IGFBP3, which in turn induced tau protein phosphorylation (56).
In this study, Igfbp3 expression was reduced in the AD group, its expression was significantly upregulated by the HFD intervention, and its expression level was further increased by HLJDD administration. Further studies on IGFBP3 are still needed to clarify its effect on the course of AD. In the ceRNA network, lncRNA Rmst-208 (ENSMUST00000219444) and MSTRG.3992.1 competed with Igfbp3 to bind mmu-miR-551b-5p. In addition, Pou4f1 in the competition network inhibits neuronal apoptosis, Slc18a2 negatively regulates neurotransmitter transport, Shox2 and Irx5 are associated with neurodevelopment, and the transmembrane protein TMEM also plays an important role in human immune-related diseases as well as tumor development (57, 58). Slc18a2 is associated with the regulation of neurotransmitter transport and was regulated by mmu-miR-6540-3p and miR-551b-5p. The ceRNA analysis revealed that Lhx9 and lncRNA Acbd5 competed with Slc18a2 to bind mmu-miR-6540-3p, while Tmem265, Gbx2, Lhx9, lncRNA Rmst-208, and MSTRG.3992.1 competed with Igfbp3 to bind mmu-miR-551b-5p. The prognostic value and underlying mechanisms of the miRNAs, lncRNAs, and circRNAs that we identified need further study.
Interestingly, we also had a group of normal mice receiving the HFD intervention in the animal housing. However, transcriptome and translatome sequencing was not performed for this group. Compared to the normal diet, the HFD intervention reduced the number of platform crossings and the percentages of distance and time in the platform quadrant in the normal mice, whereas in the AD mice, on the contrary, the HFD intervention tended to ameliorate the cognitive impairment of the transgenic mice. This result means that HFD has different effects on different animals. In the mRNA trend analysis and GSEA pathway enrichment results, most of the pathways enriched by DEGs in the AD group were related to the metabolism of neurotransmitter-like substances. Our laboratory examined the concentrations of amino acids and neurotransmitters in mouse brain tissue and found significant changes in acetylcholine, GABA, glutamine, phenylalanine, lysine, arginine, proline, and alanine in the AD and AD_HFD groups (10). These results indicated that the DEGs were involved in the metabolism of amino acids and neurotransmitters in the brain tissue of AD mice. We confirmed that HFD modulated brain tissue levels of serotonin, choline, tryptophan, GABA, glycine, phenylalanine, methionine, hypoxanthine, and homovanillic acid in AD mice. In the present study, we also found that HFD could modulate the gene changes in profile 2 (transcriptome) and affect the metabolism of neurotransmitters in brain tissue. In addition, IGFBP is associated with apoptosis and tau protein phosphorylation; the increased transcription of Igfbp3 in the AD_HFD and H_H groups might be related to cognitive impairment.
PCSK9 was found to promote LDL degradation. The upregulation of Pcsk9 expression in the AD_HFD group might be closely related to the increase of cholesterol in brain tissue, while HLJDD significantly downregulated the expression of Pcsk9. SLC10A4 belongs to the family of sodium/bile acid cotransport proteins, which are activated by proteases to participate in the transport of bile acids in brain tissue. The expression of Slc10a4 was significantly decreased in the AD group and significantly increased in the AD_HFD group, and HLJDD significantly reduced the expression of Slc10a4. CYP27A1 regulates the synthesis of primary bile acids in the alternative pathway. The results of our previous experiments on serum bile acids in mice also showed that HFD increased the level of CDCA produced by the alternative pathway. This result once again confirmed that HFD intervention could cause a transformation of the bile acid synthesis pathway in AD mice.
During the imposed remodeling of gene expression, transcription-level alterations of certain mRNAs did not closely correlate with those of the encoded proteins, which could partially depend on the differential recruitment of mRNAs to translating ribosomes. The translatome can provide vital information on translational regulation for studying the process of protein production from mRNA translation. The translational response helps to establish complex genetic regulation that cannot be achieved by controlling transcription alone. This suggests that the roles of translational and transcriptional regulation are relatively independent. A large amount of data still needs to be mined in depth to discover more valuable regulatory networks, which will provide a basis and direction for later studies on AD and HFD intervention mechanisms, and thus a more comprehensive understanding of the occurrence and development of AD. In summary, our analysis revealed distinct yet related roles for translational and transcriptional regulation in the effect of HFD on AD mice, highlighting a critical role of translational regulation in AD.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found here: http://bigd.big.ac.cn/gsa/, CRA007307.
Ethics statement
The animal study was reviewed and approved by the Institutional Animal Care and Use Committee of Beijing Animal Science Co., Ltd.; the animal ethics approval number was IACUC-2018100605.
Risk factors influencing tunnel construction safety: Structural equation model approach
At present, the global tunnel construction industry is developing rapidly, but construction accidents are also common, and the large numbers of casualties and property losses are alarming. It is urgent to pay attention to the causes of tunnel construction accidents, ensure the safety of construction sites, and reduce tunnel construction accidents. Through literature and case analysis, we sorted out 35 typical causative factors of tunnel accidents for research and analysis, divided into 7 types. Based on this variable system, we prepared a measurement questionnaire, and 536 valid questionnaires were collected. A structural equation model (SEM) was used to study the relationships between these variables. The influence mechanisms and interactions between the variables are analyzed in depth in terms of influence intensity and path coefficients. The results showed that the following six latent variables significantly influence tunnel construction accidents: human factors, material factors, geological exploration design, technical management, safety management, and natural conditions. Natural conditions have the most significant impact, followed by human factors and safety management. Particular attention should be paid to education, training, and safety management in construction risk control. The structural model and research results help establish a causation theory of tunnel construction accidents, guide the formulation of safety management policies for tunnel construction projects, reduce tunnel accidents, and ensure construction safety.
Introduction
Compared with aboveground engineering, tunnel construction projects are limited by the geological environment, the degree of mechanization, the construction method, and other factors; as such, the probability and severity of tunnel construction accidents are higher than for other geotechnical construction accidents [1]. Once an accident occurs at a tunnel construction site, such as gushing water, a mud burst, or collapse of the tunnel face, it will affect the construction progress and may even cause casualties and severe economic losses. Therefore, analyzing and identifying the factors that influence tunnel construction accidents, studying their mutual influence, and determining how to control the risk sources of tunnel construction to improve construction safety have become urgent theoretical and practical problems.
In the early 1990s, scholars began studying the safety risk of tunnel construction. Nowadays, many researchers are actively exploring the factor systems that affect tunnel construction safety. Analyzing the risk sources of tunnel construction through mathematical modeling is a standard approach. Hu et al. (2021) constructed a safety risk system from risk sources and construction units and developed a safety risk assessment model for large-sized deep drainage tunnel construction based on the matter-element extension method. Yu et al. (2021) used a multi-objective particle swarm optimization algorithm to comprehensively consider the risks faced by the project and the dynamic environment, and used a multi-objective genetic algorithm to optimize the decision scheme. Yu and Wang (2019) analyzed the influence of several participating units on the safety risk of tunnel construction and determined evaluation indices for construction safety risk from the perspectives of owners, construction units, and design units.
Yang et al. (2021) used the work and resource breakdown structure methods to identify tunnel construction risks and used fault tree theory to qualitatively and quantitatively analyze the identified risk sources. Lin et al. (2020) combined fuzziness and randomness into a cloud model for risk assessment and constructed a risk-level evaluation model. Zhang et al. (2021) used the fault tree method to identify the correlation between the shield's main construction risks and the shield machine's fault alarm data, and established a risk prediction model based on a Bayesian network to strictly control subsequent risks when such faults occur. Each of these studies has its own focus. Still, their scope for comprehensively exploring the whole tunnel construction project is limited, and they concentrate on selecting schemes under particular factors. Therefore, exploring as completely as possible the influencing factors in the system of tunnel construction accidents, and testing hypotheses about their relationships, is a critical supplement to the current study of tunnel construction accident factors. All of these studies identified and analyzed safety risk factors; however, there are still few studies on the relationships between safety influencing factors in tunnel construction. Therefore, we use the SEM model to comprehensively study the interactions between risk factors under tunnel construction conditions. In recent years, the structural equation model (SEM), a statistical method, has been applied to reveal and test hypothetical models and to discover the interactions that exist between variables [2]. The SEM method can handle complex relationships between variables while estimating all coefficients in the model [3]. The SEM method is used in various disciplines, including the humanities and engineering: for example, the impact of technological and social lean practices on the performance of SMEs in the automotive industry [4]; the relationship between work attitudes and business values [5]; and the interaction between factors influencing tunnel construction accidents [6]. Therefore, the present study adopted the SEM approach to investigate the risk factors influencing tunnel accidents and their relationships.
The activities of this study are shown in Fig. 1. The risk factors (latent and observed variables) and research hypotheses were determined based on the existing literature and expert opinions. Next, questionnaires were prepared according to the variable system, and the questionnaire data were collected. The data were analyzed using SPSS software. Then, an SEM was developed using the AMOS software to verify the hypothesized relationships between risk factors for tunnel construction accidents. The analysis of impact intensity and path coefficients revealed the causal relationships and interactions within each variable. The results show the importance of and relationships among the risk factors affecting tunnel construction safety, and relevant recommendations are made to improve tunnel construction safety.
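Questionnaire data screened in SPSS are usually checked for internal consistency before model fitting; as a hedged stand-in for that step, a sketch using the pingouin package, with a hypothetical item table, could look like this.

```python
# Hedged sketch of a scale-reliability check (a stand-in for the SPSS step);
# "survey.csv" and the item column names are hypothetical.
import pandas as pd
import pingouin as pg

items = pd.read_csv("survey.csv")            # 536 respondents x 35 Likert items
human_scale = items[["hf1", "hf2", "hf3"]]   # hypothetical human-factor items

alpha, ci = pg.cronbach_alpha(data=human_scale)
print(f"Cronbach's alpha = {alpha:.2f}, 95% CI = {ci}")  # >= 0.7 is conventional
```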
Human factors
In Homans' social exchange theory [7], any group has four interconnected components: activities, interactions, thoughts and emotions, and group norms; this also applies to tunnel construction projects. Bird argues that unsafe human behavior is the cause of most accidents [8]. According to statistics, 30% of accidents are purely due to human error, while 60% occur due to a combination of human causes and natural factors. Human risks are frequent, including poor skills and professionalism, weak safety awareness, and poor physical and mental state. Tunnel construction projects involve personnel whose professional quality and competence are directly related to the quality of the project. Studies have shown that the lower the safety awareness and the more stressful the job, the more likely violations are to occur, creating hazards [9]. On the other hand, workers' attitudes toward risk also significantly impact several dimensions of project performance [10]. Therefore, we make the following hypothesis: H1 - Human factors have a positive effect on tunnel construction accidents.
Material factors
Much construction machinery and many materials and other items are present at a tunnel construction site. First, the orderliness of material storage at the construction site strongly affects the safety of the site environment; in addition, it is essential to ensure the quality of materials. Machinery and equipment are the main tools of the construction unit, are vital to realizing the project's construction, and are essential for the sustainable development of the construction unit. Large-scale, specialized tunnel construction equipment plays a critical role in tunnel boring progress and project quality, and the quality and operating status of the equipment must be strictly controlled. Beyond mechanical equipment, the quality of construction materials directly affects the quality and safety of the entire construction process. Therefore, we make the following hypothesis: H2 - Material factors positively affect tunnel construction accidents.
Safety management factors
The focus of tunnel construction safety management is to control the unsafe behavior of people and the unsafe state of things, implement the decisions and goals of safety management, eliminate accidents, avoid accidental injuries, and reduce accidental losses. As analyzed above, tunnel construction technology and mechanical equipment are constantly being upgraded; at the same time, it is essential to absorb new technologies well and continuously improve the level of operators, technicians, and managers through learning and training. Moreover, it has been found that most engineering construction accidents are caused by careless workers and management problems [11]. Accident prevention in construction is not only about developing a list of rules and conducting safety inspections, but also requires a health and safety management system that complies with legal requirements [12,13]. Also, when employees perceive that management cares about their safety, their safety performance is higher [14]. Frontline managers and supervisors are vital figures in accident prevention [15]. In summary, special training and attention to the technical level, safety awareness, and psychological state of personnel can effectively reduce the risk caused by personnel operation errors. Therefore, we make the following hypothesis: H3 - Safety management positively affects human factors.
As is well known, construction sites are among the most dangerous workplaces, and safety risks are potentially present in all aspects of the construction process [16]. "Safety first" should be one of the main objectives of any tunnel construction project, with a focus on health, safety, and environmental issues to ensure a safe construction site. The construction site contains various mechanical equipment, construction materials, and other supplies. In such a complex environment, there are many sources of risk: construction accidents such as injuries and explosions caused by poorly managed mechanical failures, improper storage of sharp tools, and equipment instability are common [17]. Therefore, we make the following hypothesis: H4: Safety management has a positive effect on materials factors.
According to accident causation theory [18], safety management failures are the root cause of most accidents, and management is the best way to reduce construction accidents. Once safety management is neglected, construction accidents become likely [19]. Safety management is the set of activities undertaken to achieve safe production in tunnel construction projects, and it directly affects the safety and order of the tunnel construction site. Therefore, we make the following hypothesis: H5: Safety management has a positive effect on tunnel construction accidents.
Technical management factors
Current tunnel construction projects are often characterized by large scale, long lead times, and high risks, which place high demands on construction technology and safety management. Furthermore, with the continuous advancement of construction informatization and automation, various new methods and technologies are gradually being applied to the construction process, which also increases the difficulty of technical management. Throughout tunnel construction, the surrounding rock and support system should be monitored and measured dynamically so that construction parameters can be adjusted in time [20,21], which is vital for guiding tunnel construction. Therefore, we make the following hypothesis: H6: Technical management has a positive effect on tunnel construction accidents.
Natural conditions
The complex natural environment inherently contains much unpredictability and variability, so tunnel construction is exposed to numerous risk factors that can lead to accidents [23]. Natural conditions are influenced by climate, weather, and other factors: for example, rising spring temperatures in alpine regions cause permafrost thaw and snowmelt, while rainy-season rainfall in monsoon regions suddenly increases the water content of the surrounding rock and can trigger mudslides. Such variation in the stability of the surrounding rock caused by climate and geographic region may pose significant construction risks and is a major source of safety risk in tunnel construction [24]. The probability of accidents is high under poor geological conditions [25]. In actual practice, survey work is often limited by the topography and geology of the project site, resulting in incomplete surveys [26]. In summary, we propose the following two hypotheses: H8: Natural conditions have a positive effect on tunnel construction accidents; H9: Natural conditions have a positive effect on technical management.
Geological exploration and design factors
The geological survey investigates and studies geological conditions such as rocks, stratigraphic structures, minerals, groundwater, and geomorphology in the tunnel construction area. Tunnel design and construction methods are based on the results of the geological survey. Geological exploration for tunnel construction is limited by technology, cost, and natural conditions, leading to problems such as lack of detail and low exploration accuracy [27]. The geological survey results significantly impact the design and construction stages and strongly support the construction project [28]. In particular, the exploration plan design, advanced geological forecasting, and construction drawing design should be refined to ensure that the design intent is implemented throughout the construction process. Therefore, we make the following hypotheses: H10: Geological exploration and design have a positive effect on tunnel construction accidents; H11: Technical management has a positive effect on geological exploration and design.
The surveyor is a critical factor in determining the quality of an underground engineering geological survey. In tunnel engineering investigations, a large number of non-professional workers are usually paired with a small number of professional technicians. However, the overall quality, professional knowledge, and safety awareness of these investigators often show serious deficiencies, making it difficult to ensure the quality of the investigation [26]. Therefore, we make the following hypothesis: H12: Human factors have a positive effect on geological exploration and design.
Special detection methods and construction techniques are required for areas with poor natural conditions to overcome their impact. In particular, when a tunnel has to pass through landslide, debris flow, soft soil, or other poor geological areas, or other special terrain, the necessary engineering and technical measures must be taken [29]. Therefore, we make the following hypothesis: H13: Natural conditions have a positive effect on geological exploration and design.
Measurement sub-model
The measurement sub-model is a confirmatory factor analysis (CFA) model, which describes the relationship between the observed and latent variables and measures the effect of the observed variables on the latent variables. The measurement sub-model consists of two equations, which represent the relationships between exogenous and endogenous latent variables and their measurement variables, respectively. The sub-model is given by

x = Λx ξ + δ [1]

y = Λy η + ε [2]

where x is the vector of exogenous manifest variables; Λx is the loading matrix relating the exogenous manifest variables to the exogenous latent variables; ξ is the vector of exogenous latent variables; δ is the error term of the exogenous manifest variables; y is the vector of endogenous manifest variables; Λy is the loading matrix relating the endogenous manifest variables to the endogenous latent variables; η is the vector of endogenous latent variables; and ε is the error term of the endogenous manifest variables. Note that the latent variables cannot be observed directly and must be described by measurement variables. As mentioned earlier, many risk factors can affect tunnel construction safety. We finally included 33 observed variables to be measured using the questionnaire, refined with expert opinions, as shown in Table 1. Each question item is scored on a 5-point Likert scale from 1 to 5, indicating that the variable is very unrelated, relatively unrelated, uncertain, definitely related, or significantly related to safety events. These questions allowed the respondents to assess the extent to which each factor influences the safety of a tunnel construction project. The questionnaire items are shown in Table 2.
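To make equations [1] and [2] concrete, the following sketch simulates one latent factor with four indicators. The sample size matches the number of valid questionnaires, but the loadings and error scale are hypothetical illustration values, not estimates from this study.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 536                                    # matches the number of valid questionnaires
xi = rng.normal(size=(n, 1))               # one exogenous latent factor, e.g., natural conditions
Lambda_x = np.array([[0.8], [0.7], [0.6], [0.75]])   # hypothetical loadings for NE1-NE4
delta = rng.normal(scale=0.5, size=(n, 4)) # measurement error

# Measurement equation [1]: each observed item is a loading-weighted latent score plus error
x = xi @ Lambda_x.T + delta
print(x.shape)   # (536, 4): four manifest indicators for one latent factor
```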
In addition to the initial 33 risk factors, two additional items, 'objective hazards' (TCA1) and 'subjective hazards' (TCA2), were assigned to determine how respondents weighed the impact of different types of risks on tunnel construction accidents. These two items were also assessed using a Likert scale (i.e., 1 for very unrelated to a safety incident and 5 for very related to a safety incident). Therefore, 35 observed variables were selected to measure the factors influencing tunnel construction safety, as shown in Table 1.
Structural sub-model
The structural sub-model describes the causal structural relationships between latent variables, which can be used to test and estimate whether the hypothesized causal relationships between latent variables are reasonable (it is therefore also called the causal model). The model is given by

η = βη + Γξ + ς [3]

where η is the vector of endogenous latent variables; β is the coefficient matrix describing the relationships among the endogenous latent variables; Γ is the coefficient matrix for the effects of the exogenous latent variables on the endogenous latent variables; ξ is the vector of exogenous latent variables; and ς is the residual term of the structural equation.
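For a recursive system, equation [3] can be solved in reduced form as η = (I − β)⁻¹(Γξ + ς). The sketch below propagates a unit change in one exogenous factor through a small hypothetical path system; the coefficients marked "reported" are quoted later in the paper, while all other values are placeholders, not fitted estimates from this study.

```python
import numpy as np

# Hypothetical illustration of equation [3], eta = B @ eta + Gamma @ xi + zeta,
# solved in reduced form: eta = inv(I - B) @ (Gamma @ xi + zeta).
# Endogenous order: [HF, TM, GED, TCA]; exogenous order: [SM, NC].
B = np.array([
    [0.0,  0.0,  0.0, 0.0],   # HF
    [0.0,  0.0,  0.0, 0.0],   # TM
    [0.34, 0.21, 0.0, 0.0],   # GED <- HF, TM (reported path coefficients)
    [0.2,  0.2,  0.2, 0.0],   # TCA <- HF, TM, GED (hypothetical direct effects)
])
Gamma = np.array([
    [0.17, 0.0],    # HF <- SM (reported)
    [0.34, 0.52],   # TM <- SM, NC (reported)
    [0.0,  0.23],   # GED <- NC (reported)
    [0.2,  0.3],    # TCA <- SM, NC (hypothetical)
])
xi = np.array([1.0, 0.0])    # unit shock to SM, with NC held at zero
zeta = np.zeros(4)

I = np.eye(4)
eta = np.linalg.solve(I - B, Gamma @ xi + zeta)
print(dict(zip(["HF", "TM", "GED", "TCA"], eta.round(3))))
```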
Based on the analysis above, six factors, including human factors, can lead to tunnel construction accidents. Moreover, according to Heinrich's chain theory of accident causation [30], an accident is not an isolated event. Instead, a sudden accident at a particular moment may result from a series of factors affecting each other. To assess the interaction between the factors, a preliminary conceptual model was constructed, as shown in Fig. 2.
Combined measurement and structural sub-models
The SEM path diagram of the factors affecting construction accidents, shown in Fig. 3, was obtained by combining the measurement and structural sub-models. Using formulas [1]–[3] and the questionnaire data, the causal relationships between variables were measured and the path coefficients were calculated for verification and analysis.
Questionnaire design
Based on the variable system above, we designed the measurement questionnaire and revised it according to expert opinion; the final version is shown in Table 2.
Participants
The questionnaire was released online through the Sojump platform; 581 copies were distributed and 536 valid questionnaires were returned. Among the returned questionnaires, 45 were considered invalid because the scores of every question were exactly the same, giving a valid-response rate of 92.3%. This number meets the validity requirement of a sample size greater than 10 times the number of observed variables [31]. The study was approved by the Human Research Ethics Committee of Fuzhou University, and every respondent signed an informed consent form. The respondents were mainly tunnel construction experts, construction managers, construction technicians, and construction workers; details are shown in Table 3.
Table 1
Variables affecting tunnel construction accidents.
1 HF1 What is the degree of correlation between whether tunnel construction site personnel obey commands during operations and carry out operations according to the prescribed processes and methods, and tunnel construction accidents?
2 HF2 What is the degree to which the level of responsibility and safety awareness of construction site personnel is related to tunnel construction accidents?
3 HF3 What is the degree of correlation between whether tunnel construction site personnel have undergone pre-job training, assessment, and certification, and whether their proficiency meets construction organization and safety management requirements, and tunnel construction accidents?
4 HF4 What is the degree of correlation between the age distribution and health of technical and operational personnel on site and tunnel construction accidents?
5 HF5 What is the degree of correlation between the adequacy of rest time and conditions for regulating the physical and mental state of tunnel construction personnel under work stress, and tunnel construction accidents?
6 MF1 What is the degree of correlation between the degree to which tunnel construction equipment is large-scale and specialized and tunnel construction accidents?
7 MF2 What is the degree of correlation between the quality of inspection work on machinery and equipment and the operational status of major equipment, and tunnel construction accidents?
8 MF3 What is the degree of correlation between the installation of escape channels and safety signs and the provision of safety protective equipment and spare equipment, and tunnel construction accidents?
9 MF4 What is the degree of correlation between the rigor of sampling, testing, and retention of all materials on site and the strictness of the acceptance process, and tunnel construction accidents?
10 MF5 What is the degree of correlation between the choice of materials management warehouses and materials storage sites and the sorting and storage of all materials (by, e.g., type, origin, size, batch), and tunnel construction accidents?
11 GED1 What is the degree of correlation between combining various types of surveys to accurately discern the grade of the surrounding rock and other results, and tunnel construction accidents?
12 GED2 What is the degree of correlation between the quantity, point design, and completion quality of pre-drilling and physical surveys, and tunnel construction accidents?
13 GED3 What is the degree of correlation between the reasonableness of design route selection and the ability to avoid adverse geological locations, and tunnel construction accidents?
14 GED4 What is the degree of correlation between the quality and depth of the design unit's construction drawing design in meeting construction needs and tunnel construction accidents?
15 GED5 What is the degree of correlation between the experience of the design representative assigned by the design unit during on-site construction and the ability to ensure that the design intent is carried out, and tunnel construction accidents?
16 TM1 What is the degree of correlation between the quality and implementation of expanded education and training and tunnel construction accidents?
17 TM2 What is the degree of correlation between the construction unit's fulfillment of its responsibility to prepare special construction technology plans for sub-projects and to design temporary work plans, and tunnel construction accidents?
18 TM3 What is the degree of correlation between whether the construction unit started construction in strict accordance with the construction plan and process, and tunnel construction accidents?
19 TM4 What is the degree of correlation between the stringency of quality inspection and acceptance of key tunnelling processes and tunnel construction accidents?
20 TM5 What is the degree of correlation between test equipment configuration, test qualification, testing frequency, and equipment maintenance records, and tunnel construction accidents?
21 TM6 What is the degree of correlation between the compliance and integrity of the monitoring scope and testing data analysis, and tunnel construction accidents?
22 TM7 What is the degree of correlation between the appropriateness of the advanced geological exploration method and the timeliness and detail of forecasting work, and tunnel construction accidents?
23 TM8 What is the degree of correlation between the timeliness of the design unit in adjusting the construction plan based on monitoring reports and advanced geological forecasts, and tunnel construction accidents?
24 TM9 What is the degree of correlation between the operability of the safety technical handouts and tunnel construction accidents?
25 SM1 What is the degree of correlation between whether the safety management system is sound and targeted, and tunnel construction accidents?
26 SM2 What is the degree of correlation between the implementation of the system (e.g., whether the main person in charge of safety is on site) and tunnel construction accidents?
27 SM3 What is the degree of correlation between the efficiency of information reception and site handover among departmental personnel and tunnel construction accidents?
28 SM4 What is the degree to which the integrity of the emergency planning system and emergency linkage mechanism is related to tunnel construction accidents?
29 SM5 What is the degree of correlation between the effectiveness of controlling and treating environmental problems involving occupational health (such as dust, noise, and food safety) and tunnel construction accidents?
30 NE1 What is the degree of correlation between the water recharge situation at the construction site and tunnel construction accidents?
31 NE2 What is the degree of correlation between the regional structure where the tunnel is located (seismic zone, regional fracture, etc.) and tunnel construction accidents?
32 NE3 What is the degree of correlation between the climatic conditions in the area where the tunnel is located (e.g., alpine region, rainy region) and tunnel construction accidents?
33 NE4 What is the degree of correlation between the geological environment, such as tunnel envelope lithology and geological structure, and tunnel construction accidents?
34 TCA1 What is the degree of correlation between objective hazards (mechanical damage, adverse geology, etc.) and tunnel construction accidents?
35 TCA2 What is the degree of correlation between subjective hazards (poor management, weak safety awareness, etc.) and tunnel construction accidents?
Reliability testing
SPSS 25.0 was used to measure the internal consistency of the developed questionnaire. As shown in Table 4, the Cronbach's alpha coefficients of all seven latent variables were larger than 0.7, indicating that the questionnaire had sufficient internal consistency and high reliability [32]. Next, Bartlett's test of sphericity [33] and the KMO test were used to examine the correlations between the variables. The p-value of Bartlett's test was 0.00 (<0.001), and the KMO value was 0.899 (>0.7). According to Kaiser [34], the closer the KMO value is to 1, the higher the correlation among the variables and the more suitable the data are for factor analysis. Therefore, this questionnaire was well suited for factor analysis [35]. Factor analysis was then conducted using principal component analysis with maximum variance (varimax) rotation, and seven principal components (HF, MF, GED, TM, SM, NC, TCA) were extracted with a cumulative contribution of 61.420%. The factor loadings of each observed variable were greater than 0.4, indicating that the questionnaire had good structural validity.
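For reference, Cronbach's alpha can be computed directly from the item-score matrix. A minimal sketch with simulated 5-point Likert data follows; the sample size mirrors the study, but the responses themselves are invented for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical correlated 5-point Likert responses for the five HF items
rng = np.random.default_rng(1)
base = rng.integers(1, 6, size=(536, 1))                     # shared latent tendency
scores = np.clip(base + rng.integers(-1, 2, size=(536, 5)), 1, 5)
print(round(cronbach_alpha(scores), 3))   # values above 0.7 indicate adequate reliability
```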
Finally, Pearson correlation analysis was used to test the relationships between the mean values of the factors. As shown in Table 5, the factor means were significantly correlated at the 0.01 significance level, and the correlations among the factors were low to moderately low, indicating good discriminant validity and a high degree of consistency between the content of each factor and the overall content of the questionnaire. In summary, the questionnaire developed in this paper has good reliability and validity.
Model testing and correction
The parameters of the proposed model were estimated to assess the fit of the overall model. After the goodness-of-fit test, however, the GFI and AGFI of the initial model failed the fit criteria. Modification index and critical ratio corrections were therefore performed to improve the model's explanatory power. The results are shown in Table 6: only the AGFI (0.888, improved from 0.871) remained slightly below the standard value (0.9), while all other indicators met their respective criteria, indicating that the modified model fits the data better than the initial model. Therefore, the optimized model is considered appropriate.
Hypothesis testing
The hypothesized relationships were tested by path analysis of the structural equation model. As shown in Table 7, the significance levels of all 13 paths were less than 0.01, indicating that all hypotheses were accepted. Therefore, the occurrence of tunnel construction accidents is profoundly influenced by the following risk factors: human factors, materials factors, geological exploration and design, technical management, safety management, and the natural environment. The final structural model is shown in Fig. 4.
Impact analysis
The structural equation model contains variables with multiple influences, and relationships between variables can act through two or more paths. On the one hand, human factors, materials factors, geological exploration and design, technical management, safety management, and natural conditions directly affect tunnel construction accidents. On the other hand: (a) safety management indirectly affects geological exploration and design by affecting technical management and, in turn, construction accidents in mountain tunnels; (b) safety management indirectly affects mountain tunnel construction accidents by affecting human factors; (c) natural conditions indirectly affect mountain tunnel construction accidents by affecting technical management; and (d) technical management, geological exploration and design, and human factors are intermediate variables. Therefore, each latent variable's direct, indirect, and total effects on construction accidents in mountain tunnels were calculated to provide an in-depth analysis of the degree of interaction between the variables; the results are shown in Table 8. The direct effect is the impact of a latent variable on tunnel construction accidents, given by the path coefficient pointing directly to TCA in Fig. 4. The indirect effect is the influence of a factor on TCA exerted through another factor, calculated as the product of all path coefficients along the route. The total effect is the sum of the direct and indirect effects.
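As a sketch of this effect decomposition, the snippet below multiplies path coefficients along each mediation chain from safety management (SM) to accidents (TCA) and adds the direct path. Coefficients marked "reported" are quoted in the Discussion; the direct paths into TCA are hypothetical placeholders, not values from Table 8.

```python
# Indirect effect of safety management (SM) on accidents (TCA) via the two
# mediation chains described in the text.
paths = {
    ("SM", "TM"): 0.34,    # reported
    ("SM", "HF"): 0.17,    # reported
    ("TM", "GED"): 0.21,   # reported
    ("GED", "TCA"): 0.25,  # hypothetical
    ("HF", "TCA"): 0.30,   # hypothetical
    ("SM", "TCA"): 0.20,   # hypothetical direct effect
}

def chain_effect(route):
    """Indirect effect along a route = product of its path coefficients."""
    out = 1.0
    for a, b in zip(route, route[1:]):
        out *= paths[(a, b)]
    return out

indirect = chain_effect(("SM", "TM", "GED", "TCA")) + chain_effect(("SM", "HF", "TCA"))
total = paths[("SM", "TCA")] + indirect
print(f"indirect = {indirect:.3f}, total = {total:.3f}")
```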
Discussion
This paper considered the effects of multiple factors and analyzed tunnel construction accidents using the structural equation modeling approach. We proposed a variable system of factors influencing tunnel construction accidents and constructed a structural equation model, from which the path coefficients and influence relationships among the factors were obtained.
As shown in Table 8, the six influencing factors on tunnel construction accidents are, in descending order of total effect, natural conditions, human factors, safety management, geological exploration and design, technical management, and materials factors. Natural conditions, such as poor rock stability and ground settlement at the construction site, are the leading causes of collapses and water inrushes, so natural conditions significantly impact tunnel construction accidents. Beyond its direct effect, safety management has a relatively sizeable indirect impact through other factors, and special attention should be paid to safety management in construction risk control. As shown in Fig. 4, natural conditions significantly impact technical management (β = 0.52) and geological exploration and design (β = 0.23). Unique natural conditions lead to many potential risk events that severely affect the drilling design and indoor experiments of the survey work [36,37]. Construction risks caused by natural factors often require preliminary investigation through exploration work, followed by dedicated designs for hazardous points, to indirectly reduce construction accidents. Therefore, site selection for a tunnel project should be based on a comprehensive investigation of the local surrounding rock conditions, geological formations, and other natural factors to reduce the difficulty of subsequent technical work. The impact of natural conditions on technical management lies in monitoring analysis and geological analysis, whose results are essential guidelines for specialized work such as the selection of construction methods.
In addition, human factors (β = 0.34) and technical management (β = 0.21) also affect geological survey and design. Although natural conditions limit the results of the geological survey, some of the difficulties they cause can be overcome through the efforts of professional survey and design personnel. Changes in surrounding rock cracks before and during tunnel construction can be accurately measured, and appropriate construction schemes can be designed in cooperation with professional designers to reduce unexpected collapse accidents. Furthermore, advanced tunnel geological survey instruments, equipment, technologies, and methods are constantly updated, requiring workers to have excellent technical literacy and practical operating skills. Therefore, professional tunnel engineers should provide technical briefings to the team in all construction stages and strictly follow the critical tunnelling processes, especially the hidden works of anchors, small steel pipes, and steel arch frames. The quality of geological exploration and design should be improved to ensure tunnel construction safety.
Safety management significantly affects materials factors (β = 0.30), technical management (β = 0.34), and human factors (β = 0.17), with the strongest effect on technical management. The selection, maintenance, and preservation of mechanical equipment, safety materials, and construction materials at construction sites are inseparable from safety management. Another primary task of safety management is the management of people: human behavior is trained and guided by the management unit, and personnel's working condition and ability directly affect the execution of construction technology and construction quality [38]. Therefore, management units should coordinate the management of personnel, materials, and the site so that they play a joint role. Specific management programs should be provided for materials and people, such as procurement and maintenance programs for materials, machinery, and other equipment, and safety education and training programs.
This study also has some limitations. First, although we considered different identities of tunnel practitioners, not all roles in tunnel construction were included. Second, since the questionnaire is subjective, the respondents may have been influenced by social expectations and may not have honestly reported the risk factors in tunnel construction [39]. In the future, the scope and number of respondents can be expanded to reduce these adverse impacts on the results. In addition, with the development of tunnel construction technologies, the influencing factors of tunnel construction accidents can be explored from more dimensions.
Concluding remarks
This study reviewed the literature on tunnel construction characteristics and tunnel accidents. Combined with practical engineering experience, a variable system of the influencing factors of tunnel construction accidents was established, and the interaction mechanisms between the variables were analyzed using a structural equation model. Based on the research results, the following conclusions are drawn.
1. The structural equation model effectively describes the internal logical relations among the factors affecting tunnel construction accidents. Furthermore, the existing research conclusions are enriched, as the influence relationships among the variables were tested and all 13 hypothesized paths were supported.
2. Tunnel construction accidents are mainly affected by six factors, in descending order of influence: natural conditions, human factors, safety management, geological exploration and design, technical management, and materials factors.
3. Tunnel construction accidents are easily affected by natural conditions and human factors, so special attention should be paid to cutting off the links between these factors and accidents. Risks in the natural environment should be identified through survey and design, and a long-term practical training and education system should be developed to improve tunnel construction practitioners' safety awareness and professional skills, thereby effectively reducing construction accidents.
Habitat Use by Fishes in Coral Reefs, Seagrass Beds and Mangrove Habitats in the Philippines
Understanding the interconnectivity of organisms among different habitats is a key requirement for generating effective management plans in coastal ecosystems, particularly when determining component habitat structures in marine protected areas. To elucidate the patterns of habitat use by fishes among coral, seagrass, and mangrove habitats, and between natural and transplanted mangroves, visual censuses were conducted semiannually at two sites in the Philippines during September and March 2010–2012. In total, 265 species and 15,930 individuals were recorded. Species richness and abundance of fishes were significantly higher in coral reefs (234 species, 12,306 individuals) than in seagrass (38 species, 1,198 individuals) and mangrove (47 species, 2,426 individuals) habitats. Similarity tests revealed a highly significant difference among the three habitats. Fishes exhibited two different strategies for habitat use, inhabiting either a single (85.6% of recorded species) or several habitats (14.4%). Some fish that utilized multiple habitats, such as Lutjanus monostigma and Parupeneus barberinus, showed possible ontogenetic habitat shifts from mangroves and/or seagrass habitats to coral reefs. Moreover, over 20% of commercial fish species used multiple habitats, highlighting the importance of including different habitat types within marine protected areas to achieve efficient and effective resource management. Neither species richness nor abundance of fishes significantly differed between natural and transplanted mangroves. In addition, 14 fish species were recorded in a 20-year-old transplanted mangrove area, and over 90% of these species used multiple habitats, further demonstrating the key role of transplanted mangroves as a reef fish habitat in this region.
Introduction
In the tropics, seagrass beds and mangroves are formed in the shallow reef flat zone and the near coastline/estuarine region, respectively. Habitat-specific fish species inhabit either of these habitats, whereas some coral reef fishes, such as Lutjanidae, Haemulidae, Lethrinidae, Scaridae, Siganidae, and several other families, utilize these habitats as their nursery grounds [1][2][3][4][5][6][7][8][9][10][11][12][13]. In addition, several fish species show diel movements among these habitats for feeding or shelter [14,15]. Local connectivity by fishes and its importance among coral, seagrass, and mangrove ecosystems have received a great deal of attention in recent years [16][17][18][19]. Previous studies have indicated that the intensity and characteristics of connectivity by reef fishes widely fluctuate depending on regional differences and/or geographical conditions (see [18]), while the intensity also weakens depending on the distance between habitats [20][21][22]. Although several studies have been conducted in various regions under different conditions, few have been performed in Southeast Asian countries relative to other regions such as the Caribbean and Australia (see [18]). Furthermore, only a few studies have evaluated differences in the intensity or effectiveness of connectivity by reef fishes between non-estuarine and transplanted mangroves in the Indo-Pacific [11,[21][22][23][24].
The rates of environmental deterioration at various scales and unabated overfishing continue to increase worldwide, resulting in reductions of fishery resources. Open access to fishing grounds [25] remains widespread, leaving resources at the brink of collapse [26,27]. While fish resources face threats of the loss of both biodiversity and stock replenishment, the degradation of coral reef ecosystems has also become a huge social problem (e.g., [28][29][30]). The establishment of marine protected areas (MPAs) is seen as an effective tool to protect coastal habitats and to enhance nearshore fisheries, especially in tropical regions (e.g., [31][32][33]). When establishing an MPA to protect fishery resources, the life history and habitat use of target species must first be clearly determined. If target fishes exhibit ontogenetic habitat shifts (i.e., habitat changes with growth stage) or if fishes move daily among different habitats for feeding or shelter, all habitats being used may be equally important, regardless of scale and type; therefore, each of these habitats must be included within the MPA boundaries. Even when establishing an MPA for biodiversity conservation, the inclusion of multiple habitats could further enhance its effectiveness, as such an MPA may protect not only habitat-specific species but also those that inhabit multiple habitats. For these reasons, understanding the interconnectivity of reef fishes among different habitats is a key requirement for making effective management plans in coastal ecosystems, particularly for determining the component habitat structures of an MPA.
Most coastal areas in the Philippines are located in the Coral Triangle, an area known for the highest coral biodiversity worldwide [34,35]. However, habitat loss along the Philippine coasts has increased remarkably in recent years [36][37][38]. In particular, more than half of the natural mangroves had disappeared by 1994, mainly due to the establishment of fish ponds [36,39]. Furthermore, seagrass beds are decreasing drastically, even though these habitats serve as fishing grounds that are as important as coral reefs for various commercially important species such as Lethrinidae and Siganidae [40][41][42]. Since the 1930s–1950s, mangrove replantation projects emphasizing the participation of local communities have been implemented in the Visayas region [43]. In recent years, however, sustaining the health of fishery resources has become difficult in the Philippines, particularly due to poor environmental governance and the lack of effective, coherent monitoring programs. This situation has led to increased overfishing and other forms of environmental deterioration, which consequently have become growing concerns for the proper management of fishery resources and MPAs [44][45][46]. From 1967 to the present, nearly 1,000 MPAs have been established in the Philippines [46]. Upon careful review, most of them focus on coral reefs, and relatively few incorporate multiple habitats. In addition, the implications of connectivity among different habitats have been underexplored and are poorly understood [47,48]. In the Philippines, several studies have evaluated the effectiveness of MPA management in terms of fishery regulations (e.g., [49,50]). Furthermore, the effectiveness of fishery resource conservation was verified by a series of studies on the Sumilon and Apo islands (e.g., [51][52][53]). These studies documented the effects of MPAs on the enhancement of fisheries; however, they focused on only one type of ecosystem (coral reefs) and disregarded the importance of the multiple habitats used by some commercially important fish species. By determining the specific features of each habitat, their connectivity, and their corresponding importance, our findings should be valuable for fishery resource conservation and management in the Philippines. Furthermore, we also compared transplanted mangroves to other habitat types; such comparisons have been rare in previous studies.
The present study was designed to address differences in the pattern of habitat use by fishes, with a focus on commercial fishery species, among coral, seagrass, and mangrove habitats in the Philippines and whether transplanted and natural mangroves are used as common habitats for adult fishes and/or as potential nursery habitats for juveniles [54]. Based on our results, we further discuss the importance of including multiple habitats within MPAs.
Study Design
Field surveys were conducted semiannually for 2 years (2010 and 2011) during months representing the rainy season (September) and the dry season (March) at Puerto Galera (PG; 13°30′ N, 120°57′ E) off northern Mindoro Island, and at Laguindingan (LD; 8°37′ N, 124°28′ E) off northern Mindanao Island, the Philippines (Figure 1). The study site at PG was situated in a fringing reef with the reef flat zone located along both the western and eastern sides of Manila Channel within Puerto Galera Bay. The study site at LD was located in a fringing reef where the reef flat zone faces the open sea. The MPAs of PG (entire study site) and LD were established in 2006 and 2002, respectively, both with a strict no-take-zone policy (Figure 1). Coral reefs at both sites are composed of hermatypic corals (e.g., tabular and branching Acropora; living coral coverage >80%), which are more abundant near the reef margins. Dominant seagrass species at PG were Thalassia hemprichii (15.9% of cover), Halodule pinifolia (15.0%), and Cymodocea rotundata (12.0%), whereas T. hemprichii (63.6%) and Enhalus acoroides (4.3%) were dominant at LD. The mean (± SD) canopy heights at PG and LD were 6.4 ± 5.0 cm (n = 148 quadrats) and 11.4 ± 3.4 cm (n = 156 quadrats), respectively. Rhizophora apiculata and Sonneratia sp. were the dominant mangrove species at PG, whereas only R. apiculata was present at LD. Mangrove areas at both sites were composed of clear-water non-estuarine mangroves (Figure 1). The mangroves at LD have been planted along seagrass beds near the shoreline since 1992, and they presently form a band of young and mature trees that protects the coastal communities from strong winds (Honda, personal communication). Fish distribution patterns among coral reefs, seagrass beds, and mangrove areas were assessed during each season using an underwater visual transect survey method. In each habitat, seven 1 × 20-m (20 m²) belt transects were established haphazardly using a scaled rope (see also [8,9]). Transects were separated from one another by at least 5 m. In PG, three and four line transects were established in each habitat along the western and eastern sides, respectively, of Manila Channel. All fish visual censuses (FVCs) were conducted in daytime between 08:00 and 16:00 h, and fishes were identified to the lowest taxonomic level whenever possible. Individual fish size (total length) was also recorded underwater using a ruler attached to the recording slate. In coral reefs, FVCs were conducted via SCUBA or snorkeling depending on the water depth (2.0–8.0 m). In seagrass beds (0.5–1.0 m deep at low tide, 1.5–2.0 m at high tide) and mangrove areas (0.5–1.0 m at low tide, 1.0–1.5 m at high tide), only snorkeling was used and FVCs were conducted when the depth ranged from 1.0 to 1.5 m to avoid tidal effects [55]. Visibility within the water at any transect generally exceeded 7 m. Sea surface water temperature at PG and LD was 30.0°C and 30.4°C in September and 27.8°C and 28.8°C in March, respectively. Salinity at both sites was about 34‰, and no estuaries were present near either site. All methods utilized in the present study were conducted under the permit requirements of the municipal government of PG and the Barangay Tubajon in LD.
Data Analysis
At both sites, data collected from each habitat type were analyzed for species composition and density. Because assumptions of homogeneity of variance could not be met by some data even after transformations, nonparametric Steel-Dwass tests were used to determine whether species and individual numbers of fishes differed among habitats for each site, month, and year (see also [9]). Moreover, these variables were compared between PG and LD mangrove areas in each sampling month using Mann-Whitney U-tests. Family composition of species and individuals in each habitat at each site was also estimated.
The similarity of fish assemblages among habitats was examined using data from the seven transects within each habitat for every month. The Chao index [56] was used for this analysis, and results were visualized using nonmetric multidimensional scaling (NMDS). Similarity tests among four variables (year, month, site, and habitat) were conducted using nonparametric multivariate analysis of variance (NPMANOVA; α = 0.05). All statistical analyses were performed using the "vegan" package of R ver. 2.14.1 (R Development Core Team).
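The authors performed this analysis with the Chao index and NMDS in R's vegan package. As a rough Python analogue (a sketch, not the original pipeline), the snippet below substitutes the Bray–Curtis dissimilarity for the Chao index and applies scikit-learn's nonmetric MDS to simulated transect-by-species counts.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(2)
# Hypothetical abundance matrix: 12 transects x 20 species, with the first and
# last six transects drawn from two different community profiles
counts = np.vstack([
    rng.poisson(lam=np.r_[np.full(10, 5.0), np.full(10, 0.5)], size=(6, 20)),
    rng.poisson(lam=np.r_[np.full(10, 0.5), np.full(10, 5.0)], size=(6, 20)),
])

# Bray-Curtis dissimilarity as a stand-in for the Chao index used in the paper
d = squareform(pdist(counts, metric="braycurtis"))

# Nonmetric MDS on the precomputed dissimilarity matrix
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
coords = nmds.fit_transform(d)
print(coords.round(2))   # two clusters are expected, one per community profile
```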
Counts of each fish species that occurred in two or more habitats were analyzed to verify the presence of fish that use multiple habitats. To avoid incidental detection, instances of only one individual of a species recorded in a habitat, along with unidentified species, were excluded from analysis. Moreover, based on the commercial fishery species listed in FishBase [57], the number of commercial fish species utilizing a particular habitat type or a combination of habitats was determined. Using these data, the habitat or combination of habitats favored by a large number of commercial species was evaluated. Here, commercial species included species listed as "highly commercial" or "commercial" in FishBase, while other categories, such as "minor commercial," "subsistence fisheries," "of no interest," and "no information," were not regarded as commercial species. Pomacentrus lepidogenys, which was categorized as "highly commercial" in FishBase, was considered a noncommercial species together with other pomacentrids, because it is highly unlikely that this species is of high fishery importance in the Philippines. Moreover, for fish that exhibited possible ontogenetic habitat shifts, the size distribution pattern in each habitat was visualized. Fish species were considered to undergo possible ontogenetic habitat shifts based on individual counts or mean length. If the individual count of a fish species reached 10 or more within juvenile habitats (seagrass and/or mangrove) and five or more in coral reefs, then this fish species was considered to undergo a potential ontogenetic habitat shift. In addition, if the mean total length of a fish species from the coral reef was significantly longer than that in the juvenile habitat (Mann-Whitney U-test, α = 0.05), then such a fish species may also exhibit an ontogenetic habitat shift. Species belonging to the Atherinidae and Gobiidae families were excluded from all analyses because they are pelagic and small cryptic fishes, respectively.
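A minimal sketch of these two screening rules follows, using scipy's Mann–Whitney U test. The thresholds mirror the text, but the length data are invented for illustration.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def possible_ontogenetic_shift(juvenile_tl, coral_tl, alpha=0.05):
    """Apply the two screening rules described above to one species.

    juvenile_tl / coral_tl: total lengths (cm) recorded in the juvenile
    habitat (seagrass and/or mangrove) and in coral reefs, respectively.
    """
    # Rule 1: at least 10 individuals in the juvenile habitat and 5 in coral reefs
    enough_counts = len(juvenile_tl) >= 10 and len(coral_tl) >= 5
    # Rule 2: coral-reef individuals significantly longer (one-sided Mann-Whitney U)
    _, p = mannwhitneyu(coral_tl, juvenile_tl, alternative="greater")
    return enough_counts or p < alpha

# Hypothetical length data for one species
rng = np.random.default_rng(3)
seagrass = rng.normal(8.0, 1.5, size=12)   # small juveniles in seagrass
coral = rng.normal(18.0, 3.0, size=7)      # larger individuals on the reef
print(possible_ontogenetic_shift(seagrass, coral))
```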
Fish Assemblage Structure
In total, 15,930 individuals, belonging to 265 species in 45 families, were recorded (Table S1). In coral reefs, 12,305 individuals comprising 234 species in 37 families were recorded. In contrast, fewer fish were recorded in seagrass beds (1,198 individuals belonging to 38 species in 18 families) and mangrove areas (2,426 individuals belonging to 47 species in 24 families). The mean numbers of species and individuals per transect in coral areas at both PG and LD were significantly higher than those in seagrass and mangrove habitats (P < 0.05), with four exceptions for the number of individuals (seagrass beds in September 2010 and March 2011 at PG, and mangrove areas in September 2011 at both PG and LD; Figure 2). Seagrass and mangrove habitats did not significantly differ in terms of either the number of species or individuals (P > 0.05), although the numbers of both species and individuals at LD in March 2011 differed significantly between seagrass and mangrove habitats. Neither the number of species nor individuals significantly differed between PG and LD mangroves (P > 0.05), except during September 2011, when the number of species in mangroves was higher at PG than at LD.
The three most dominant families in terms of the number of species in PG coral reefs were Pomacentridae, Labridae, and Chaetodontidae (Figure 3a). Dominant families in LD were Labridae and Pomacentridae, and the combined abundance of these families accounted for more than 40% at both PG and LD. For fish in seagrass beds at PG, Labridae accounted for about 30% of all species, followed by Muraenidae, Syngnathidae, Nemipteridae, and Scaridae, in that order. Labridae and Apogonidae together accounted for more than half of the fish species in LD seagrass beds. In mangrove areas, the three most dominant families were Nemipteridae, Pomacentridae, and Labridae at PG and Nemipteridae, Siganidae, and Lutjanidae at LD. Pomacentridae (represented by Chromis ternatensis, Acanthochromis polyacanthus, and Pomacentrus moluccensis) was the most dominant family in terms of the number of individuals in coral reefs at both PG (69.5%) and LD (58.6%), followed by Serranidae (represented by Pseudanthias huchti; Figure 3b). Fish family composition in seagrass beds differed between PG and LD. Only two species, Plotosus lineatus (Plotosidae) and Siganus spinus (Siganidae), accounted for over 80% of all fish species at PG. In contrast, at LD, Apogonidae (represented by Apogon ceramensis), Labridae (represented by Halichoeres argus and Halichoeres scapularis), and Siganidae (represented by S. spinus) were the three most dominant families, together comprising 80% of fish individuals. In mangrove areas, Plotosidae (represented only by P. lineatus) and Apogonidae (represented by Sphaeramia orbicularis and A. ceramensis) together accounted for about 70% of fish individuals at PG. Apogonidae (also represented by S. orbicularis and A. ceramensis) accounted for more than 80% of fish individuals at LD.
Similarity indices revealed that fish communities could be divided into three large groups (coral, seagrass, and mangrove habitat types) regardless of sampling month and site (Figure 4). Results of similarity tests using NPMANOVA revealed a highly significant difference among habitats (F = 11.28, P < 0.001). Other variables, such as sampling period (year, month), did not significantly affect patterns of fish assemblage structure (F < 2.0, P > 0.05), although a marginal difference was observed between sites (F = 1.92, P = 0.08).
Fishes Utilized Multiple Habitats
In total, 29 fish species, accounting for 14.4% of recorded species, were found in multiple habitats (Table 1). Six species were recorded in both coral and seagrass habitats, three of which belonged to Labridae (Table S1). Nine species were recorded in both coral and mangrove habitats: three belonged to Pomacentridae, while Lutjanidae and Siganidae were represented by two species each. Six species were observed in both seagrass and mangrove habitats, each belonging to a different family. Eight species were recorded in all three habitats, with Labridae and Siganidae represented by two species each.
Among commercial species, the total number of species in coral reefs was 27 (62.8%), which was considerably greater than the numbers found in seagrass beds (two species, 4.7%) and mangrove areas (four species, 9.3%; Table 1). Ten commercial species (23.3%) were recorded in multiple habitats. Sixteen commercial species used multiple habitats or exclusively used either seagrass beds or mangrove areas, accounting for 37.2% of commercial species (i.e., the "all except coral reef only" group in Table 1).
Fourteen fish species were recorded in the transplanted mangrove area at LD, 13 of which also utilized coral and/or seagrass habitats (Table 1). Even though minimal differences were found between PG and LD in terms of the numbers of total species, commercial species, and multiple-habitat users, the number of all mangrove-using fish species, including those with commercial value, was twice as high at PG as at LD (Table 1). In addition, PG and LD differed greatly in the number of species observed only in the mangrove habitat (Table 1): S. orbicularis was the only species recorded as a mangrove-only user at LD, whereas 14 species were exclusively recorded in the mangrove areas at PG.
In the present study, seven species (Lutjanus fulviflamma, Lutjanus monostigma, Scolopsis lineata, Lethrinus harak, Parupeneus barberinus, Siganus fuscescens, and Siganus guttatus) exhibited possible ontogenetic habitat shifts from seagrass beds and/or mangrove areas to coral reefs (Figure 5: relative abundance of these seven species in coral reefs, seagrass beds, and mangrove areas by size class, using pooled data from Puerto Galera and Laguindingan; see Table S1 for the size distribution of fishes in each habitat at every site). Of these seven species, only L. harak was represented by multiple adult-sized individuals (n = 8) in seagrass beds.
Discussion
The present study revealed that fish assemblage structure varied significantly among coral, seagrass, and mangrove habitats at the study sites, although differences between seagrass and mangrove habitats in terms of species richness and abundance were not significant (Figures 2, 4). The 199 fish species recorded only in coral reefs accounted for approximately 75% of all recorded species, whereas only nine and 15 fish species exclusively utilized seagrass and mangrove habitats, respectively (Table S1). Although the majority of fish species were found in coral reefs, the other habitats also harbored several unique species. The fact that five families of fish were found only in mangrove habitats emphasizes the need to conserve multiple habitats even without considering connectivity. Different habitats exhibit different environmental conditions and fish assemblage structures; thus, managing all of these habitats can serve as a very effective method of conserving coastal biodiversity.
Even though each habitat exhibited a different fish composition, many fish species were found to use multiple habitats, and these fish could be categorized into two groups. The first group includes species that generally did not change habitat preference after settlement, although they inhabited more than one habitat (see also [19]). The second group includes fishes that use seagrass and/or mangrove habitats as feeding or shelter grounds at the adult stage or as nursery grounds in the juvenile stage. Adult-sized Lutjanus griseus and mullids are examples of fish that migrate daily into seagrass/mangrove habitats for feeding, similar to the observations of Nakamura and Tsuchiya [8] and Luo et al. [15]. In this study, several adult-sized L. harak were observed not only in coral reefs but also in seagrass beds. Although we could not determine whether these individuals migrated between habitats or whether each habitat harbored its own population, adult individuals of coral fishes have often been observed in seagrass beds and/or mangrove areas in previous studies (e.g., [11,24]). Adult P. barberinus and S. guttatus were sometimes observed in seagrass beds at the study sites, although these fish were not recorded in the transect survey. Feeding behavior of L. harak and P. barberinus was also observed in seagrass beds (Honda, personal observation). Moreover, seven species were confirmed to use seagrass and/or mangrove habitats during the juvenile stage. Even though fish species commonly exhibit ontogenetic habitat shifts (see [18,19]), it is also important to recognize that the juvenile fish of some species were found only in seagrass beds and/or mangroves (e.g., [2,9]). Such habitat fidelity is considered to reduce the likelihood of fish flexibility or opportunism in habitat use.
More than 37% of the commercial fish recorded in this study utilized seagrass and/or mangrove habitats or one of these habitats in combination with coral reefs, and more than 34% of the fish that utilized multiple habitats in this study were commercial species (Table 1). In addition, six of seven species that exhibited possible ontogenetic habitat shifts (the exception being S. lineata) were commercial species. Based on previous reports, many of the species that exhibit ontogenetic habitat shifts, as well as those species that migrate among different habitats in their adult stage, have fishery value (e.g., [2,12,15]). Thus, the inclusion of adjacent seagrass beds and mangrove areas connected to coral reefs in the same MPA offers several important benefits: increased carbon dioxide fixation or sequestration by seagrasses and mangroves [58], buffering against disasters such as high waves [59], enhanced conservation of biodiversity of organisms, and increased sustainability of fishery resources [21,22,42,60].
Regrettably, seagrass beds and mangrove forests are disappearing worldwide [61][62][63]; consequently, the species richness and biomass of fishes and invertebrates decrease with such habitat losses [16,64]. In the present study, 14 mangrove-using fish species were recorded in the transplanted mangrove area, and most of these species were multiple-habitat users (Table 1), indicating that both natural and transplanted mangroves play an important role as habitat for some reef fishes. Moreover, this finding suggests that transplanting mangroves can be useful for fishery resource conservation and recovery. The total number of fish species considered mangrove users at LD was half that observed at PG, even though the two sites did not differ greatly in the number of multiple-habitat users (Table 1). Almost two decades have passed since mangroves were transplanted at LD; however, much more time may be needed for the colonization of mangrove-dependent fish species, as only a few natural mangroves exist nearby to serve as source populations. The fact that S. orbicularis was the only fish species unique to mangrove areas at LD supports this hypothesis.
Blaber and Milton [65] and Thollot [66] reported that fish species diversity in clear-water mangroves was lower than that in estuarine mangroves. In most cases, estuarine mangroves are surrounded by more simply structured habitats, such as mud or sand flats, whereas clear-water non-estuarine mangroves connected to coral reefs exhibit a more complex structure. Such differences in the complexity of the surrounding habitats may affect the structure of species diversity between clear-water non-estuarine mangroves and estuarine mangroves [24]. In addition, a few reports have indicated that clear-water non-estuarine mangroves in the Indo-Pacific serve as juvenile habitat for reef fishes [11]. Barnes et al. [24] compared fish assemblage structure between coral reefs and clear-water non-estuarine mangrove areas near Orpheus Island in the Great Barrier Reef and found that no specific reef fish species used clear-water non-estuarine mangroves as their juvenile habitat. Nevertheless, we found that seven reef fish species exhibited possible ontogenetic habitat shifts from clear-water non-estuarine mangroves to coral reefs. This finding strongly indicates that clear-water non-estuarine mangroves in the Indo-Pacific function as the juvenile habitat of some reef fishes, similar to mangroves in the Caribbean region (e.g., [1,67]) and estuarine mangroves in the Indo-Pacific [6,9,68].
Human populations near coastal areas in the tropics are expected to double in the next 50–100 years, ultimately leading to increased fishing pressure and thus accelerated biodiversity loss and depletion of fishery resources [69]. Even though some MPAs contribute to fishery resource conservation and recovery (e.g., [51,70]), coastal fisheries are not effectively managed in most coastal areas of Southeast Asia, including the Philippines [46]. This situation will presumably worsen as the human population increases. In the tropics, not only overfishing but also the loss of juvenile habitats, followed by decreases in juvenile survival rates, has strongly contributed to declines in fishery resources in recent years. The conservation and development of juvenile habitats would delay the exhaustion of such resources.
Supporting Information
Table S1 Total number of individuals and size range of 265 species in three habitats at two study sites during the study period. PG, LD, and TL indicate Puerto Galera, Laguindingan, and total length, respectively. (XLSX)
Nodal vascularity as an indicator of cervicofacial metastasis in oral cancer: A Doppler sonographic study
Background: The objective of this study was to assess nodal vascularity by Doppler sonography and to determine the correlation between clinical and various Doppler sonographic features for the detection of metastatic nodes in oral cancer patients. Patients and Methods: A total of 55 patients with histopathologically proven oral cancer presenting with enlarged superficial cervicofacial lymph nodes were included in the study. Patients were subjected to clinical examination according to a specially designed proforma, and TNM staging was done. If more than one enlarged node was present, the node with the largest diameter was chosen for Doppler ultrasonographic examination, followed by fine needle aspiration cytology of the same node. Results: Correlation of color Doppler flow-signal patterns with cytological diagnosis showed that a central vascular pattern was a statistically significant parameter for benign lymph nodes and a peripheral vascular pattern was a highly significant parameter for malignant lymphadenopathy. A resistive index cut-off value of 0.6 was statistically significant in the assessment of metastatic nodes (P < 0.01), with a sensitivity of 45.5% and a specificity of 93.9%. On comparing the clinical features (TNM staging) with the Doppler sonographic features, the features suggestive of malignant lymph nodes on Doppler sonography, such as peripheral blood flow and a high resistive index, were more consistently and frequently associated with the higher sub-stages T3 and T4 and N2b and N2c of the TNM staging system. Conclusion: Nodal vascularity may be used to differentiate benign from malignant lymphadenopathy. Judicious use of non-invasive color Doppler ultrasonographic examination provides an opportunity to eliminate the need for biopsy in reactive nodes and to plan treatment more precisely.
INTRODUCTION
Oral squamous cell carcinoma is the most common malignant tumour of the oral cavity. Cervicofacial lymph node status has been shown to be prognostically important in head and neck cancer outcome. It has been found that if the metastasis involves the ipsilateral lymph nodes, the prognosis is 50%, and if it involves the contralateral lymph nodes, it further reduces to only 25%. 1 Traditionally, only enlargement in the size of the lymph node was considered. Increased vascularity in metastatic nodes occurs because tumours larger than a few millimeters in diameter stimulate the growth of new vessels by secreting an angiogenesis factor. 2,3 As the node is progressively involved, increased vascularity is seen in the central and peripheral parts. These changes are therefore reflected on color Doppler sonography as a qualitative increase in peripheral vascularity. 4,5 Hence this study was undertaken to assess nodal vascularity by Doppler sonography for the detection of cervicofacial metastasis in oral cancer patients and to correlate clinical and various Doppler sonographic features. We also aimed to validate the role of Doppler sonography as a non-invasive diagnostic tool for the detection of metastatic nodes.
PATIENTS AND METHODS
The study group comprised 55 patients with oral cancer presenting with enlarged superficial cervicofacial lymph nodes who reported prospectively to the Department of Oral Medicine and Radiology, Mahatma Gandhi Postgraduate Institute of Dental Sciences, Puducherry, from November 2009 to March 2011. The study was approved by the Ethics and Research Committee of the institution. Patients were selected based on the following inclusion and exclusion criteria. Individuals with clinically evident and histopathologically proven oral cancer and individuals presenting with enlarged, palpable superficial lymph nodes in the head and neck region were included in the study. Individuals presenting with other known causes of cervical lymphadenopathy, such as oral infections, tuberculosis, or granulomatous diseases like sarcoidosis, were excluded.
Patients with oral cancer presenting with an enlarged palpable cervicofacial lymph node were subjected to clinical examination according to a specially designed proforma, and TNM staging was done. The regional lymph nodes were assessed for number, size, consistency, and mobility. TNM staging was done according to the American Joint Committee on Cancer (AJCC) 2010 criteria.
If more than one enlarged node was present, the node with the largest diameter was chosen for further Doppler ultrasonographic examination. The largest palpable lymph node (approximately ≥1 cm in diameter) was taken for further investigation. If more than one palpable lymph node satisfied the size criterion, the node that was hard or fixed was subjected to further investigation.
In this prospective study, the perfusion patterns of metastatic and reactively enlarged nodes in patients with known squamous cell carcinoma of the oral cavity were examined. The subjects lay supine on the examination couch, with the shoulders supported by a soft pad and the neck hyper-extended. The subjects lay in this position for 5 min before the commencement of the examination to ensure that blood flow was measured at rest. Grey-scale and power Doppler sonography were performed using an L&T Medical Sequina unit equipped with a wide-bandwidth (range 5-10 MHz) transducer. All examinations were performed by a single examiner experienced in head and neck sonography and Doppler sonography techniques. Grey-scale ultrasonography was performed at 8 MHz, and standard Doppler settings were chosen for optimal detection of signals from the lymph node vessels, which had low-velocity flow.
On grey-scale ultrasonography, the largest transverse diameter of the lymph node was measured and the echogenicity was assessed. The echogenicity was classified into homogeneous and heterogeneous types [Figures 1 and 2] based on the internal architecture of the lymph node. If the lymph node was uniformly hypo-echoic, it was designated homogeneous; if it showed both hyper-echoic and hypo-echoic areas, it was considered heterogeneous. A lymph node with a heterogeneous echo pattern showing areas of cystic necrosis was considered one of the factors indicating involvement of the node by metastatic cells. The settings of the Doppler ultrasonographic unit were standardised for high sensitivity, with a low wall filter to allow detection of vessels with low blood flow. The vascular pattern of each lymph node was determined and classified according to the location of the vascularity:
1. Central (hilar) - a single vascular signal or a vascular signal branching radially, originating symmetrically, and showing a regular course from the nodal hilum [Figure 3].
2. Mixed - presence of both central and peripheral vascular patterns [Figure 4].
3. Peripheral (capsular) - flow signals along the periphery of the lymph node, with or without branches into the node [Figure 5].
4. Absent.
In evaluating the vascular pattern, the hilar vascular pattern was considered suggestive of benignity, whereas peripheral and mixed vascular patterns were considered suggestive of a malignant node. The angle-independent velocimetric indices, resistive index (RI) and pulsatility index (PI), were also calculated. RI and PI were measured using on-board software as follows: RI = (peak systolic velocity − end diastolic velocity)/peak systolic velocity; PI = (peak systolic velocity − end diastolic velocity)/time-averaged maximum velocity.
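As an illustration of these definitions, the two indices can be computed directly from the measured velocities. The minimal Python sketch below uses function names and velocity values of our own choosing; it is not the on-board software of the ultrasound unit.

```python
# Minimal sketch of the velocimetric indices defined above; function names
# and the example velocities are illustrative, not taken from the study.

def resistive_index(psv: float, edv: float) -> float:
    """RI = (peak systolic velocity - end diastolic velocity) / peak systolic velocity."""
    return (psv - edv) / psv

def pulsatility_index(psv: float, edv: float, tamv: float) -> float:
    """PI = (peak systolic velocity - end diastolic velocity) / time-averaged maximum velocity."""
    return (psv - edv) / tamv

# Hypothetical velocities in cm/s:
psv, edv, tamv = 25.0, 8.0, 14.0
print(f"RI = {resistive_index(psv, edv):.2f}")  # 0.68, i.e. above the 0.6 cut-off used below
print(f"PI = {pulsatility_index(psv, edv, tamv):.2f}")
```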
After the ultrasonographic examination of the lymph node, the same node was subjected to fine needle aspiration cytology (FNAC). Patient consent was taken prior to the biopsy. The slides were examined by an experienced oral pathologist for the presence of metastatic cells [Figure 6]. Smears with insufficient material or of poor quality were repeated after recalling the patient. Only reports in which frank metastatic cells were seen in the smear by the cytopathologist were included as positive FNAC findings.
The correlations between the FNAC findings and the Doppler sonographic examination were assessed with suitable statistical methods.
RESULTS
A total of 55 patients fulfilling the inclusion and exclusion criteria were included in the study. The study group comprised 28 males and 27 females aged 28 to 80 years; the mean age was 55.21 years for males and 56.22 years for females [Table 1].
After the clinical examination, ultrasonographic examination of the neck was done, and the nodes were assessed for echogenicity, blood flow, and intra-nodal vascular resistance.
Echogenicity: Out of 55 lymph nodes, 32 were homogeneous and 23 were heterogeneous. On FNAC examination, 78.1% of homogeneous nodes were found to be non-metastatic, whereas 65.2% of heterogeneous nodes were found to be metastatic [Table 2 and Figure 7].
To assess the correlation between echogenicity and FNAC findings, the Fisher exact test was used. Homogeneity for benign lymph nodes and heterogeneity for malignant lymph nodes was a highly significant parameter (P < 0.01), with a sensitivity of 68.18% and a specificity of 75.76%. The area under the ROC (receiver operating characteristic) curve was 0.72, indicating that a randomly selected individual from the positive group (homogeneous and non-metastatic node; heterogeneous and metastatic node) has a test value larger than that of a randomly chosen individual from the negative group (homogeneous and metastatic node; heterogeneous and non-metastatic node) 72% of the time [Figure 8].
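The quoted sensitivity and specificity follow directly from the 2×2 counts implied by the percentages above (65.2% of 23 heterogeneous nodes metastatic; 78.1% of 32 homogeneous nodes non-metastatic). The short Python sketch below, with a function name of our own choosing, reproduces the reported figures.

```python
# Sketch reproducing the reported sensitivity and specificity from the 2x2
# counts implied by Table 2; the function name is ours, not from the study.

def diagnostic_metrics(tp: int, fn: int, fp: int, tn: int) -> tuple[float, float]:
    sensitivity = tp / (tp + fn)  # true positives among all metastatic nodes
    specificity = tn / (tn + fp)  # true negatives among all benign nodes
    return sensitivity, specificity

# 23 heterogeneous nodes (15 metastatic, 8 benign); 32 homogeneous (7 metastatic, 25 benign)
sens, spec = diagnostic_metrics(tp=15, fn=7, fp=8, tn=25)
print(f"sensitivity = {sens:.2%}, specificity = {spec:.2%}")
# -> sensitivity = 68.18%, specificity = 75.76%, matching the values reported above
```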
The color flow pattern showed 33 centrally perfused lymph nodes (a color Doppler finding suggestive of benign lymphadenopathy), 7 nodes with peripheral flow (suggestive of malignant lymphadenopathy), 10 nodes with mixed vascularity, and 5 nodes with no flow [Table 3].
The vascular flow patterns were compared with echogenicity using the chi-square test. Among the lymph nodes showing central/hilar flow, 82% (27 out of 33) had a homogeneous internal architecture. In contrast, all seven lymph nodes showing peripheral vascularity exhibited a heterogeneous architecture. There was thus a significant correlation between echogenicity and vascular pattern, with homogeneous nodes exhibiting central flow and heterogeneous nodes associated with peripheral flow (P < 0.05) [Table 4].
The vascular flow patterns were compared with the FNAC findings using the chi-square test. Among the lymph nodes showing central/hilar flow, 72.7% (24 out of 33) were found to be benign on FNAC examination [Table 5 and Figure 9].
The study showed that a central vascular pattern was a statistically significant parameter for benign lymph nodes (P < 0.01), with a sensitivity of 69.1% and a specificity of 73.5%. Peripheral vascularity was a highly significant parameter for malignant lymphadenopathy (P < 0.01), with a sensitivity of 68.2% and a specificity of 75.8%.
In this study, the RI ranged from 0.4 to 0.75. The Fisher exact test was applied to assess the significance of RI for identifying metastatic nodes. Of the 12 nodes with an RI greater than 0.6, 10 (83.3%) were found to be metastatic on FNAC examination [Figure 10]. We found that an RI cut-off value of 0.6 was statistically significant in the assessment of metastatic nodes (P < 0.01), with a sensitivity of 45.5% and a specificity of 93.9% [Table 6 and Figure 10].
DISCUSSION
For many years, regional lymph nodes in tumour-bearing hosts have been considered anatomic barriers to the systemic dissemination of tumour cells. A characteristic feature of tumour formation is angiogenesis, and therefore the morphologic and hemodynamic changes that occur in tumour vessels can be used as a clue to differentiate between benign and malignant nodes. Most normal cells do not release angiogenic substances except during embryogenesis, growth, wound repair, or immune states. 6 An important pre-condition for angiogenesis seems to be a certain amount of tumour cell mass. In an initial (avascular) tumour stage, micrometastases are fed by regular nodal vessels. It is estimated that it takes 1 billion malignant cells to produce a tissue mass of 1 cm³. 7 Only after the original vessel system becomes insufficient for the nutrition of micrometastases, as a consequence of its destruction or the diffusion distance becoming too large, does a spontaneous and distinct growth of new blood vessels begin, with the release of an angiogenic stimulus from tumour cells called "tumour angiogenetic factor". This neovascularisation penetrates the node from its periphery and consists of thin-walled blood vessels that lack a muscular layer and often show chaotic anastomoses and shunts. Tumour growth appears to depend on this process. The architecture and hemodynamics of nodal vessels differ among various nodal diseases. This property provides the potential for diagnosis if the vascular changes can be reliably detected. Thus color Doppler sonography, one of the advances in sonography, can aid in differentiating benign from malignant lymph nodes.
Normal lymph nodes are hypo-echoic and homogeneous; when a lymph node is infiltrated by metastatic cells, its internal architecture becomes heterogeneous. To our knowledge, in 1988, Morton et al. were the first to describe flow signals in the hilum of lymph nodes using color Doppler ultrasound. 3 Lymphatic vessels are not displayed at color Doppler sonography because of the low flow velocity and the lack of backscattering erythrocytes.
Our assessment of vascular patterns with Doppler sonography revealed two categories of patterns in cervical lymphadenopathies: a benign pattern group characterised by avascular and hilar types, and a malignant pattern group showing spotted, peripheral, and mixed types. In this study, 33 nodes showed central/hilar colour flow signals, suggestive of a benign nature, of which 24 proved to be benign and 9 malignant on FNAC examination. The nine malignant lymph nodes showing central flow are comparable to the study by Sato et al., 8 in which metastasis was confirmed in one lymph node with central colour flow signals. The reason for this could be the presence of micrometastases at an early stage of lymph node involvement that could not be detected by color Doppler ultrasonography, as intra-nodal vascular alterations take place at a relatively late stage of metastasis. 9 In the early stage of microinfiltration, vascularity may be increased owing to a local immune reaction.
Out of seven nodes showing peripheral perfusion, six were metastatic. In metastatic lymphadenopathy, destruction of hilar vascularity by tumour cells may result in the induction of a vascular supply from the peripheral pre-existing vessels or from vessels in the peri-nodal soft tissue; hence the peripheral flow. The findings of our study are comparable with a previous study, which suggested that the peripheral flow in malignant nodes is in aberrant arterioles or veins within the capsule, subcapsular area, or surrounding connective tissue. 10 In our study, 10 lymph nodes showed mixed vascularity, distributed equally: 5 were metastatic and 5 non-metastatic on FNAC examination. Mixed flow in a metastatic node might be explained by two pathogeneses. First, as the tumour nests replace the node, the pre-existing nodal vessels may proliferate and transform into feeding vessels through tumour angiogenesis, resulting in central aberrant nodal vessels. Second, advanced tumour infiltration of a node will destroy the hilar blood supply, resulting in induction of the vascular supply from the peripheral pre-existing vessels or vessels in the peri-nodal connective tissue, which may be accelerated by extracapsular invasion. 11 This study showed an absence of flow in five lymph nodes, of which three were non-metastatic and two metastatic on FNAC examination. There can be different reasons for absent flow. It may be due to total replacement of the nodal tissue by necrosed and keratinised tumour tissue. The relatively low number of backscattering erythrocytes in the tiny peripheral vessels decreases the signal intensity, which may not surpass the noise level. Low flow velocities or high Doppler angles result in a low Doppler frequency shift, which may be suppressed by the high-pass or wall filter. Post-processing functions to reduce motion artefacts may also suppress flow signals in the echogenic centre of the lymph node. Consequently, absent flow signals do not mean that perfusion is absent. 3 In the study of vascular resistance, by comparing the highest RI within suspected lymph nodes, we could differentiate between benign and malignant lymphadenopathies, the latter showing high vascular resistance. Some workers have analysed vascular resistance in cervical lymphadenopathies; however, their results were inconsistent and even contrary to each other. Chang et al. 6 assumed that metastatic lymph nodes contain arteriovenous shunts and vessels that lack a muscle layer, and found that lymph nodes involved by malignant processes typically had a low vascular resistance. The problem was that they selected and compared the lowest RI of each nodal flow sampling. Conversely, Steinkemp et al. 7 and Choi et al. 12 reported high vascular resistance in metastatic nodes. We found that an RI cut-off value of 0.6 was statistically significant in the assessment of metastatic nodes (P < 0.01), with a sensitivity of 45.5% and a specificity of 93.9%. This finding is in accordance with the findings of Steinkemp et al. and Na DG et al., 4 who found a sensitivity and specificity of 47% and 94%, respectively, with an RI cut-off value of 0.7. When a sampled cervical lymph node had a high RI, it would usually prove to be metastatic. A possible reason for this extraordinarily high resistance is that as tumour cells spread into the lymph node, they grow and replace a large portion of it; ultimately, the lymph node is totally replaced by tumour cells.
At this stage, tumour cells compress vessels in the lymph node. This vascular compression by tumour cells would increase vascular resistance, causing an increase in RI.
On comparing the clinical features (TNM staging) with the Doppler sonographic features, we found that the features suggestive of malignant lymph nodes on Doppler sonography, such as peripheral blood flow and a high RI, were more consistently and frequently associated with the higher sub-stages T3 and T4 and N2b and N2c of the TNM staging system. All the lymph nodes showing peripheral vascularity belonged to the T3 and T4 stage group, and six of the seven lymph nodes showing peripheral flow belonged to the N2b and N2c stage group. Similarly, 75% of the lymph nodes (9 out of 12) with an RI greater than 0.6, which is suggestive of a malignant lymph node, belonged to the T3 and T4 stage group. Thus, our study supports the view that T stage reflects tumour burden and that the risk of nodal metastasis increases with increasing T stage of the primary tumour.
The usefulness of color Doppler ultrasonography is often doubted because of inconsistent results and disagreement over methodologies and vessel sampling. 6,10,12 Previous studies have used colour Doppler ultrasound (CDUS) as a method of differentiating benign from malignant lymphadenopathy; however, there is controversy regarding its reliability. Giovagnorio et al. 13 reported that CDUS is promising because it is easily applicable and does not require calculations. Na et al. stated that tissue characterisation is not possible by ultrasound and that it cannot detect early-stage malignant lymphadenopathy; however, the use of high-frequency transducers has improved the ability to detect and interrogate vascular signals.
In this study, all the findings suggested the value of color Doppler ultrasonographic examination. One of the goals of this study was to differentiate between benign and malignant cervical lymph nodes using color Doppler sonography, and the present study supports the important role of color Doppler sonography in the diagnostic approach to cervical lymphadenopathy.
Furthermore, we combined non-invasive ultrasonography with the minimally invasive FNAC test to correlate the ultrasonographic features of enlarged cervicofacial lymph nodes in patients with oral cancer while causing minimal discomfort to the patient. This may validate the role of Doppler sonography as a non-invasive diagnostic tool for the detection of metastatic nodes.
One limitation of this study is that changes in the internal architecture of deeper lymph nodes cannot be recognised. Possible explanations are the decreasing contrast resolution caused by signal attenuation with increasing distance of the object of interest from the ultrasound probe, and the poorer spatial resolution of the probes used for analysing deeper structures.
Measuring the moment-of-inertia of a rigid body using a swing-pendulum
A novel way to construct a compound pendulum is to suspend a distributed mass from a single pivot using light inextensible strings. Here we describe how such a compound ‘swing-pendulum’ can be used to infer the moment-of-inertia of a rigid body. Our approach is particularly suitable for contexts in which it is impractical to suspend the body from one of its internal points, and is illustrated using data sourced from student-led experiments on steel and aluminium rods.
Introduction
The compound (physical) pendulum is a key topic in introductory physics, and one that connects several core concepts, including simple harmonic motion, centre-of-mass, moment-of-inertia, and torque [1][2][3][4]. Such a pendulum is also easy to construct: one simply suspends a rigid body of mass m through a horizontal axis z passing through any point O excluding the body's centre-of-mass C (see figure 1(a)) [1,4]. Indeed, if this is done, then the body can be set to oscillate harmonically with time-period

T = 2π √(I_z/(mgh)),  (1)

where g ≈ 9.81 m s⁻² is the acceleration due to gravity, I_z is the body's moment-of-inertia about O, and h is the distance from O to C (see figure 1(a)) [1,4]. Crucially, the parallel axis theorem may be used to express I_z in terms of the body's moment-of-inertia I about the horizontal axis passing through its centre-of-mass C, viz [6,7]

I_z = I + mh².  (2)
Equation (1) can then also be expressed as

T = 2π √((I + mh²)/(mgh)).  (3)

In this way, one may infer the body's moment-of-inertia I empirically by measuring the pendulum's time-period T for a given h [6,7]. Almost all textbook analyses of the compound pendulum use a pivot O internal to the body (see figure 1(a)) [5], and it is therefore no surprise that experiments reported in the wider literature typically do the same, either by forcing a pivot through the body, or by boring a hole as a point of suspension [6][7][8][9][10][11]. Such approaches are effective if the rigid body is relatively small, and made from a low-friction material that can be worked easily; however, if the body is impractically large or heavy, or if damage by boring is to be avoided, then fashioning an internal pivot is more difficult.
In this article, therefore, we describe a novel, alternative method for constructing a compound pendulum that we refer to as the compound swing-pendulum, and which is based on swinging a rigid body from an external pivot using light inextensible strings (figure 1(b)). Crucially, since this 'swing-pendulum' approach does not require boring holes, or forcing pivots, it can be used to infer the moment-of-inertia of a rigid body when the conventional configuration (figure 1(a)) is impractical. Indeed, the dynamics of the swing-pendulum and the conventional compound pendulum are identical, meaning that equation (3) applies to both cases (see figure 1, and section 2). In what follows, then, we test equation (3) empirically for the swing-pendulum: first, by verifying that it correctly predicts T (sections 3 and 4); and second, by exploring how the equation can be used to infer I (section 5). Note that the experiments reported here were undertaken as 'further-work' exercises by undergraduate students during a first-year level physics laboratory on harmonic oscillations of rigid bodies. In this way our investigation of the swing-pendulum may be considered a case study in how students can be encouraged to engage enthusiastically with exploratory practical work, even in very simple contexts.
Theory
Before proceeding to the experiments, let us briefly confirm that the dynamics of the swing-pendulum are identical to those of the conventional compound pendulum. To this end, consider the swing-pendulum depicted in figure 1(b), which is constructed by suspending a body of mass m from an external point O by light inextensible strings. If the distance from O to the body's centre-of-mass C is denoted h, and if θ is the angle between OC and the vertical, then the body will be subject to a restoring torque about O, and thence governed by Newton's second law for rotation [1]

I_z (d²θ/dt²) = −mgh sin θ,

where I_z is the body's moment-of-inertia about O. Thus, for small displacements satisfying the small angle approximation sin θ ≈ θ, the swing-pendulum will oscillate harmonically according to

d²θ/dt² + ω²θ = 0,

where ω = √(mgh/I_z) (the frequency of oscillation) defines the oscillation time-period

T = 2π/ω = 2π √(I_z/(mgh)).

As claimed, therefore, the time-period of the swing-pendulum is identical to that of the conventional compound pendulum (see equation (1)), and can likewise be expressed using the parallel axis theorem (equation (2)) as

T = 2π √((I + mh²)/(mgh)),

where I is the moment-of-inertia of the body about its centre-of-mass C (see equation (3)). Indeed, since the light strings can be thought of as augmenting the body's boundary to incorporate O as an internal point, the conventional compound pendulum and the compound swing-pendulum are physically equivalent systems.
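A minimal numeric check of equation (3) may be helpful here. The Python sketch below evaluates the predicted time-period for a thin uniform rod; the parameter values are illustrative and are not the bars of table 1.

```python
# Minimal sketch of equation (3), T = 2*pi*sqrt((I + m*h**2)/(m*g*h)),
# evaluated for a hypothetical thin uniform rod (I = m*lambda**2/12).
import math

g = 9.81  # acceleration due to gravity, m s^-2

def period(I: float, m: float, h: float) -> float:
    """Time-period of a compound (swing-)pendulum, equation (3)."""
    return 2.0 * math.pi * math.sqrt((I + m * h**2) / (m * g * h))

m, lam = 1.0, 0.5          # illustrative mass (kg) and length (m)
I = m * lam**2 / 12.0      # thin-rod moment-of-inertia about its centre-of-mass
for h in (0.2, 0.4, 0.8):  # pivot-to-centre-of-mass distances (m)
    print(f"h = {h:.1f} m -> T = {period(I, m, h):.3f} s")
```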
Experiments
A simple way to confirm that equation (3) works for the swing-pendulum is to investigate how T varies with h for an object of known moment-of-inertia I, and to this end we have chosen to base our experiments on metal bars (see figure 2). Indeed, a metal bar makes a suitable swing-pendulum test-case for at least two reasons: first, (i) it is not always easy to bore a hole through metal without specialist milling tools (so the conventional configuration with an internal pivot is not necessarily practical, or desirable); and second, (ii) a bar of length λ and thickness µ ≪ λ can be modelled as a thin uniform rod with moment-of-inertia

I = mλ²/12,  (9)

and is therefore readily accessible to students from a theoretical perspective [1][2][3].
The basic apparatus for these experiments is shown in figure 2, where the swing-pendulum is constructed by suspending the bar from a screw-hook O using household string, and h (the distance from O to the bar's centre-of-mass C) is adjusted by changing the string's length. Thus, for each value of h, it is possible to infer the swing-pendulum's time-period T by measuring the total time T_N for some N oscillations, and computing T = T_N/N [12]. Note that the oscillation amplitude must be small (say, less than 15° [1]) to ensure that the small angle approximation sin θ ≈ θ holds (see section 2).
As detailed in table 1, our experiments considered two types of metal bar: a mild-steel box-section bar, and an aluminium box-section bar. One advantage of box-section is that the string may be passed through the length of the bar internally, and fastened in a loop prior to suspension (as in figure 2); however, so long as the distance h can be measured accurately, any method of suspending the bar is acceptable (including asymmetric configurations, with the string secured externally), and students can be encouraged to explore different approaches [13]. Note that the bars are considered 'thin' in the sense that both satisfy (µ/λ)² ≪ 1 (see table 1); similarly, the strings used to suspend the bars may be treated as light and inextensible (see section 2).
Results
Data from our experiments on metal bars are listed in table 2, and plotted in figure 3. Assuming the bars to behave as thin, uniform rods, this data agrees well with the theoretical predictions of equation (3), i.e.

T = 2π √((I + mh²)/(mgh)),  (10)

where I = mλ²/12 may be computed using the values of λ and m in table 1 (see equation (9)). Hence, to within experimental uncertainty, we find as expected that the swing-pendulum behaves in the same way as a conventional compound pendulum. Note that it is a useful class exercise in dimensionless scaling [14] to have students express equation (3) in the normalised form

T* = √((I* + h*²)/h*),  (11)

where T* = T/(2π √(λ/g)), h* = h/λ, and I* = I/(mλ²) are the normalised time (T*), distance (h*), and moment-of-inertia (I*) respectively. Indeed, by scaling the experimental data in this way (see table 2), both sets of measurements align with a single normalised theory curve (see figure 4).
Moment-of-inertia
In the preceding section we verified our expression for the time-period T of equation (3) by using the fact that the moment-of-inertia was known to be I = mλ²/12 (see equation (9)). Observe, however, that if the moment-of-inertia of the body is not known, then equation (3) can be rearranged to infer I from measurements of the time-period T(h).
There are several methods for doing this [6], but perhaps the most straightforward is simply to express equation (3) in the form

I = mh (gT²/(4π²) − h),  (13)

so that I can be determined by computing its value from each measurement of the time-period T(h), and then taking an average. Indeed, here we exploit this idea by using our scaled data to infer I from the swing-pendulum experiments by averaging over both sets of measurements simultaneously, that is, by using equation (13) in its dimensionless form

I* = h* (T*² − h*)

(see equations (9) and (11)).
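The following Python sketch illustrates this averaging procedure; the (h, T) pairs are synthetic points generated from equation (3) with I* = 1/12, standing in for the student data of table 2.

```python
# Sketch of the inference described above: normalise each (h, T) measurement
# and average I* = h*(T*^2 - h*) over all points; the data below are synthetic
# stand-ins for table 2, generated from equation (3) with I* = 1/12.
import math

g = 9.81  # m s^-2

def infer_I_star(measurements, lam):
    """Average the dimensionless moment-of-inertia over (h, T) measurements."""
    t_scale = 2.0 * math.pi * math.sqrt(lam / g)  # normalising time 2*pi*sqrt(lambda/g)
    estimates = []
    for h, T in measurements:
        h_star, T_star = h / lam, T / t_scale
        estimates.append(h_star * (T_star**2 - h_star))
    return sum(estimates) / len(estimates)

lam, m = 1.0, 2.0                                     # hypothetical rod: 1 m long, 2 kg
data = [(0.30, 1.525), (0.50, 1.638), (0.70, 1.816)]  # synthetic (h [m], T [s]) pairs
I_star = infer_I_star(data, lam)
print(f"I* = {I_star:.4f} -> I = {I_star * m * lam**2:.4f} kg m^2")
# For a thin uniform rod one expects I* close to 1/12 = 0.0833
```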
Conclusion
The compound 'swing-pendulum' is a novel approach to constructing a compound pendulum based on suspending a rigid body from an external pivot using light inextensible strings (section 1). By experimenting on metal rods, we have confirmed that the theory for such a swing-pendulum is consistent with that of a conventional (internally pivoted) compound pendulum (sections 2 and 3), and may similarly be used to infer the moment-of-inertia of a rigid body, especially in those situations when the conventional configuration is impractical (sections 4 and 5). In this way, the swing-pendulum offers a fresh perspective on the dynamics of the compound pendulum, and an alternative method for experimenting on moments-of-inertia in introductory physics laboratories.
When considered alongside more conventional studies of compound pendula, our swing-pendulum experiments with first-year level undergraduate students have also presented several didactic benefits. For example, pivoting the system about an external point helps to emphasise the fact that the rotational characteristics of a body are not intrinsic properties of the body per se, rather that they are properties defined with respect to an appropriate reference axis. More importantly, however, the simplicity of the swing-pendulum configuration makes it ideally suited for 'further-work' activities of the kind that encourage students to engage creatively in exploratory experimental work, and to think critically about experimental errors. Indeed, as we described in section 4, a particularly satisfying feature of our experiments on rods is the ease with which the data can be normalised for dimensionless scaling, a process which reveals important insights into symmetry and universality in physics [14]. We look forward to developing these ideas to support work in other mechanical contexts, such as experiments on trifilar pendulums [16], as future investigations.
[12] … dynamics of coupled pendula suitable for distance teaching Phys. Educ. 55 065008
[13] Most length measurements here were taken to an accuracy of ±0.005 m; however, the distance h was deduced from the string length using trigonometry, and so is subject to an uncertainty that increases as h decreases. Likewise, time measurements T_N were taken to an accuracy of ±0.5 s, so that the uncertainty on T = T_N/N varies depending on the total number of oscillations N considered.
Figure 1. Two configurations for a compound pendulum: (a) the conventional configuration formed by pivoting a rigid body B about an internal point O; and (b) the novel 'swing-pendulum' configuration achieved by suspending the body B from an external pivot O using light strings slung around some points A and B. In both cases the centre-of-mass C of the body is a distance h from O, where OC makes an angle θ with the vertical OP, and swings along an arc PC. Thus, the dynamics of each configuration are identical, with the weight of the body mg yielding a restoring force F = mg sin θ perpendicular to OC (see section 2). [Note: as indicated by the x-y-z reference axis, the z-axis passing through O is normal to the x-y plane containing PC].
Figure 2. Photograph (a) and schematic (b) of a swing-pendulum constructed from a steel bar of length λ and thickness µ suspended from a screw-hook O by household string, with h the distance from O to the bar's centre-of-mass C (see figure 1(b)).
[14] Bissell J J, Ali A and Postle B J 2022 Illustrating dimensionless scaling with Hooke's law Phys. Educ. 57 023008
[15] Barlow R J 1997 Statistics: a Guide to the Use of Statistical Methods in the Physical Sciences (Wiley)
[16] Wang H 2021 A modified trifilar pendulum for simultaneously determining the moment of inertia and the mass of an irregular object Eur. J. Phys. 42 015002
Table 1. Length λ, mass m, and thickness µ measurements for the metal bars.
Table 2. Two sets of swing-pendulum data taken by students during introductory physics laboratories at the University of York [13]. Here the fourth and fifth columns list the data normalised to λ and 2π√(λ/g) respectively (see figures 3 and 4), with values for I* in the final column inferred according to equation (11) as I* = h*(T*² − h*).
Figure 3. Swing-pendulum time period T as a function of h for two metal bars: a steel bar (closed circles); and an aluminium bar (open circles). The theory curves are given by equation (10) assuming I = mλ²/12, with values for m and λ taken from table 1.
Interaction between technology and recruiting practices

While technology has improved sharing and managing information, there are legitimate concerns about the quality of information and its use in recruitment
With its declining costs and widespread adoption, information and communications technology (ICT) will continue to affect all aspects of recruiting. ICT shapes how required job skills are determined, affects how information about available jobs is disseminated, and helps with the evaluation of potential new hires. Evidence suggests that employers looking for workers with technical skills, non-cognitive skills, and less experience benefit the most from using ICT when recruiting. This suggests that public policy could improve outcomes across a wider group of employers and job searchers, by offering training and incentives that enhance users' ICT skills.
ELEVATOR PITCH
Employers are steadily increasing their reliance on technology when recruiting. On the one hand, this technology enables the wide dissemination of information and the management of large quantities of data at a relatively low cost. On the other hand, it introduces new costs and risks. The ease with which information can be shared, for example, can lead to its unauthorized use and obsolescence. Recruiting technologies are also susceptible to misuse and to biases built into their underlying algorithms. Better understanding of these trade-offs can inform government policies aiming to reduce search frictions in the labor market.
KEY FINDINGS

Cons

- The ease of information sharing on websites that host job boards and resume banks can lead to stale information about jobs and resumes.
- Employers can use social networking sites to get information about workers without the workers' knowledge.
- The use of technology might not improve hiring and can lead to shorter tenure of new hires.
- Exclusive reliance on technology to analyze data can result in inefficient hiring because of biases and manipulation.

Pros

- Online job boards have improved the ease with which information about available job opportunities can be shared and updated.
- Online platforms for contract labor enable employers to hire from a larger and more diverse pool of applicants.
- Better information about job searchers has tilted employers' hiring in favor of disadvantaged job searchers and increased the number and quality of matches, especially for technical jobs.
- Better information about employers has increased the quality of matches, as evidenced by lower turnover.
Source: Based on data in Brencic, V. "Developments in the market for employment websites in the US".
MOTIVATION
Prior to the widespread adoption of information and communication technology (ICT), employers with job vacancies advertised their jobs in newspapers, placed help-wanted signs on their premises, contacted employment agencies, and encouraged employees to spread the word about job openings. Because employers were rarely surveyed about these activities, employer recruiting was not well documented. As a result, researchers have limited understanding about the recruiting process for this period. For the most part, existing research documents the types and number of recruiting channels employers used and the amount of time employers put into analyzing information they received through their various recruiting efforts. Very little data exist on what information employers used and how the available information affected hiring decisions.
Advancements in ICT have led to the introduction of tools that help to store, share, access, and analyze large amounts of data. The widespread adoption of the internet, for example, introduced new tools for information sharing. Craigslist, Monster, LinkedIn, and Careerbuilder are some examples of websites that offer platforms for posting job ads and resumes online. Another group of websites for contract labor offers platforms for the actual delivery of jobs that can be conducted and monitored online. Websites like Upwork (formerly oDesk), Amazon's mTurk, and eLance allow employers to hire from a pool of workers from every part of the world. These websites often provide unprecedented levels of detail about prospective workers (e.g. past performance measures, evaluations from past employers, past wages) and job openings. Other websites, like Glassdoor, have emerged to provide a depository of reviews of employers and their workplaces at a level of detail that was not previously available to those without ties to the employers. In a similar way, social networking sites have become a depository of information that employers can access to gain information about their job applicants. Advancements in ICT have also allowed for the introduction of tools such as artificial intelligence (AI), which help with the analysis of information.
Because these new tools leave behind a trail of information that can be stored and analyzed, the wide adoption of ICT is offering new insights into recruiting processes. The black box of recruiting is beginning to be unpacked. However, a growing body of research also offers insights into the costs and risks of using ICT in recruitment. Some of these arise due to a lack of skills needed to use ICT effectively, improper use and low quality of information, and the potential for misuse of tools that help with the analysis of information. This article reviews evidence on the benefits and risks of increasing reliance on ICTs as they substitute for and complement traditional recruitment practices.
Setting hiring requirements
The first stage of recruiting begins by identifying the need to hire a worker to do tasks that cannot be completed by existing workers. In doing so, employers identify the skills that are required to complete the tasks. Advancements in ICT can affect this stage in two ways.
First, ICT can be used to identify changes in the required skills linked to various occupations. An example of such efforts is the pan-European-led CEDEFOP. The purpose of this project is to collect descriptions of job openings from online job boards, analyze their content, and identify information about the required skills, wage offers, and so on. These efforts are motivated by the need to inform decisions over training and schooling by, respectively, employers and job searchers. The information gathered can also inform employers about the need to update skill requirements.
The second channel through which ICT can influence recruiting is the skill requirements of recruiters themselves. One study shows, for example, that managers make worse hires (e.g. hires with shorter tenure) when they ignore the recommendations of a job test administered to applicants [1]. While the authors do not provide direct evidence for the finding, one explanation they offer is that the managers do not have adequate skills to use the recommendations. If this explanation is correct, then recruiters need to be equipped with proper skills to use ICT effectively.
Information sharing: Dissemination and access
The next two stages of recruiting involve the dissemination of information about the job opening and the gathering of information about prospective hires. The internet has become the main channel through which ICT supports these two stages of recruiting.
Employers' use of the internet for sharing information about available jobs started in the early 1990s, when employers used discussion forums to advertise jobs. Over time, specialized websites started to offer platforms for posting job ads and resumes. As the Illustration on p. 1 shows, employment websites experienced an increase in the number of visitors and page views over time. As tasks that require ICT grew in importance for many jobs, new websites started to offer platforms where jobs could be completed entirely online. These changes created an online market for contract labor. This segment of online recruiting is also becoming increasingly important.
Changes brought on by these websites enhanced employers' ability to disseminate information to a much wider audience and permitted more frequent updating of information about available jobs. The amount of information that employers could share about their jobs (and the workplace) improved as binding restrictions on the length of a job ad were eliminated. These changes meant that employers could disseminate information with more ease, greater speed, and at lower cost, which has led to a significant reduction in search frictions and the potential for a better functioning labor market. Employers' recognition of this potential and the shift toward online dissemination of information about jobs has forced statistical agencies to change the way data on labor demand are collected. Rather than relying on classified ads sections in newspapers, as had been done in the past, some statistical agencies now rely on online job postings to get their measures of available job openings.
The extent to which employment websites offer employers access to information about a larger and more diverse applicant pool compared to offline recruiting tools is most evident from studies that document employers' use of online platforms for contract labor. Several studies in this line of research find that employers in the US draw on a broad pool of job searchers that transcends national borders for jobs that can be completed online [2]. Because these platforms offer detailed information about the workers (e.g. past performance measures, past employers' ratings, experience on the platform as measured by the number of hours worked, and wage history), they can particularly benefit workers with less experience (i.e. new labor market entrants) and workers in less developed countries whose credentials employers often have a harder time evaluating. Two relevant studies demonstrate this, finding that access to more detailed data about workers improves subsequent employment and earnings of new entrants [3] as well as the likelihood of securing an interview, being shortlisted for work, and wage bids for workers from less developed countries [2]. The authors of the latter study, however, find that workers from less developed countries benefited less when access to better on-the-job monitoring was available to employers. This finding affirms that the initial gains to the disadvantaged group of job applicants were due to the online platform succeeding in improving access to information. These gains disappeared once employers could get the relevant information about the workers by using on-the-job monitoring.
Whereas online platforms for contract labor have improved access to information about job searchers, the presence of websites like Glassdoor is providing a space for workers to reveal information about workplaces and employers voluntarily. These platforms provide information about employer quality that would otherwise be harder to observe as job searchers and employers expand their search geographically. Studies have found links between reviews of employers that were posted by workers on Glassdoor and firm performance, job interview experience (i.e. experience was more in line with expectations), and worker turnover.
Ease of access to information: Costs and risks
Employers' reliance on various online platforms to disseminate and gather information has also ushered in new costs and risks. First, because the costs of online dissemination of information have declined, information might not be maintained with regular updates, resulting in phantom vacancies whereby job ads are not removed from an online job board once filled and continue to be advertised. This introduces new search frictions in the labor market as job searchers direct too many of their job applications toward job postings that have only recently been posted on an online job board. While the existing literature has focused on the way stale information about available job openings might frustrate job searchers, this is also likely to be relevant for employers. Such concerns arise if job searchers do not update their resumes once a job is found or new qualifications are acquired. In this case, employers incur a cost of sorting through information that is not up to date. As the Illustration on p. 1 shows, employment websites in the US, on average, increased the length of time that resumes were kept in online resume banks over time. Whereas an average employment website kept a resume in its resume bank for about six months from 1996 until 2003, it kept resumes for 12 months on average by the end of the sample period in 2011. The incidence with which the websites kept resumes online indefinitely also increased over time. Whereas 20% of the websites kept resumes indefinitely in the early 2000s, 50% did so in 2011. Employment websites, on the other hand, decreased the length of time that job ads were kept posted on online job boards.
The second risk relates to privacy. Legislation in many countries protects job searchers from having to reveal certain information (e.g. religious affiliation, political convictions) to employers during the recruiting process. With the proliferation of social networking sites and a reduction in tracking costs (i.e. costs of collecting information about an individual over time and/or across online sites), it has become easier for employers to use social networking sites to gather information about job applicants that they cannot legally get during a formal hiring process. Several surveys of employers in Belgium, Canada, the US, Greece, France, and Switzerland confirm that employers pursue these activities. At the same time, several other surveys also find that many job searchers reveal information about themselves online without realizing that the information can be collected by current and prospective employers for uses other than those intended at the time the information was posted online.
To get a better sense of these concerns, the authors of one study sent employers fictitious resumes [4]. Together with the resumes, which included no information about personal traits, the authors created fictitious Facebook pages that revealed information (i.e. religious affiliation and sexual orientation) linked to the fictitious applicants. The field experiment revealed differences in callback rates. The pattern was consistent with some employers using information about the job applicants' personal traits on Facebook when deciding whom to interview. Another similar field experiment in Belgium revealed that callback rates were higher for candidates whose fictitious Facebook accounts featured a more attractive photo.
The third potential disadvantage relates to the access to a larger and more diverse pool of job searchers from different parts of the world that is afforded to employers on many of the online platforms for contract labor. One study, for example, finds that teams consisting of workers from different countries that are formed via online platforms for contract labor are less productive than teams that are nationally more homogenous. This relationship is particularly strong for workers with specialized skills and is caused by difficulties with communication rather than preferences or expectations [5]. Another study finds that despite access to detailed information about a diverse pool of workers on a website for contract labor (oDesk), diaspora connections continue to play an important role in employers' hiring decisions.
Information analysis using AI: Screening of job applicants
The final stage of recruiting requires the analysis of available information about the job applicants, based on which job offers get extended. While internet-backed tools have led to a faster and cheaper dissemination of and access to information, recent applications of AI offer the basis for the analysis of large amounts of data. AI, in a narrow sense, is software that offers predictions and recommendations based on patterns it identifies in data. One example is the use of algorithms to assign job interviews based on information from the applicants' resumes.
The use of AI to support recruiting offers several benefits. The most immediate benefit is that AI allows employers to utilize the large amounts of data that have become available.
Another benefit is the ability to conduct non-routine tasks, such as screening job applications. Most of the evidence on the effectiveness of AI as a recruiting tool comes from field experiments conducted in online markets for contract labor. This research tends to focus on evaluating the effects of a recommendation system that alerts employers to the existence of good job applicants. The recommended applicants are typically chosen by an algorithm that identifies overlaps between the applicants' skills and employers' required skills, the applicants' availability, and the applicants' ability.
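As a deliberately simplified illustration of this kind of overlap-based matching, the hypothetical Python sketch below ranks applicants by a Jaccard-style skill overlap weighted by availability and a past rating; real platforms' algorithms are proprietary and undoubtedly richer than this.

```python
# Hypothetical sketch of an overlap-based recommendation score; the scoring
# rule, names, and data are illustrative, not any platform's actual algorithm.

def match_score(required: set[str], offered: set[str],
                available: bool, rating: float) -> float:
    """Jaccard skill overlap, weighted by availability and a 0-5 past rating."""
    if not available or not required:
        return 0.0
    overlap = len(required & offered) / len(required | offered)
    return overlap * (rating / 5.0)

required = {"python", "sql", "statistics"}
applicants = {
    "A": ({"python", "sql"}, True, 4.5),
    "B": ({"python", "sql", "statistics"}, True, 4.0),
    "C": ({"python", "sql", "statistics"}, False, 5.0),  # unavailable -> score 0
}
ranked = sorted(applicants.items(),
                key=lambda kv: match_score(required, *kv[1]), reverse=True)
for name, (skills, avail, rating) in ranked:
    print(name, round(match_score(required, skills, avail, rating), 3))
# -> B 0.8, A 0.6, C 0.0
```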
One such study finds that the introduction of a recommendation system by an online platform for contract labor increased employer-initiated invitations to job searchers to apply for job openings. Whereas employers' hiring success only improved for technical job openings, no effect was found on wages or productivity of new hires [6]. Another study finds that hiring based on algorithms may be less biased than human decision-making, as its use tends to benefit minority candidates. Moreover, algorithm-based hiring has been shown to lead to job applicants who are more successful in interviews, are more likely to receive and accept job offers, and are less likely to use outside job offers during salary negotiations [7]. These benefits were larger when hiring candidates with better non-cognitive skills rather than better cognitive skills. Finally, one study finds that while AI-based recommendations affect employers' decisions about which job applicants to review in more detail, they do not affect employers' choices of whom to hire [8]. A related group of studies has revealed that AI-based recommendation systems are used less often for specialized jobs or when experienced workers are sought. Overall, the findings from these studies suggest that the gains from using recommendation systems are varied. It can be said that AI-based hiring disproportionally benefits employers with a vacancy that requires either technical skills, non-cognitive skills, or less experience.
Concerns related to AI-based recruiting
Several potential concerns regarding the use of AI in recruiting have also been identified, though their significance has yet to be documented empirically. First, many recommendation systems use existing data to form their recommendations. In this sense, recommendations can be plagued by biases that might be inherent in data generated from past (biased) decisions. Past performance measures, for example, that are used by recommendation algorithms might be measured with error for some workers due to the intentional discriminatory practices of the supervisors who were tasked with measuring their performance. A related second concern is that only aspects of a job or a worker that are easier to measure and quantify can be incorporated into an algorithm [9].
A third concern is that using performance outcomes of existing workers to inform hiring decisions ignores the performance of those who were not hired [9]. This can potentially lead to unintended outcomes. In instances of past discrimination, for example, only the best performers from discriminated groups get hired. AI would predict hiring in favor of these groups while not considering that their past performance is not representative of the group but rather is an outlier caused by discrimination. While this issue also exists in the absence of AI-backed decision-making, it makes clear that AI cannot resolve all the problems inherent to decision-making.
Fourth, many recommendation systems draw on rules that lead to unintended biases. Even if employers are indifferent about the gender of their target audience, for instance, an algorithm that optimizes cost-effectiveness might result in job ads being shown disproportionally to men if advertising to men is less expensive [10]. One study has found that high-paying jobs are advertised less often to female users than to male users of Google's search engine [11].
Although no direct evidence for the cause of this link is provided, the authors stipulate that one reason might be that male users tend to click on ads for high-paying jobs more often than female users. If an algorithm that underlies Google's search engine is set up to maximize click probability, past gender differences in clicking will prompt the search engine to target users based on gender when advertising high-paying jobs.
Fifth, if the rules that underlie algorithms are known, then they can be manipulated. Job searchers might include words in resumes or pursue activities that they know can improve their chance of securing a job interview. Such actions can result in distributional effects if job searchers differ in their ability to use and manipulate the technology. Thus far, no evidence has been found that such manipulation is widespread. Two recent developments, however, may make this concern more relevant. Efforts to analyze the job content of online job ads have increased through projects like those run by the pan-European agency CEDEFOP. Recent calls for better regulation of the use of AI are putting pressure on greater transparency of AI-backed algorithms [12]. Both changes are likely to result in easier access to the data and rules that are used in AI-backed algorithms.
The above discussion of concerns makes clear that any adoption of AI must consider potential risks as well as gains. That said, these concerns can be ameliorated with relative ease by making all components transparent to their users.
Aggregate effects on the functioning of the labor market
The existing literature has documented positive and negative implications of ICT use at various stages of recruiting. It is therefore not surprising that evidence about the aggregate effects on the overall functioning of the labor market is scant. One study focuses on the rapid expansion of Craigslist, which provides an online platform for posting job ads, real estate ads, for-sale ads, and ads seeking relationships [13]. The study's authors find that the entry of Craigslist into local labor markets in the US did not decrease the local unemployment rate. This finding suggests that, despite improving dissemination of and access to information, Craigslist did not improve the aggregate functioning of the labor market. One explanation for this null result might be Craigslist's cannibalization of online search traffic that had, prior to its entry, been taking place on competing employment websites. If this explanation is correct, then it underscores the difficulties of assessing aggregate effects by focusing on a single online platform or a single ICT tool.
LIMITATIONS AND GAPS
The key factor that limits researchers' understanding of the interaction between ICT and recruiting is the sheer complexity of the interaction. In addition, much of the existing evidence is based on the recruiting efforts of employers in North America and tends to be restricted to specific segments of the labor market (i.e. a market for hourly work, jobs that require technical skills, and entry-level jobs). Access to data that are not tied to a particular country, industry, occupation, or skill level would allow a better understanding of the overall nature and variety of drawbacks and benefits of employers' reliance on ICT when recruiting. Furthermore, researchers need to better understand the factors that contribute to or hinder employers' adoption of ICT tools. Particularly important is how differences in adoption contribute to inequities by widening the gap in labor market experiences between those with access to new tools and those without.
SUMMARY AND POLICY ADVICE
Employers are increasingly relying on the internet and other ICT-backed tools to support many of the processes involved in their recruiting practices. These processes include the dissemination of information about available job openings, the search for information about potential hires, the analysis of job applications, and the initiation of contacts with potential new hires. Such activities can now be done with greater ease, much faster, and at a lower cost than prior to the adoption of ICT.
That said, new costs unique to the use of ICTs have also become apparent. One such cost is related to the large amount of available irrelevant information, which is due to irregular updating. Another cost arises as information gets accessed without the knowledge of those to whom it pertains, thereby raising privacy concerns. The use of ICT as embodied by AI-backed recommendation systems can also result in costs due to suboptimal decisions arising from biases inherent in the underlying algorithms and data. Finally, the potential misuse of ICTs by users who lack the appropriate skills can also be costly and introduces new risks.
New policies need to take these costs and benefits into account. The greatest potential lies in policies that seek to equip recruiters with the skills to adopt and use ICT effectively. This could be done through subsidies geared toward on-the-job training or by ensuring that such skills are taught in schools. A separate issue that requires regulatory oversight concerns the need for data retention. AI-based recommendation systems require large amounts of data. Some of these data are digital footprints left by job searchers' and employers' activities online. Longer retention of such data could violate users' rights as protected by privacy laws. It may also be problematic if the data's usefulness decays over time as it becomes obsolete. Regulators could require that the details of underlying algorithms and the data be publicly available for review to minimize biases that can be present in AI-backed recruiting [12].
Figure: Length of time job ads and resumes are posted on employment websites
Figure 1. Online traffic on websites that host job boards and resume banks in the US
|
2022-05-29T06:29:55.775Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "8c337a0ea763f04d2020b3232851f61d2e082609",
"oa_license": null,
"oa_url": "https://wol.iza.org/uploads/articles/577/pdfs/interaction-between-technology-and-recruiting-practices.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8c337a0ea763f04d2020b3232851f61d2e082609",
"s2fieldsofstudy": [
"Business",
"Computer Science",
"Economics"
],
"extfieldsofstudy": []
}
|
246456484
|
pes2o/s2orc
|
v3-fos-license
|
Differentiation Between Primary Central Nervous System Lymphoma and Atypical Glioblastoma Based on MRI Morphological Feature and Signal Intensity Ratio: A Retrospective Multicenter Study
Objectives To investigate the value of morphological feature and signal intensity ratio (SIR) derived from conventional magnetic resonance imaging (MRI) in distinguishing primary central nervous system lymphoma (PCNSL) from atypical glioblastoma (aGBM). Methods Pathology-confirmed PCNSLs (n = 93) or aGBMs (n = 48) from three institutions were retrospectively enrolled and divided into training cohort (n = 98) and test cohort (n = 43). Morphological features and SIRs were compared between PCNSL and aGBM. Using linear discriminant analysis, multiple models were constructed with SIRs and morphological features alone or jointly, and the diagnostic performances were evaluated via receiver operating characteristic (ROC) analysis. Areas under the curves (AUCs) and accuracies (ACCs) of the models were compared with the radiologists’ assessment. Results Incision sign, T2 pseudonecrosis sign, reef sign and peritumoral leukomalacia sign were associated with PCNSL (training and overall cohorts, P < 0.05). Increased T1 ratio, decreased T2 ratio and T2/T1 ratio were predictive of PCNSL (all P < 0.05). ROC analysis showed that combination of morphological features and SIRs achieved the best diagnostic performance for differentiation of PCNSL and aGBM with AUC/ACC of 0.899/0.929 for the training cohort, AUC/ACC of 0.794/0.837 for the test cohort and AUC/ACC of 0.869/0.901 for the overall cohort, respectively. Based on the overall cohort, two radiologists could distinguish PCNSL from aGBM with AUC/ACC of 0.732/0.724 for radiologist A and AUC/ACC of 0.811/0.829 for radiologist B. Conclusion MRI morphological features can help differentiate PCNSL from aGBM. When combined with SIRs, the diagnostic performance was better than that of radiologists’ assessment.
INTRODUCTION
Preoperatively distinguishing primary central nervous system lymphoma (PCNSL) from glioblastoma (GBM) is of high clinical relevance because the treatment strategies for the two diseases differ substantially. In patients with GBM, surgical resection followed by concurrent chemoradiation is the first-line treatment, whereas patients with PCNSL usually undergo stereotactic biopsy followed by high-dose methotrexate (1,2). Moreover, preoperative application of steroids may affect the histopathologic diagnosis of PCNSL (2). Therefore, reliable preoperative differentiation of the two entities is important.
Conventional magnetic resonance (MR) imaging features allow PCNSL to be distinguished from typical GBM in most patients, because PCNSL in an immunocompetent patient usually manifests as a homogeneously enhanced mass lesion on contrast-enhanced T1-weighted (T1CE) images, whereas typical GBM usually exhibits irregular rim-like enhancement with necrosis (3,4). However, this enhancement pattern is not reliable in cases of atypical glioblastoma (aGBM) with no visible necrosis, which complicates the discrimination between aGBM and PCNSL (5,6).
Both conventional and advanced MR techniques have been reported to be helpful in differentiating PCNSL from GBM (7)(8)(9)(10)(11)(12). However, most of these studies enrolled all GBM patients, whose tumors can be differentiated from PCNSL based on findings of conventional MRI in most cases. A few studies on differentiating PCNSL from aGBM involve advanced imaging sequences or radiomics strategies (5,6,13,14). Despite great advances, these techniques are associated with increased costs and postprocessing time and may not be routinely available for every patient in clinical practice. In contrast, T2-weighted imaging (T2WI), T1-weighted imaging (T1WI), and T1CE imaging are almost always available. A systematic evaluation of the MRI morphological features of PCNSL and aGBM is, however, lacking. As an important supplement to subjective analysis, easily obtained quantitative parameters can provide further diagnostic information. Considering that the pathophysiological differences between PCNSL and aGBM may be reflected in the signal intensity ratio (SIR), whether SIR analysis is effective in distinguishing aGBM from PCNSL remains largely unknown.
Here, we endeavored to compare morphological features and analyze SIR based on conventional MR sequences (T 1 WI, T 2 WI, and T 1 CE) to develop a quick and easy tool for differentiation of PCNSL and aGBM.
MATERIALS AND METHODS
Ethics review board approvals from three institutions were obtained, and written informed consent was waived for this retrospective study.
Patients
Potentially eligible patients from Tangdu Hospital (from January 2012 to June 2021), XD Group Hospital (from January 2015 to May 2021), and West China Hospital (from January 2016 to June 2021) were identified with pathologically proven PCNSL or GBM.
Inclusion criteria were as follows: 1) no prior treatment history before MR examination, including biopsy, surgery, radiotherapy, chemotherapy, or corticosteroid treatment; 2) pretreatment MRI with conventional sequences available, including axial T1WI, T2WI, and T1CE imaging; 3) no hemorrhage inside the tumor based on T1WI and T2WI; 4) all PCNSL patients were immunocompetent. The exclusion criteria were as follows: 1) typical GBM with visible necrosis; 2) poor image quality with motion or susceptibility artifacts; 3) intracranial metastasis from systemic lymphoma. Atypical GBM was defined as solid enhancement with no visible necrosis based on axial T2WI and T1CE imaging, which were evaluated by two independent raters (YY and GX, with 5 and 10 years of experience in neuro-oncology imaging, respectively). When discrepancies existed, consensus was reached through discussion with a senior radiologist (G-BC, with 27 years of experience in brain tumor diagnosis).
According to the inclusion and exclusion criteria, 98 patients (center 1, n = 72; center 2, n = 26) with pathologically proven PCNSL (n = 66) or aGBM (n = 32) were consecutively enrolled and comprised the training cohort. Another cohort of 43 patients from center 3 with a diagnosis of PCNSL (n = 27) or aGBM (n = 16) comprised the external test cohort. The flow diagram for patient selection is shown in Figure 1.
MR Image Acquisition
MRI scans were performed at three institutions with different protocols and various scanners. The routine sequences included axial T 1 WI, T 2 WI, and T 1 CE imaging. The detailed MRI parameters are provided in Table S1 in the Supplementary Material. All patient names were de-identified prior to analysis.
Image Analysis
Qualitative morphological features, which were characterized based on the criteria outlined in Table 1, were analyzed independently by two neuroradiologists (YY and GX), who were blinded to the final results. The inconsistency between them was resolved by discussion with a third senior neuroradiologist (G-BC). Notably, reef sign, peritumoral leukomalacia sign, and T 2 pseudonecrosis sign were defined in our study for the first time (representative cases, see Supplementary Material Figure S1).
ITK-SNAP software (version 3.8.0; http://itksnap.org) was used for SIR analysis (15). The two abovementioned neuroradiologists independently placed regions of interest (ROIs) for consistency testing. The details of the ROI placement strategy are shown in Figure S2 and Table S2 in the Supplementary Material. Finally, four quantitative parameters, the T2 ratio (rT2), T1 ratio (rT1), T1CE ratio (rT1CE), and rT2/rT1 ratio (T2/T1), were obtained for each patient. With SI_lesion denoting the mean signal intensity of the lesion and SI_control the mean signal intensity of contralateral normal white matter, the calculation formulas are as follows:

rT2 = SI_lesion on T2WI / SI_control
rT1 = SI_lesion on T1WI / SI_control
rT1CE = SI_lesion on T1CE / SI_control
T2/T1 = rT2 / rT1
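As a minimal illustration of these ratios (the intensity values below are hypothetical, chosen only to reproduce the rT1, rT2, and rT1CE reported for the PCNSL case in Figure 2; in practice the mean intensities come from the ITK-SNAP ROIs), the computation is a few lines of arithmetic:

```python
def signal_intensity_ratios(lesion, control):
    """Compute SIRs from mean ROI intensities.

    lesion/control: dicts of mean signal intensity per sequence; the control
    ROI sits in contralateral normal white matter (assumed to be measured
    on each sequence separately).
    """
    rT1 = lesion["T1WI"] / control["T1WI"]
    rT2 = lesion["T2WI"] / control["T2WI"]
    rT1CE = lesion["T1CE"] / control["T1CE"]
    return {"rT1": rT1, "rT2": rT2, "rT1CE": rT1CE, "T2/T1": rT2 / rT1}

# Hypothetical intensities yielding rT1 = 0.65, rT2 = 1.20, rT1CE = 1.87:
print(signal_intensity_ratios(
    {"T1WI": 195.0, "T2WI": 420.0, "T1CE": 561.0},
    {"T1WI": 300.0, "T2WI": 350.0, "T1CE": 300.0}))
```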
Radiologist's Assessment
Two neuroradiologists (LZ and L-FY, with 10 and 17 years of experience in radiology, respectively) independently reviewed the images. Both radiologists had no prior knowledge of the exact number of each entity or of the final results, and they had access only to the conventional MR images (T1WI, T2WI, and T1CE). Diagnosis was based on subjective analysis according to their clinical experience. The final diagnosis was recorded using a 4-point scale (1 = definite GBM; 2 = likely GBM; 3 = likely PCNSL; and 4 = definite PCNSL). To assess intra-observer agreement, the radiologists reevaluated the images after a 2-month washout period.
Statistical Analysis
All statistical analyses were performed with SPSS 20.0 software (IBM Corp., Chicago, IL, USA) and R software version 3.6.1 (http://www.R-project.org). Normality of the data was assessed with the Kolmogorov-Smirnov test. Normally distributed numerical variables are reported as mean and standard deviation. Continuous and categorical variables were compared using the two-sample t-test and Fisher's exact test, respectively. The intraclass correlation coefficient (ICC) was used to test the consistency of SIRs between the two radiologists. Intra-observer agreement of the radiologists' assessment was evaluated with Cohen's kappa coefficient. Linear discriminant analysis (LDA) models for distinguishing aGBM from PCNSL were constructed with SIRs and morphological features alone or jointly. Receiver operating characteristic (ROC) analysis was performed to determine the performance of the radiologists' assessment and of the different models in the training, test, and overall cohorts, and the accuracy (ACC) and area under the curve (AUC) were obtained. P < 0.05 indicated a significant difference.
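For readers who prefer a concrete sketch of this modeling step, the following toy example (synthetic data; scikit-learn stands in for the SPSS/R tooling actually used) fits an LDA model and reports AUC and ACC:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(0)
n = 141  # overall cohort size; the feature values here are synthetic
y = rng.integers(0, 2, size=n)  # 1 = PCNSL, 0 = aGBM

# Toy features: [rT2, rT1, T2/T1, reef sign, leukomalacia sign, incision sign];
# PCNSL rows are shifted toward lower rT2 and T2/T1 and higher rT1.
shift = np.array([-0.6, 0.4, -0.8, 0.5, 0.5, 0.5])
X = rng.normal(size=(n, 6)) + y[:, None] * shift

lda = LinearDiscriminantAnalysis().fit(X, y)
p_pcnsl = lda.predict_proba(X)[:, 1]   # probability of PCNSL
print("AUC:", roc_auc_score(y, p_pcnsl))
print("ACC:", accuracy_score(y, lda.predict(X)))
```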
Demographic Characteristics
Patient demographic characteristics are summarized in Table 3. Incision sign, reef sign, T2 pseudonecrosis sign, and peritumoral leukomalacia sign were detected in the PCNSL group but not in the aGBM group. Among them, reef sign and peritumoral leukomalacia sign were statistically different in both the training (all P < 0.001) and test cohorts (reef sign, P = 0.003; peritumoral leukomalacia sign, P = 0.018). Similarly, significant differences between the two groups were observed for incision sign and T2 pseudonecrosis sign in the training cohort (all P < 0.001), whereas the differences in the test cohort were not statistically significant. To account for the small sample size of the test cohort and increase statistical power, we combined the training and test cohorts and repeated the statistical analysis on the overall cohort. The results showed that incision sign and T2 pseudonecrosis sign were significantly different between the two groups (all P < 0.001).
In addition, PCNSL was more likely than aGBM to involve both the supratentorial and infratentorial compartments based on the overall cohort (P = 0.036). There were no significant differences in lesion type, streak-like edema, butterfly sign, angular sign, or involvement of structures between the PCNSL and aGBM groups (all P > 0.05).
Comparison of Signal Intensity Ratios Between Primary Central Nervous System Lymphoma and Atypical Glioblastoma
The rT2, rT1, T2/T1, and rT1CE values calculated for PCNSLs and aGBMs are summarized in Table 4. The T2/T1 and rT2 values in aGBMs were significantly higher than those in PCNSLs in both the training and test cohorts (all P < 0.001). The rT1 value in aGBMs was significantly lower than that in PCNSLs (training cohort, P < 0.001; test cohort, P = 0.048). The rT1CE value of PCNSLs was slightly higher than that of aGBMs, but the difference was not statistically significant (all P > 0.05). Representative cases are shown in Figures 2 and 3. For the radiologists' assessment, the diagnostic performance of the more experienced radiologist B (AUC = 0.811, ACC = 0.829, sensitivity = 0.857, and specificity = 0.831) was better than that of radiologist A (AUC = 0.732, ACC = 0.724, sensitivity = 0.736, and specificity = 0.710).
Reproducibility of Signal Intensity Ratio Measurement and Radiologist's Assessment

Table 6 shows that both the inter-reader agreement for SIR measurement and the intra-reader agreement for the radiologists' assessment achieved good performance, with ICC/kappa values ranging from 0.796 to 0.913. For SIR measurements, inter-reader agreement was highest for the measurement of rT2 (ICC = 0.913). Regarding the reproducibility of the radiologists' assessment, the experienced radiologist B (kappa = 0.903) showed higher intra-reader agreement than radiologist A (kappa = 0.796).
DISCUSSION
Differentiating PCNSL from aGBM (with no visible necrosis) is challenging. In the present study, we found that T 2 pseudonecrosis sign, incision sign, reef sign, and peritumoral leukomalacia sign were closely related to PCNSL. Compared to radiologist's assessment, model 1, which combined the SIRs and MRI morphological features, achieved the best diagnostic performance in distinguishing PCNSL from aGBM.
During the past decades, various MR modalities and different analysis strategies have been explored to differentiate PCNSL from GBM (7-10, 13, 14, 16, 17), whereas the present study focused on SIR analysis of conventional MR sequences, mainly based on the following four considerations. First, in clinical practice, T1WI, T2WI, and T1CE imaging are routinely obtained for patients across different hospitals (18). In contrast, advanced MRI techniques, such as diffusion-weighted imaging (DWI) and perfusion-weighted imaging (PWI), are performed only when necessary and require additional expense and time. Furthermore, no unified standard has been established for differential diagnosis. For example, although several prior studies have confirmed the efficiency of DWI in distinguishing PCNSL from GBM, overlapping parameters make accurate differential diagnosis challenging (19)(20)(21). Likewise, PWI is another commonly used technique, but its limited quantitative measurement reproducibility means there is no unified threshold to distinguish the two entities (22,23). Second, a radiomics approach can be used for the differential diagnosis of PCNSL and GBM. Despite promising results, a recent systematic review suggested that conclusions derived from radiomics should be interpreted with caution due to the suboptimal quality of the studies (17). In contrast, the traditional analysis method is time-saving and easy to implement and interpret clinically. Third, the clinical experience of radiologists suggests that PCNSL has slightly higher T1WI and lower T2WI signal intensity than GBM. However, visual judgment is subjective, and precise quantitative assessment is needed, especially for lesions that cannot be differentiated by the naked eye. Although T1 and T2 mapping can accurately quantify the T1 and T2 values of tissue, they are not performed as routine sequences due to long scanning times and complex postprocessing. In contrast, the signal intensity of a lesion is easily obtained from T1WI and T2WI but is susceptible to many factors, including the characteristics of the tissue itself (T1 value, T2 value, and proton density) as well as MRI equipment and scanning parameters (field strength, repetition time, and echo time). Therefore, in this study, the SIR was used as a quantitative parameter to eliminate the influence of different MRI scanners and imaging parameters on the results. Similar to our study design, the SIR has also shown potential for differential diagnosis in other scenarios (12,(24)(25)(26). However, different from previous studies, we used an external test cohort to further clarify the actual diagnostic performance of the SIR.
Figure 2. (A-C) A 68-year-old woman with primary central nervous system lymphoma (PCNSL) presented with left hemiparesis for 1 month. MRI showed a left frontal lobe lesion with iso- to slight hyperintensity on T2WI (A), slight hypointensity on T1WI (B), and marked homogeneous enhancement on T1CE imaging (C) (taking gray matter for reference). The quantitative parameters rT1, rT2, T2/T1, and rT1CE were 0.65, 1.20, 1.82, and 1.87, respectively. The case was correctly diagnosed as PCNSL by models 1, 2, 4, and 5 and by radiologist B, while it was wrongly classified as glioblastoma (GBM) by radiologist A. (D-F) A 43-year-old woman with GBM presented with seizure. MRI showed a left frontal lobe lesion with isointensity on T2WI (D), slight hypointensity on T1WI (E), and marked homogeneous enhancement on T1CE imaging (F) (taking gray matter for reference). The quantitative parameters rT1, rT2, T2/T1, and rT1CE were 0.66, 1.46, 2.25, and 2.11, respectively. The case was correctly diagnosed as GBM by models 1, 2, 4, 5, and 6 and by radiologist B, while it was wrongly classified as PCNSL by radiologist A.
Fourth, our study did not involve complex image preprocessing such as image registration, brain extraction, and standardization. The ITK-SNAP software used in our study allows simultaneous quantitative measurement of the T1WI and T2WI signal intensity in the same ROI without image registration. The entire analysis was limited to the time required to identify lesions and electronically place ROIs. From the clinical point of view, this approach may be a highly cost-effective quantitative analysis tool.
Most previous studies enrolled all PCNSL and GBM cases regardless of intratumoral necrosis, which is itself a powerful indicator for distinguishing the two entities; this inclusion criterion could partially explain their higher ACC (7,8,10,27). Therefore, we reasoned that confining our study to PCNSL and aGBM cases is closer to the clinical diagnostic dilemma and allows us to seek more powerful imaging signs for identifying the two entities. In our study, four morphological features were closely associated with PCNSL: incision sign, T2 pseudonecrosis sign, reef sign, and peritumoral leukomalacia sign. Among them, the diagnostic value of the incision sign has been confirmed in a previous study (28). T2 pseudonecrosis sign, reef sign, and peritumoral leukomalacia sign, defined by the present study for the first time, were observed only in PCNSL and not in aGBM. For the T2 pseudonecrosis sign, the mismatch between heterogeneous T2WI signals and homogeneous enhancement is the diagnostic core, which may be related to the degree of tumor infiltration along the white matter fiber bundles. The reef sign was defined as single or multiple foci that present as hypointensity on T1WI, hyperintensity on T2WI, and a brighter signal within the contrast-enhanced area of the lesion. Although the corresponding pathological mechanism of this sign is still unclear, it may be related to leakage of contrast medium in the tumor area (29). The peritumoral leukomalacia sign was defined as an area of hypointensity on T1WI and hyperintensity on T2WI in the region adjacent to the tumor. A possible explanation is that PCNSL cells are closely arranged and cluster along vascular channels, which destroys the blood supply of the adjacent brain parenchyma, resulting in encephalomalacia (30). The above four imaging signs were statistically significant between PCNSL and aGBM based on the overall cohort, so we believe that these signs may be useful in daily radiological practice and help differentiate PCNSL and aGBM.

Figure 3. (A-C) A 60-year-old woman with primary central nervous system lymphoma (PCNSL) presented with right hemiparesis for 3 months. MRI demonstrated a lesion located in the left basal ganglia and thalamus with slight hyperintensity on T2WI (A), hypointensity on T1WI (B), and marked heterogeneous enhancement on T1CE imaging (C) (taking gray matter for reference). The quantitative parameters rT1, rT2, T2/T1, and rT1CE were 0.83, 1.34, 1.62, and 1.23, respectively. The case was correctly diagnosed as PCNSL by models 1, 2, 4, 5, and 6 and by radiologist B, while it was wrongly classified as glioblastoma (GBM) by radiologist A. (D-F) A 23-year-old woman with GBM presented with nausea and vomiting for 2 months. MRI showed a vermis lesion with slight hyperintensity on T2WI (D), hypointensity on T1WI (E), and obvious homogeneous enhancement on T1CE imaging (F) (taking gray matter for reference). The quantitative parameters rT1, rT2, T2/T1, and rT1CE were 0.57, 1.91, 3.35, and 2.73, respectively. The case was correctly diagnosed as GBM by models 1, 2, 4, and 5, while it was wrongly classified as PCNSL by both radiologists.
In the present study, PCNSL had higher rT1 and lower rT2 than aGBM. The possible mechanism is that a high degree of cellularity and a high nuclear-to-cytoplasm ratio lead to decreased tumor water content (31,32), which contributes to the signal characteristics. Although the rT1CE of PCNSL was slightly higher than that of GBM, there was no significant difference between the two groups. This result differs from that of the study by Anwar et al. (9), which reported a sensitivity of 83.3%, a specificity of 85.7%, and an AUC of 0.92 for differentiating PCNSL and GBM. Different study populations, MRI sequence parameters, and the timing and dosage of MRI contrast administration may contribute to this inconsistency. Notably, compared with rT2 or rT1, T2/T1 achieved the highest AUC in distinguishing PCNSL from aGBM. The good diagnostic performance may be attributed to the fact that T2/T1 can provide better contrast as a quantitative tool. Several studies have confirmed that the T1/T2 ratio is useful in differentiating benign and malignant lesions in the breast (33) and liver (24) and in quantifying the demyelinated cortex in multiple sclerosis (25).
There are several limitations to the current study. First, our sample size was relatively small, especially for the aGBM group: across the three medical centers, only 48 patients with atypical, solid enhancement met the inclusion criteria. Second, radiologic-pathologic correlation of the morphological features was not performed. Third, although the repeatability and reproducibility of the SIR measurements were good, possible bias still existed due to the manual positioning of ROIs. Finally, our cohort included heterogeneous MRI equipment and scanning parameters, mimicking the circumstances encountered in a clinical setting. However, as a semiquantitative parameter, how the SIR is affected by different equipment and scanning parameters is not clear. A further prospective study is needed.

CONCLUSION

T2 pseudonecrosis sign, reef sign, and peritumoral leukomalacia sign are closely related to PCNSL and have not been reported before. Compared to the radiologists' assessment, the combined model of morphological features and SIRs provides better diagnostic performance in distinguishing PCNSL from aGBM.
DATA AVAILABILITY STATEMENT
The data analyzed in this study are subject to the following licenses/restrictions: the raw data are not publicly available because they contain information that could compromise research participant privacy/consent. Requests to access these datasets should be directed to hanyu0920@163.com.

Table note: bold P values indicate a significant difference between the variables in the two cohorts. Model 1: rT2 + T2/T1 + rT1 + localization + incision sign + reef sign + peritumoral leukomalacia sign + T2 pseudonecrosis sign. Model 2: rT2 + T2/T1 + rT1. Model 3: localization + incision sign + reef sign + peritumoral leukomalacia sign + T2 pseudonecrosis sign. Model 4: rT2. Model 5: T2/T1. Model 6: rT1. PCNSL, primary central nervous system lymphoma; aGBM, atypical glioblastoma; CI, confidence interval; AUC, area under the curve; ACC, accuracy; PPV, positive predictive value; NPV, negative predictive value.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the institutional review board from Tangdu Hospital, XD Group Hospital, and West China Hospital. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
AUTHOR CONTRIBUTIONS
G-BC and L-FY conceived the study. YH, Z-JW, and W-HL participated in the study design. YH, Z-JW, W-HL, YY, JZ, X-BY, LZ, GX, S-ZW, and L-FY performed the data acquisition. L-FY and YH participated in the statistical analyses. All authors participated in the data interpretation. YH drafted the first version of the report. All authors contributed to the article and approved the submitted version.
|
2022-02-02T16:02:01.855Z
|
2022-01-31T00:00:00.000
|
{
"year": 2022,
"sha1": "43f2af78b4b6c74a35bb9cdbb1ad05278e66f358",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2022.811197/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fcf5b088738560f4b227024e8ccfa97add240eb2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
9217979
|
pes2o/s2orc
|
v3-fos-license
|
Anti-GPC3-CAR T Cells Suppress the Growth of Tumor Cells in Patient-Derived Xenografts of Hepatocellular Carcinoma
Background The lack of a general, clinically relevant model for human cancer is a major impediment to the acceleration of novel therapeutic approaches for clinical use. We propose to establish and characterize primary human hepatocellular carcinoma (HCC) xenografts that can be used to evaluate the cytotoxicity of adoptive chimeric antigen receptor (CAR) T cells and accelerate the clinical translation of CAR T cells used in HCC. Methods Primary HCCs were used to establish the xenografts. The morphology, immunological markers, and gene expression characteristics of the xenografts were assessed and compared to those of the corresponding primary tumors. CAR T cells were adoptively transplanted into patient-derived xenograft (PDX) models of HCC, and their cytotoxicity in vivo was evaluated. Results PDX1, PDX2, and PDX3 were established using primary tumors from three individual HCC patients. All three PDXs maintained the original tumor characteristics in their morphology, immunological markers, and gene expression. Tumors in PDX1 grew relatively more slowly than those in PDX2 and PDX3. Glypican 3 (GPC3)-CAR T cells efficiently suppressed tumor growth in PDX3 and impressively eradicated tumor cells from PDX1 and PDX2, in which GPC3 proteins were highly expressed. Conclusion GPC3-CAR T cells were capable of effectively eliminating tumors in PDX models of HCC. Therefore, GPC3-CAR T cell therapy is a promising candidate for HCC treatment.
INTRODUCTION

Hepatocellular carcinoma (HCC) accounts for 90% of primary liver cancers and is one of the deadliest cancers in Asia (1)(2)(3). Current curative approaches for liver cancer mainly involve partial liver resection, liver transplantation, chemotherapy, and transarterial chemoembolization (4,5). Despite enormous advances in the diagnosis and treatment of liver cancer in recent decades, the 5-year survival rate has remained at about 10% (6,7). Thus, novel strategies, such as immunotherapy with T cells genetically engineered to express a chimeric antigen receptor (CAR), are now being tested in clinical trials (http://www.clinicaltrials.gov). To accelerate these clinical trials, careful preclinical evaluations in models that closely mirror the clinical situation are urgently required. Patient-derived xenografts (PDXs) are generated by implanting cancerous tissue from a patient's primary tumor directly into an immunodeficient mouse (8). This technique offers several advantages over standard cell line xenograft models. Unlike cancer cell lines, primary tumor cells are directly derived from human tissues and are not subjected to frequent high-serum environments and passages. Thus, PDX models are more biologically stable when passaged in mice in terms of mutational status, gene expression patterns, drug responsiveness, and tumor heterogeneity (9). Despite these benefits, only two studies report the use of PDX models of HCC in drug testing (10,11). No study has yet examined the use of CAR T cells in PDX models of HCC. Thus, it is necessary to carry out preclinical evaluation of novel CAR T cells against HCC in PDX models.
It has been shown that glypican-3 (GPC3), a 580-amino-acid heparan sulfate proteoglycan, is expressed in 75% of HCC samples but not in healthy liver or other normal tissue (12). GPC3 is, therefore, a suitable target for CAR T cell therapy. Two previous studies showed promising activity of GPC3-CAR T cells against HCC cell lines in vivo (13,14). However, the capacity of GPC3-CAR T cells to eliminate HCC has not yet been evaluated in PDX models. In this study, we established and characterized primary human HCC xenografts to assess the cytotoxicity of adoptive GPC3-CAR T cells.
Establishment of HCC Xenografts
Written informed consent was obtained from 12 patients, and the study received ethics approval from the Research Ethics Board of GIBH and the Second Affiliated Hospital of Guangzhou Medical University. All experimental protocols were performed in accordance with guidelines set by the China Council on Animal Care and the Ethics Committee of Animal Experiments at GIBH. The mice were provided with sterilized food and water ad libitum and were housed in negative pressure isolators with 12-hour light/dark cycles. The isolation was performed following a previously described method with some modifications. The diagnosis of HCC was confirmed by histologic analysis in all cases. HCC tissues were transplanted into NOD/SCID/IL2rg−/− (NSI) mice sourced from Li's lab (15)(16)(17). Primary HCC tumors were placed in RPMI 1640 in an ice bath. Thin slices of tumor were diced into ~25 mm³ pieces. The tissue was transplanted subcutaneously into the right flank of 8-week-old male NSI mice. Growth of the established tumor xenografts was monitored at least twice weekly through measurement of the length (a) and width (b) of the tumor. The tumor volume was calculated as (a × b²)/2. For serial transplantation, tumor-bearing animals were anesthetized with diethyl ether and sacrificed via cervical dislocation. Tumors were minced under sterile conditions and transplanted into successive NSI mice as described earlier.
For the Huh-7 and HepG2 xenograft models, mice were inoculated subcutaneously with 2 × 10⁶ tumor cells (Huh-7 or HepG2) on the right flank. When the tumor volume was approximately 50-100 mm³, the xenografts were randomly allocated into two groups, and the mice were given an intravenous injection of human GPC3-CAR T or Control-CAR T cells in 200 µL of phosphate-buffered saline as indicated. The tumor volume was calculated as (a × b²)/2.
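The caliper-based volume estimate reduces to a one-line helper; the sketch below (function name ours) mirrors the modified-ellipsoid formula used above:

```python
def tumor_volume_mm3(length_mm, width_mm):
    """Modified-ellipsoid estimate V = (a * b^2) / 2 from caliper measurements."""
    return length_mm * width_mm ** 2 / 2.0

# Example: a 10 mm x 6 mm xenograft gives (10 * 36) / 2 = 180 mm^3.
print(tumor_volume_mm3(10.0, 6.0))
```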
Genes and Lentiviral Vectors
To generate CARs targeting GPC3, the gene of the anti-GPC3 scFv, based on the GC33 antibody (18), and that of an anti-CD19 scFv serving as control scFv were first synthesized and subcloned in frame into lentiviral vectors containing expression cassettes encoding an IgM signal peptide and CD28, 4-1BB, and CD3ζ signaling domains under the control of an EF-1α promoter. The sequence of each cloned CAR was verified via sequencing.
Isolation, Transduction, and Expansion of Primary Human T Lymphocytes

Peripheral blood mononuclear cells (PBMCs) were separated via density gradient centrifugation (Lymphoprep, Stem Cell Technologies, Vancouver, BC, Canada). Primary human T cells were isolated from PBMCs via negative selection using the Pan T Cell Isolation Kit (Miltenyi Biotec, Germany). T cells were cultured in RPMI 1640 supplemented with 10% FCS (Gibco, Life Technologies), 100 U/mL penicillin, and 100 µg/mL streptomycin sulfate (R10) and were stimulated with particles coated with anti-CD3/anti-CD28 antibodies (Miltenyi Biotec, Germany) at a cell-to-bead ratio of 1:2. Approximately 72 h after activation, T cells were transduced with supernatant containing lentiviral vectors expressing Control- or GPC3-CARs. After transduction for 12 h, T cells were cultured in R10 medium supplemented with IL-2 (300 IU/mL). T cells were fed with fresh media every 2 days and were used within 21 days of expansion in all experiments.
Cytotoxicity Assays
The target cells HepG2-GL, Huh-7-GL, and A549-GL were incubated with Control-CAR T or GPC3-CAR T cells at the indicated ratios in triplicate wells in U-bottomed, 96-well plates. Target cell viability was monitored 24 h later by adding 100 µL/well of the substrate D-luciferin (potassium salt; Cayman Chemical, USA) dissolved at 150 µg/mL. The background luminescence was negligible (<1% of the signal from wells containing only target cells). The viability percentage (%) was therefore equal to 100 × (experimental signal/maximal signal), and the killing percentage was equal to 100 − viability percentage.
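The luminescence read-out reduces to the following arithmetic (a minimal sketch; the variable names are illustrative and both signals are assumed background-subtracted):

```python
def percent_killing(experimental_signal, maximal_signal):
    """Viability% = 100 * experimental/maximal; killing% = 100 - viability%."""
    viability = 100.0 * experimental_signal / maximal_signal
    return 100.0 - viability

# Example: target-only wells read 1.0e6 counts, co-culture wells 2.5e5 counts.
print(percent_killing(2.5e5, 1.0e6))  # -> 75.0 (% killing)
```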
Enzyme-Linked Immunosorbent Assay (ELISA)
ELISA kits for IL-2 and interferon-γ (IFN-γ) were purchased from eBioscience, San Diego, CA, USA, and all ELISAs were conducted in accordance with the manuals provided. Control-CAR T and GPC3-CAR T cells were co-cultured with target cells at a 1:1 E/T ratio for 24 h in duplicate wells, from which the supernatant was collected and measured for the concentrations of IL-2 and IFN-γ.
Quantitative Real-Time Polymerase Chain Reaction (PCR)

mRNA was extracted from cells with TRIzol reagent (Qiagen, Stockach, Germany) and reverse transcribed into cDNA using the PrimeScript™ RT Reagent Kit (Takara, Japan). All reactions were performed with TransStart Tip Green qPCR SuperMix (TransGene, Beijing, China) on a Bio-Rad CFX96 real-time PCR machine (Bio-Rad, Hercules, CA, USA), using the primers shown in Table S1 in the Supplementary Material. Delta-Ct calculations were relative to β-actin and corrected for PCR efficiencies.
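A minimal sketch of an efficiency-corrected relative quantification in the spirit described above (Pfaffl-style; the per-gene efficiencies are assumed to have been estimated from standard curves, and all names are illustrative):

```python
def relative_expression(ct_target_sample, ct_target_ref,
                        ct_actb_sample, ct_actb_ref,
                        eff_target=2.0, eff_actb=2.0):
    """Fold change of a target gene normalized to beta-actin.

    eff = 2.0 corresponds to 100% PCR efficiency (doubling per cycle);
    efficiency-corrected values typically fall between 1.8 and 2.0.
    """
    target_ratio = eff_target ** (ct_target_ref - ct_target_sample)
    actb_ratio = eff_actb ** (ct_actb_ref - ct_actb_sample)
    return target_ratio / actb_ratio

# Example: target amplifies 3 cycles earlier in the sample than in the
# reference while beta-actin is unchanged -> ~8-fold upregulation.
print(relative_expression(22.0, 25.0, 18.0, 18.0))
```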
Flow Cytometry
Flow cytometry for the GFP percentage of transduced T cells and for GPC3 and PD-L1 expression on HCC cells was performed on a C6 cytometer and analyzed using FlowJo software. The PBMCs, spleens, and bone marrow (BM) from xenograft mice were treated with a red blood cell lysis buffer (BioLegend), and the cells were stained with anti-hCD3, hCD4, and hCD8 and analyzed on a Fortessa cytometer (BD Biosciences). All FACS staining was performed on ice for 30 min, and cells were washed with PBS containing 2% FBS before cytometry. Mouse tissues were weighed and harvested into ice-cold RPMI 1640, manually morselized with a scalpel, and then mechanically disaggregated through 40- to 100-µm filters.
Histological Analysis
Organ or tissue samples were fixed in 10% neutral formalin, embedded in paraffin, sectioned at 4-µm thickness, and stained with hematoxylin and eosin or antibodies (GPC3 and AFP). Images were obtained on a microscope (Leica DMI6000B, Leica Microsystems, Wetzlar, Germany).
Statistics
The data are presented as the mean ± SEM. The results were analyzed via an unpaired Student's t-test (two-tailed). Statistical significance was defined by a P value of less than 0.05. All statistical analyses were performed using the Prism software version 6.0 (GraphPad).
RESULTS

Establishment and Characterization of HCCs from PDXs
Of the 12 transplanted tumors, six did not grow in the first generation (P1). PDXs of HCC were successfully engrafted in immunodeficient mice (NSI, NOD/SCID-IL2rg−/−) from the remaining six tumors. Of these, three xenografts were propagated beyond the third generation (P3) (Figure 1A), whereas three tumors were still growing in the first generation (P1). Taken together, a success rate of 25% was reached when propagation to the third generation was considered successful engraftment. An overview of the successfully growing PDX models and the clinical characteristics of the original patients is shown in Table 1.
To validate the established PDX models, we compared their morphology, immunological markers (GPC3 and AFP), and gene expression with those of the corresponding primary tumors. Histologic evaluation of the xenografts revealed tumor tissue with morphologic characteristics like those of the original primary human tumor (Figure 1B). Immunologic markers of liver tumors such as GPC3 and AFP were detected in both primary patient tumors and xenografts (Figure 1C). Quantitative reverse transcription PCR was performed to characterize the mRNA expression levels of tumor-related genes in the xenografts and primary tumors (Figure 1D). These genes are associated with carcinogenesis, aggression, and characterization of HCCs (19,20). Unsupervised hierarchical clustering of selected transcriptional profiles confirmed that all patient and xenograft pairs cluster together (Figure S1 in Supplementary Material). Collectively, our results indicate that PDXs of HCC in mice recapitulate the original disease and remain stable through three serial transplantations.
T Cells Engineered to Express GPC3-CARs
The sequence encoding the anti-GPC3 scFv (Figure S2A in Supplementary Material) was cloned in frame into lentivirus vectors containing CAR expression cassettes with CD28, 4-1BB, and CD3ζ endodomains (Figure 2A). For the generation of T cell populations expressing the anti-hGPC3 CAR, a two-step optimized expansion protocol was developed. CD3/CD28-activated T cells were transduced with the GPC3-CAR construct after 72 h to generate GPC3-CAR T cells. The expression of CARs was measured via flow cytometry through eGFP expression. CARs were stably expressed from day 7 to day 14 with no significant difference (Figure 2B). The frequency of CAR expression was 58.6% for CD19-CAR and 49.2% for GPC3-CAR (Figure 2C). Flow cytometric analysis using a goat anti-mouse F(ab)2 antibody confirmed that the expression of CAR molecules was consistent with eGFP (Figure S2B in Supplementary Material). In our optimized expansion protocol, T cells began to expand at day 3 and continued to expand until day 21. Reproducible 20- to 50-fold expansion of T cells was achieved by day 14 (Figure 2D). Collectively, these experiments established a robust two-step method to transduce and expand (up to 50-fold) CAR-transduced T cells from the peripheral blood of healthy donors.
Phenotypic and Functional Characterization of GPC3-CAR T Cells
To better define the phenotype of CAR-transduced T lymphocytes after infection, we next profiled 35 different cell surface markers. CAR T cells were compared at the beginning (day 0) and the middle (day 14) of the T cell culture process. We observed upregulation of the activation markers CD25 and CD27, the migration marker CCR7 (Figure 3A), and the costimulatory receptors CD86 (21) and CD137 (Figure 3A), which are indicators of enhanced proliferative potential of T cells. After in vitro culture, T cells acquired an intermediate effector memory phenotype with progressive downregulation of CD28 and CD62L. Moreover, we observed upregulation or downregulation of multiple molecules involved in cell adhesion: CD18, CD44, and CD49d were upregulated, whereas CD49f, CD107a, and CD56 were downregulated (Figure 3A). Notably, key inhibitory and exhaustion-associated molecules such as PD-1, CTLA-4, and TIM3 were upregulated (Figure 3A). Importantly, we found strikingly similar CAR T cell phenotypes across all six tested donors, as illustrated in the heat map in Figure S3E in Supplementary Material. A hallmark function of activated T lymphocytes is the production of cytokines. To evaluate this production, we co-cultured CAR T cells with GPC3-positive HCC cell lines as target cells (Figure S4A in Supplementary Material). GPC3-CAR T cells secreted high levels of IFN-γ and IL-2 only after coincubation with GPC3-positive targets (Figures 3B,C). These data collectively characterize CAR T cells as a highly reproducible cellular product of activated lymphocytes, endowed with migratory potential and natural cytotoxic machinery.
Effective Serial Killing of GPC3-Positive Human HCC Cells by GPC3-CAR T Cells
To test whether GPC3-CAR T cells could specifically recognize and kill GPC3-positive targets, cytotoxicity assays were performed by incubating the CAR T cells with GPC3-positive HCC cells (Huh-7 and HepG2) and GPC3-negative cells (A549) (Figure S4A in Supplementary Material). GPC3-CAR T cells were highly cytotoxic against the GPC3-positive HCC cells Huh-7 and HepG2. By contrast, GPC3-CAR T cells did not target GPC3-negative cells (Figure 4A). These data demonstrate that GPC3-CAR T cells selectively target GPC3-positive tumor cells. It has been demonstrated that 4-1BB endodomains ameliorate exhaustion of CAR T cells (22). To further explore the cytotoxic potency of GPC3-CAR T cells incorporating 4-1BB costimulatory domains, we performed a co-culture in which CAR T cells were restimulated with GPC3-positive HCC cells every 24 h for three consecutive days at an E:T ratio of 1:1 (23). Killing of Huh-7 and HepG2 hepatoma cells was only observed when the GPC3-CAR was reconstituted (Figure 4B). Taken together, these results indicate that GPC3-CAR T cells displayed specific and efficient cytotoxicity against GPC3-positive target cells.
Adoptive Transfer of GPC3-CAR T Cells Suppresses the Growth of HCC Cell Lines In Vivo
To explore the killing of GPC3-positive tumors by GPC3-CAR T cells in vivo, we used a subcutaneous xenograft model in which transplant tumors were established in immunodeficient mice using the HepG2 and Huh-7 cell lines. Tumors were allowed to establish for 7 days, by which point a tumor volume of approximately 50-100 mm³ was reached. The mice were then treated via adoptive transfer of GPC3-CAR T or Control-CAR T cells. Tumor growth was efficiently suppressed by intravenous injection of 5 × 10⁶ GPC3-CAR T cells (n = 5), as compared to a control group that received Control-CAR T cells (n = 5) (Figures 5A-D). We also detected human T cells in the PBMCs and tumor tissues of mice with subcutaneous Huh-7 or HepG2 xenografts after T cell infusion (Figures 5E,F). These results show that GPC3-CAR T cells can efficiently suppress the growth of HCC cell lines in mice.
Patient-Derived HCC Xenografts Are Controlled by GPC3-CAR T Cells
Patient-derived xenograft models preserve the heterogeneous pathological and genetic characteristics of the original patient tumors and may provide a precise preclinical model for immunotherapy evaluation. Our results show that the GPC3 protein was highly expressed in the HCC xenografts, so we tested the effect of GPC3-CAR T cells in these PDX models. In all three individual PDX models, 2.5 × 10⁶ CAR T cells were given by intravenous injection twice after the tumor volume reached 50-100 mm³. An efficient antitumor effect was observed in the xenografts treated with GPC3-CAR T cells compared to Control-CAR T cells (Figures 6A-F). We observed that GPC3-CAR T cells had better cytotoxicity in PDX1 and PDX2 than in PDX3. We propose that the heterogeneous nature of tumors can affect the cytotoxicity of GPC3-CAR T cells toward tumors in vivo. Studies have shown that high expression of MET, CTNNB1, and CCND1 is associated with aggression of HCCs (24). MET, CTNNB1, and CCND1 were more highly expressed in PDX3 than in PDX1 and PDX2 (Figure 1D), implying that PDX3 tumor cells are more aggressive. Programmed cell death 1 (PD-1), an immunoinhibitory receptor belonging to the CD28 family, has been shown to be a frequently used physiologic immunosuppressive mechanism by which tumors evade host immunity (25). Our results show that PD-L1 was highly expressed in PDX3 but not in PDX1 and PDX2 (Table 1). These results suggest that tumor aggression and immunosuppressive molecules should be considered in CAR T cell therapy. T cell analysis also showed that GFP-positive T cells were more abundant in the GPC3-CAR T groups than in the Control-CAR T groups (Figures 6G-I).
Taken together, our results demonstrate that GPC3-CAR T cells were able to efficiently suppress the growth of primary GPC3-positive HCC in vivo.
DISCUSSION
In this study, we report the establishment of three PDX models from primary HCC in NSI mice. The xenografts were successfully serially transplanted while preserving the characteristics of the original patient tumors. We observed that tumors in the PDX2 and PDX3 xenografts grew faster than those in PDX1. Our data suggest that their growth behavior is positively correlated with the expression levels of MET, CTNNB1, and CCND1. Previous studies show that MET and CTNNB1 act as oncogenes in HCCs and that CCND1 is a hallmark of cell cycle progression (19,26,27).
Previous studies have shown that GPC3-CAR T cells efficiently eradicate liver cancer cell lines with high GPC3 expression in vivo (13,14). Here, we observed that GPC3-CAR T cells were less effective at killing HCC cell lines in xenografts compared to a previous report (13). A possible reason for this difference is that we used NOD/SCID/IL2rg−/− mice for xenotransplantation, whereas the previous report used NOD/SCID mice, in which natural killer (NK) cells are active against tumor cells. By comparing the subcutaneous growth of Huh-7, we observed that Huh-7 cells grew faster in NOD/SCID/IL2rg−/− mice than in NOD/SCID mice. In addition, tumor-experienced T cells (28) can promote NK cell activity against tumor cells. It is likely that GPC3-CAR T cells may kill tumor cells in synergy with mouse NK cells.
Patient-derived xenograft models have been commonly used to test drug efficacies and identify biomarkers in a number of cancers, including liver, ovarian, pancreatic, breast, and prostate cancers (9). Previous studies have shown that tumors in PDX models are biologically stable and accurately reflect the histopathology, gene expression, genetic mutations, and therapeutic response of the patient tumor (9). Several recent preclinical studies and clinical trials have demonstrated the efficient activity of CD19-CAR T cells against acute B lymphoblastic leukemia (29). However, CAR T cells that target solid tumors have so far demonstrated limited efficacy. To date, the most positive trials reported have used GD2 CARs to target neuroblastoma (3 of 11 patients with complete remissions), HER2 CARs for sarcoma (4 of 17 patients showing stable disease), and HER1 CARs for lung cancer (2 of 11 patients with partial responses) (30). We report that GPC3-CAR T cells impressively eradicated tumors from PDX1 and PDX2, which were less aggressive and PD-L1 negative. In contrast, GPC3-CAR T cells were less cytotoxic to tumors in PDX3, which were more aggressive and highly expressed PD-L1, suggesting that CAR T cell therapy may need to be combined with immune checkpoint inhibitors to achieve higher efficacy in eliminating PD-L1-positive HCC. Similarly, two recent reports showed that combining CAR therapy and PD-1 blockade was efficacious in breast cancer and mesothelioma models (31,32). Downregulated expression of GPC3 in HCC cells could in principle affect GPC3-CAR T-specific cytotoxicity to tumor cells; however, our data show that the percentage of GPC3-positive cells did not change between control and GPC3-CAR T treatment in vivo (Figures S4B,C in Supplementary Material).
In summary, we established and characterized three GPC3-positive PDXs of HCC. We also show that GPC3-CAR T cells suppressed tumor growth, with different efficacies, in the PDX models of the three individual patients. Therefore, PDX models can potentially be used to evaluate the efficacy of GPC3-CAR T cell therapy for treating HCC in individual patients.
AUTHOR CONTRIBUTIONS

ZJ and XJ contributed to the conception and design, collection and/or assembly of data, data analysis and interpretation, and manuscript writing. SC, XW, YL, SL, and QLiang contributed to the provision of study material or patients and collection and/or assembly of data. BL, SW, and QW provided administrative support. YY and DP contributed to the conception and design and provided financial support. YY, QLiu, and PLiu contributed to the conception and design. PX and PLi contributed to the conception and design, data analysis and interpretation, manuscript writing, and final approval of the manuscript, and provided financial support.
|
2017-05-03T18:57:07.229Z
|
2017-01-11T00:00:00.000
|
{
"year": 2016,
"sha1": "3ba6d6b94f5d825c193d6acfb1219eb8b41b7e95",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2016.00690/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3ba6d6b94f5d825c193d6acfb1219eb8b41b7e95",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
61588219
|
pes2o/s2orc
|
v3-fos-license
|
PLDA with Two Sources of Inter-session Variability
In some speaker recognition scenarios we find conversations recorded simultaneously over multiple channels. That is the case for the interviews in the NIST SRE dataset. To take advantage of that, we propose a modification of the PLDA model that considers two different inter-session variability terms. The first term is tied between all the recordings belonging to the same conversation whereas the second is not. Thus, the former mainly intends to capture the variability due to the phonetic content of the conversation while the latter tries to capture the channel variability. In this document, we derive the equations for this model. This model was applied in the paper "Handling Recordings Acquired Simultaneously over Multiple Channels with PLDA" published at Interspeech 2013.
1 The Model
PLDA
We take a linear-Gaussian generative model M. We suppose that we have i-vectors of the same conversations recorded simultaneously over different channels or under different noise conditions. Then, an i-vector φ_ijl of speaker i, session j, recorded over channel l can be written as

φ_ijl = µ + V y_i + U x_ij + ε_ijl ,   (1)

where µ is a speaker-independent term, V is the eigenvoices matrix, y_i is the speaker factor vector, U is the eigenchannels matrix, x_ij is a session offset, and ε_ijl is a channel offset. The term x_ij must be the same for all the recordings of the same conversation. The term ε_ijl accounts for the channel variability. We assume the following priors for the variables:

y ∼ N(y | 0, I) ,   (2)
x ∼ N(x | 0, I) ,   (3)
ε ∼ N(ε | 0, D⁻¹) ,   (4)

where N denotes a Gaussian distribution and D is a full-rank precision matrix. φ is an observable variable, and y and x are hidden variables.
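To make the generative process concrete, the following numpy sketch draws i-vectors from the model for one speaker with two conversations, each recorded over three channels (the dimensions are illustrative, and a diagonal precision stands in for the full-rank D):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, ny, nx = 400, 100, 50                  # i-vector and factor dimensions
mu = rng.normal(size=dim)                   # speaker-independent term
V = rng.normal(scale=0.1, size=(dim, ny))   # eigenvoices matrix
U = rng.normal(scale=0.1, size=(dim, nx))   # eigenchannels matrix
d = rng.uniform(1.0, 2.0, size=dim)         # diagonal of the precision D

y_i = rng.normal(size=ny)                   # speaker factor, one per speaker
for j in range(2):                          # two conversations (sessions)
    x_ij = rng.normal(size=nx)              # tied across channels of session j
    for l in range(3):                      # three simultaneous channels
        eps = rng.normal(size=dim) / np.sqrt(d)
        phi_ijl = mu + V @ y_i + U @ x_ij + eps
```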
Notation
We are going to introduce some notation: • Let Φ d be the development i-vectors dataset.
• Let Φ t = {l, r} be the test i-vectors.
• Let Φ be any of the previous datasets.
• Let θ d be the labelling of the development dataset. It partitions the N d i-vectors into M d speakers.
Each speaker i has H_i sessions, and each session j can be recorded over L_ij different channels.
• Let θ t be the labelling of the test set, so that θ t ∈ {T , N }, where T is the hypothesis that l and r belong to the same speaker and N is the hypothesis that they belong to different speakers.
• Let θ be any of the previous labellings.
• Let Φ i be the i-vectors belonging to the speaker i.
• Let Y d be the speaker identity variables of the development set. We will have as many identity variables as speakers.
• Let Y t be the speaker identity variables of the test set.
• Let Y be any of the previous speaker identity variables sets.
• Let X d be the channel variables of the development set.
• Let X t be the channel variables of the test set.
• Let X be any of the previous channel variables sets.
• Let X_i = [x_i1 . . . x_iH_i] be the channel variables of speaker i.
• Let M = (µ, V, U, D) be the set of all the model parameters.
Definitions
We define the sufficient statistics for speaker i. The zero-order statistic is the number of observations of speaker i, N_i. The first-order and second-order statistics are

F_i = Σ_j Σ_l φ_ijl,   S_i = Σ_j Σ_l φ_ijl φ_ijl^T.

We define the centered statistics as

F̄_i = F_i − N_i µ,   S̄_i = S_i − µ F_i^T − F_i µ^T + N_i µ µ^T.

We define the session statistics as

F_ij = Σ_{l=1}^{L_ij} φ_ijl,

where L_ij is the number of channels for the conversation ij. We define the global statistics

N = Σ_i N_i,   F = Σ_i F_i,   S = Σ_i S_i.
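As a sketch of how these statistics are accumulated in practice (the list-of-conversations data layout is an assumption, not something specified by the note):

```python
import numpy as np

def speaker_statistics(conversations, mu):
    """conversations: list with one array per session, each of shape
    (L_ij, d), holding the i-vectors of that conversation's channels."""
    phis = np.concatenate(conversations, axis=0)
    N_i = phis.shape[0]                      # zero-order statistic
    F_i = phis.sum(axis=0)                   # first-order statistic
    S_i = phis.T @ phis                      # second-order statistic
    # Centered statistics (i-vectors shifted by mu)
    F_c = F_i - N_i * mu
    S_c = S_i - np.outer(mu, F_i) - np.outer(F_i, mu) + N_i * np.outer(mu, mu)
    # Session statistics: one first-order sum per conversation
    F_ij = [c.sum(axis=0) for c in conversations]
    L_ij = [c.shape[0] for c in conversations]  # channels per conversation
    return N_i, F_i, S_i, F_c, S_c, F_ij, L_ij
```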
Data conditional likelihood
The likelihood of the data given the hidden variables for speaker i is

ln P(Φ_i | y_i, X_i, M) = Σ_j Σ_l ln N(φ_ijl | µ + V y_i + U x_ij, D⁻¹).

We can write this likelihood in another form if we define the stacked factor ỹ_ij = [y_i^T x_ij^T]^T and the stacked subspace matrix Ṽ = [V U], so that µ + V y_i + U x_ij = µ + Ṽ ỹ_ij.
Posterior of the hidden variables
The posterior of the hidden variables can be decomposed into two factors:

P(Y, X | Φ, θ, M) = P(X | Y, Φ, θ, M) P(Y | Φ, θ, M).

Using equations (3) and (18), the log-posterior of x_ij given y_i is a sum of Gaussian log-densities and therefore has the form of a product of Gaussian distributions. The posterior of x_ij is thus itself Gaussian.
Posterior of y i
The marginal posterior of y_i is obtained by integrating out the channel variables. We can use Bayes' theorem to write

P(y_i | Φ_i, M) ∝ P(Φ_i | y_i, M) P(y_i | M).

Simplifying, and using equations (2), (18) and (34), the posterior is again Gaussian,

P(y_i | Φ_i, M) = N(y_i | L_i⁻¹ γ_i, L_i⁻¹),

where L_i is the posterior precision matrix and γ_i the corresponding first-order term.
3 EM algorithm

3.1 E-step
In the E-step we calculate the posterior of y and X with equation (24).

3.2 M-step ML

We maximize the EM auxiliary function Q(M). Taking equation (23), the solution can be written in terms of

K = S − 2 C Ṽ^T + Ṽ R_ỹ Ṽ^T.

Finally, we need to evaluate the expectations E_Y[ỹ_ij] and E_Y[ỹ_ij ỹ_ij^T] and compute R_ỹ and C.
M-step MD
We assume a more general prior for the hidden variables, with non-standard means and covariances. To minimize the divergence we maximize the corresponding auxiliary function. The transform (y, x) = φ(y′, x′), such that y′ and x′ have a standard prior, is given componentwise, e.g. x = µ_x + H x′. We can transform µ, V and U using that transform.
Objective function
The EM objective function is equation (52) summed for all speakers.

4 Likelihood ratio

Given a model M we can calculate the ratio of the posterior probabilities of target and non-target as shown in [1], where we have defined the plug-in likelihood ratio R(Φ_t, M). To get this ratio we need to calculate P(Φ|θ, M). Given a model M, the y_1, y_2, . . . , y_M ∈ Y are sampled independently from P(y|M). Besides, given M and a speaker i, the set Φ_i of i-vectors produced by that speaker is drawn independently from P(Φ|y_i, M). Using these independence assumptions we can write the likelihood of Φ as a product over speakers, where

K(Φ) = Π_{j=1}^{N} P(φ_j | y_0, M)

is a term that depends only on the dataset, not on θ, so it vanishes when taking the ratio and we do not need to calculate it. What we need to calculate is

Q(Φ_i) = P(y_0 | M) / P(y_0 | Φ_i, M),   (120)

and the likelihood ratio is the corresponding ratio of Q terms. Making y_0 = 0 we can use (39) and (2) to calculate Q(Φ). Given a set of training observations Φ_1 of a speaker 1 with statistics N_1 and F_1, and a set of test observations Φ_2 of a speaker 2 with statistics N_2 and F_2, to test whether speakers 1 and 2 are the same speaker the log-likelihood ratio is

ln R(Φ_t, M) = 1/2 [ − ln|L_3| + γ_3^T L_3⁻¹ γ_3 + ln|L_1| − γ_1^T L_1⁻¹ γ_1 + ln|L_2| − γ_2^T L_2⁻¹ γ_2 ],

where the subscript 3 refers to the pooled set of both sides. Using that γ_3 = γ_1 + γ_2, the ratio can be evaluated directly from the enrollment and test statistics.
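The final expression can be evaluated numerically once the posterior terms (γ, L) are available for the enrollment side, the test side, and the pooled set. A minimal sketch, assuming those terms are precomputed elsewhere:

```python
import numpy as np

def plda_llr(gamma1, L1, gamma2, L2, L3):
    """Log-likelihood ratio for the same- vs. different-speaker test,
    evaluated from the posterior precisions L and first-order terms
    gamma of enrollment (1), test (2) and the pooled set (3), using
    gamma3 = gamma1 + gamma2 as in the text."""
    gamma3 = gamma1 + gamma2

    def term(gamma, L):
        # ln|L| - gamma^T L^{-1} gamma
        _, logdet = np.linalg.slogdet(L)
        return logdet - gamma @ np.linalg.solve(L, gamma)

    return 0.5 * (term(gamma1, L1) + term(gamma2, L2) - term(gamma3, L3))
```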
Flood-risk mapping: contributions towards an enhanced assessment of extreme events and associated risks
Abstract. Currently, a shift from classical flood protection as an engineering task towards integrated flood risk management concepts can be observed. In this context, extreme events which exceed the design event of flood protection structures, as well as failure scenarios such as dike breaches, have to be considered more consequently. Therefore, this study aims to enhance existing methods for hazard and risk assessment for extreme events and is divided into three parts. In the first part, a regionalization approach for flood peak discharges was further developed and substantiated, especially regarding recurrence intervals of 200 to 10 000 years and a large number of small ungauged catchments. Model comparisons show that more confidence in such flood estimates for ungauged areas and very long recurrence intervals may be given than implied by statistical analysis alone. The hydraulic simulation in the second part is oriented towards hazard mapping and risk analyses covering the whole spectrum of relevant flood events. As the hydrodynamic simulation is directly coupled with a GIS, the results can be easily processed as local inundation depths for spatial risk analyses. For this, a new GIS-based software tool was developed, being presented in the third part, which enables estimations of the direct flood damage to single buildings or areas based on different established stage-damage functions. Furthermore, a new multifactorial approach for damage estimation is presented, aiming at the improvement of damage estimation on local scale by considering factors like building quality, contamination and precautionary measures. The methods and results from this study form the base for comprehensive risk analyses and flood management strategies.
Introduction
Within the framework of the Center for Disaster Management and Risk Reduction Technology (CEDIM, a cooperation between the University of Karlsruhe and the GFZ Potsdam), the project "Risk Map Germany" aims to provide improved methods and data for the quantification and mapping of risks. Here, in particular the developments concerning flood risk assessment are presented. The damages in Germany due to severe flood disasters in the last decades amount to billions of Euro. Examples are the Rhine floods in 1993 with 530 M€ and in 1995 with 280 M€, the Odra flood in 1997 with 330 M€, the Danube flood in 1999 with 412 M€, and the Elbe and Danube flood in 2002 with 11 800 M€ (Kron, 2004). The need of specific research efforts and spatial data, particularly hazard or risk maps, for an improved risk assessment and prevention on regional and local level is evident.
In this project, "hazard" is defined as the occurrence of a flood event with a defined exceedance probability. "Risk" is defined as the potential damages associated with such an event, expressed as monetary losses. It becomes clear that hazard and risk quantification depend on spatial specifications (e.g., area of interest, spatial resolution of data). With regard to flood risk, the local water level is decisive for the occurrence of damage (e.g. Smith, 1994; see below). Therefore, a high level of detail, i.e. an appropriate scale of flood maps, is a fundamental precondition for a reliable flood risk assessment. Detailed spatial information on flood hazard and vulnerability is necessary for the development of regional flood-management concepts, planning and cost-benefit analysis of flood-protection measures and, extremely important, for the preparedness and prevention strategies of individual stakeholders (communities, companies, house owners etc.). Moreover, the mapped information that an area or object is potentially endangered due to a given flood scenario directly implies legal and economical consequences such as competencies of public authorities for flood control and spatial planning, owner interests, insurance policies, etc.
In Germany, the federal states (Bundesländer) are responsible for flood management and for the generation of flood maps. Many state authorities have been working for years on the delineation of inundation zones with map scales of up to ≥1:5000 in urban areas in order to recognise the flood hazard for discrete land parcels and objects. Increasingly, public flood-hazard maps are available on internet platforms (e.g., Nordrhein-Westfalen, 2003, Rheinland-Pfalz, 2004, Sachsen, 2004, Bayern, 2005, Baden-Württemberg, 2005). In the next years, with respect to amendments in legislation, regional significance and technical possibilities, flood-hazard maps with high spatial resolutions can be expected successively for all rivers in Germany. Details of procedures and mapping techniques vary from state to state and due to local concerns (e.g., data availability, vulnerability, public funds). An overview of different approaches is published by Kleeberg (2005). For example in Baden-Württemberg, two types of flood-hazard maps will be provided for all rivers with catchment areas >10 km². The first map will show the extent of inundation zones of the 10-, 50- and 100-year event, supplemented by an "extreme event" being in the order of magnitude of a 1000-year event and, as documented, information on historical events. The second map will provide the water depths of the 100-year event. The basic requirements and typical features of this upcoming nation-wide mosaic of flood-hazard maps can be drafted as follows (compare e.g. UM Baden-Württemberg, 2005, MUNLV, 2003):

- Representation of present flood-relevant conditions (updating after significant changes)
- Representation of inundation zones for flood events of different recurrence intervals, up to generally 100 years, for large rivers 200 years (e.g., Rhine)
- Representation of inundation depths, potentially flow velocities
- Representation of extreme, historical events (exceeding the 100-year event, as available)
- Representation of flood-protection measures, potentially local hazard sources
- Level of detail for local analyses and planning purposes

Following these requirements on flood-hazard assessment and related purposes on local scale, one can rely on a set of methods for the quantification of hydrological and hydraulic parameters and their spatial intersection with digital terrain models (DTM) and land-use data. A number of investigations (e.g. Uhrich et al., 2002) showed that the quality of flood maps strongly depends on the quality of the DTM used.
Uncertainties in DTMs are more and more overcome by an increasing availability of high-resolution digital terrain models from airborne surveys (e.g., laser scanners or aerial photographs). In spite of these technical standards and advances in practice, it can be noted that flood-risk assessment remains a quite challenging task, especially regarding the uncertainties related to extreme events exceeding the design flood or to the damage due to failures of flood control measures (Apel et al., 2004; Merz et al., 2004). For example, uncertainties associated with flood frequency analyses are discussed by Merz and Thieken (2005). However, the visualisation and communication of uncertain information in hazard maps should be optimised in a way that non-experts can understand, trust and get motivated to respond to uncertain knowledge (Kämpf et al., 2005).
In contrast to hazard mapping, the assessment of damage and its visualisation as risk maps is still far from being commonly practised in Germany. Risk maps, however, help stakeholders to prioritise investments and they enable authorities and people to prepare for disasters (e.g., Takeuchi, 2001; Merz and Thieken, 2004). Good examples for risk assessments and maps are, among others, the ICPR Rhine-Atlas (ICPR, 2001), the programme of flood-hazard mapping in Baden-Württemberg (UM Baden-Württemberg, 2005), the integrated flood management conception in the Neckar river basin (IkoNE, 2002), the DFNK approach for the city of Cologne (Apel et al., 2004; Grünthal et al., 2006), and the risk assessment in England and Wales (Hall et al., 2003). Since flood risk encompasses the flood hazard and the consequences of flooding (Mileti, 1999), such analyses require an estimation of flood impacts, which is normally restricted to detrimental effects, i.e. flood losses. In contrast to the above discussed hydrological and hydraulic investigations, flood damage modelling is a field which has not received much research attention and the theoretical foundations of damage models need to be further improved (Wind et al., 1999; Thieken et al., 2005).
A central idea in flood damage estimation is the concept of damage functions. Most functions have in common that the direct monetary damage is related to the inundation depth and the type or use of the building (e.g. Smith, 1981; Krzysztofowicz and Davis, 1983; Wind et al., 1999; NRC, 2000; Green, 2003). This concept is supported by the observation of Grigg and Helweg (1975) "that houses of one type had similar depth-damage curves regardless of actual value". Such depth-damage functions, also well known as stage-damage functions, are seen as the essential building blocks upon which flood damage assessments are based and they are internationally accepted as the standard approach to assess urban flood damage (Smith, 1994).
Probably the most comprehensive approach has been the Blue Manual of Penning-Rowsell and Chatterton (1977), which contains stage-damage curves for both residential and commercial property in the UK. In Germany, most stage-damage curves are based on the most comprehensive German flood damage data base, HOWAS, that was arranged by the Working Committee of the German Federal States' Water Resources Administration (LAWA) (Buck and Merkel, 1999; Merz et al., 2004). But recent studies have shown that stage-damage functions may have a large uncertainty (e.g. Merz et al., 2004).
The investigation concept within the CEDIM working group "flood risk" is based on the main goal to improve the flood-risk assessment on local scale in different modules of the quantification procedure. Special attention is given to extreme events. This was realised in pilot areas in Baden-Württemberg, where a good data and model basis is given.
Following this modular concept, the quantification procedure can be divided in three major steps:

- Regional estimation of flood discharges (basin- and site-specific hydrological loads)
- Estimation of flow characteristics in potential inundation areas (local hydraulic impacts)
- Estimation of the resulting damages (area- or object-specific risk assessment)

An overview of suitable approaches for these steps of hazard and vulnerability assessments is given in Table 1. The left column states minimum requirements on data and methods for a standard quality of hazard and risk assessment on local scale in Germany. The right column lists more sophisticated approaches which require more spatial information and more complex calculations up to fully dynamic simulations of unobserved extreme flood situations. Parts highlighted in bold letters are further addressed by the present study, without giving priority to any of the listed possibilities. The present paper is structured in Sects. 2, 3 and 4 according to the above mentioned steps, respectively.
2 Estimation of extreme flood events and their probabilities
Basis and objectives
The estimation of flood frequencies is well known as a key task in flood hazard assessment. Actually, the availability of reliable and spatially distributed event parameters for extreme floods is a fundamental prerequisite for any comprehensive flood-risk management. For instance, peak discharges for recurrence intervals up to T=100 years (corresponding to an exceedance probability of one percent per year) are commonly accounted for in flood mapping and flood-protection planning. Peak discharges for larger events with recurrence intervals up to 1000 or even 10 000 years are required for dam safety analyses (cf. DIN 19 700), hazard mapping for extreme cases, related risk analyses and emergency planning purposes. For example in Baden-Württemberg, a guideline gives specific technical recommendations for the dimensioning of flood-protection measures (LfU, 2005a). These recommendations already include the preventative consideration of potential impacts of future climate change on peak discharges by a so-called "climate change factor" proposed by Ihringer (2004) based on statistical analyses of downscaled regional climate-model outputs.
On the other hand, it is important to bear in mind that flood-estimation procedures in practice mainly rely on observed discharge data. In the field of hydrology, it has long been recognised that many annual maximum flood series are too short to allow a reliable estimation of extreme events, leading to the conclusion that instead of developing new methodologies for flood-frequency analysis, the comparison of existing methods and the search for other sources of information have to be intensified (e.g., Bobée et al., 1993). This is especially true for small catchment areas where the availability of flow data is generally worse (numerous ungauged areas or rather short periods of records). According to this, the need of regional analyses to compensate the lack of temporal data and to introduce a spatial dimension in flood estimates is evident. Beside flood-frequency analysis, regional analyses can help to identify physical or meteorological catchment characteristics that cause similarity in flood response. Considerable uncertainties, although being an intrinsic part of extreme value estimations, can be managed by the complementary use of different methods (e.g., flood-frequency analysis and rainfall-runoff models). In fact, a stepwise approximation from different directions, involving both statistical theory as well as knowledge of catchment characteristics and flood processes, seems to be the most viable way to build confidence in flood estimates, to identify and exclude implausible values and thus to reduce uncertainties to a smaller bandwidth.
Hence, the specific goal here is to discuss a regionalization method for state-wide flood probabilities in Baden-Württemberg. Emphasis is given to comparisons among models for recurrence intervals from 200 to 10 000 years.
Method and data
The first regionalization methods for flood estimates in Baden-Württemberg were developed in the 1980's (Lutz, 1984). In 1999, the following regionalization approach for the mean annual peak discharge (MHQ) and peak discharges (HQ_T) for recurrence intervals T from 2 to 100 years, partially 200 years, was published (LfU, 1999), followed by an updated version on CD in 2001 (LfU, 2001). The approach is based on flood frequency analyses at 335 gauges which cover catchment areas from less than 10 km² (∼7% of all gauges) to more than 1000 km² (∼7%) and periods of records varying from a minimum of 10 to more than 100 years (average 45 years). At large, these statistical analyses at single gauges involved 12 types of theoretical cumulative distribution functions (cdfs); the parameter estimation was done using the method of moments and the method of maximum likelihood. The final selection of cdfs was supported by regional comparisons (e.g., for neighboured gauges) to avoid inconsistencies especially for higher recurrence intervals. Eight catchment parameters, especially h_NG and LF, were identified as significant for the peak discharge. LF is an empirical factor and represents all kinds of regional influences, particularly geological characteristics. Together, they are taken into account in the following multiple linear regression equation, which is used as the approach for flood quantile estimation (i.e. MHQ and HQ_T), especially for ungauged sites (a log-linear form, consistent with the logarithmic analysis mentioned below):

ln Y = C_0 + C_1 ln X_1 + C_2 ln X_2 + ... + C_8 ln X_8,   (1)

with:
Y, Y_T   dependent variable: Y = MHq for regionalization of MHQ; Y_T = Hq_T / MHq for regionalization of HQ_T
MHq      mean annual peak discharge per unit area [m³/(s·km²)]
Hq_T     annual peak discharge per unit area [m³/(s·km²)] of recurrence interval T
X_1...X_8  the eight catchment parameters
C_0...C_8  regression coefficients

The regression coefficients C_0...C_8 are estimated based on the gauge-specific flood estimates and the above mentioned spatial data sets (available in LfU, 2005b) using the method of least squares. The application of this approach requires two steps. First, MHq is estimated using Eq. (1). Subsequently, HQ_T in unit [m³/s] is determined using Y_T = Hq_T / MHq in Eqs. (1) and (2):

HQ_T = Y_T · MHq · A_E,   (2)

with A_E the catchment area [km²]. Recently, this approach was extended to recurrence intervals of 200 to 10 000 years using a selection of 249 gauges and applied to a more detailed spatial data set (6200 locations of the river network, LfU, 2005b). The selection of gauges was done considering the record length and the quality of the flow series in order to achieve more reliable model adjustments for low-frequency events. The present regionalization approach thus consists of 13 regression equations, i.e. one equation for MHQ and each HQ_T for T from 2 to 10 000 years. The coefficients (C_0...C_8) of these equations are fully documented in LfU (2005b), at which the corresponding coefficients of determination are R² > 0.99 for all single recurrence intervals (logarithmic analysis). As Fig. 1 exemplifies for C_7 and C_8 (compare Eq. 1), the coefficients show a homogeneous progression over the whole spectrum of recurrence intervals, although they are estimated separately for each recurrence interval. To enable user-specific estimates, the complete spatial data sets and a calculation tool for the regionalization approach are integrated in a geographical information system (LfU, 2005b), which is distributed as stand-alone software to local authorities and planning companies. By these means, regionalized MHQ and HQ_T are provided for any user-defined location of the river network in Baden-Württemberg, completed by analogous information at 375 gauges and longitudinal profiles for 163 major rivers. Furthermore, the extension of the regionalization approach to very high recurrence intervals supports the ongoing state-wide elaboration of flood hazard maps and regional dam safety analyses.
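As an illustration of the two-step application, the sketch below assumes the log-linear regression form reconstructed above; the coefficients and catchment descriptors are placeholders, not the published values (those are documented in LfU, 2005b):

```python
import numpy as np

def regionalized_hq(x_log, c_mhq, c_yt, area_km2):
    """Two-step flood quantile estimate for an ungauged site.
    x_log   : the 8 log-transformed catchment parameters (placeholders)
    c_mhq   : coefficients C0..C8 of the MHq regression
    c_yt    : coefficients C0..C8 of the Y_T = HqT/MHq regression
              for the recurrence interval T of interest
    area_km2: catchment area, to convert per-unit-area discharge."""
    x_aug = np.concatenate(([1.0], x_log))  # prepend the intercept term
    mhq = np.exp(c_mhq @ x_aug)             # step 1: MHq [m^3/(s km^2)]
    y_t = np.exp(c_yt @ x_aug)              # step 2: growth factor Y_T
    return y_t * mhq * area_km2             # HQ_T [m^3/s], cf. Eq. (2)
```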
Model results, comparisons and discussion
A comparison of at-site and regional flood-frequency analysis is exemplified in Fig. 2. For this specific gauge it may be noticed that the regionalization approach is able to reproduce the shape of the statistical distribution. At the same time, the 95%-confidence interval for the statistical distribution (dashed lines) is indicating substantial uncertainties, especially in the area of extrapolation (for this sample: about 25% for all HQ_T with T ≥ 100 years). Summing up, the deviations between regionalization and flood-frequency analysis vary among the mentioned 249 gauges as presented in Fig. 3. The mean deviation is <2.5% at approx. 40% of the gauges, <7.5% at approx. 75% and <12.5% at approx. 90%. The deviation is >20% at approx. 3% of the gauges, where generally human activities (e.g., urban drainage systems) or karst conditions are present. Figure 4 illustrates a sample map of the regionalization approach for Hq_1000 in Baden-Württemberg. According to this sample, the highest peak discharges per unit area occur in the mountainous regions of the Black Forest (Upper Rhine Basin) and the upper Neckar Basin.
To substantiate the regionalization approach especially for small ungauged catchment areas, the results can be compared to outcomes of rainfall-runoff (RR) models, which are supposed to build on a better representation of catchment characteristics. This was done here for the Fils catchment, a tributary to the Neckar River (707 km², see Fig. 4), where a RR-model (software see Ihringer, 1999) is available from a hydrological study on local flood problems. Within the RR-model, the catchment is represented by 907 subareas and 1501 nodes for the drainage network, considering a total urban area of 92 km², 331 stormwater holding tanks of urban drainage systems and 7 flood-retention basins. As input of the RR-model, rainfall statistics provided by the German Weather Service (DWD, 1997) were used; these rainfall statistics cover recurrence intervals from 1 to 100 years for different duration classes from 0.5 to 72 h. For the assessment of higher peak discharges, the mean precipitation depths for the different duration classes were extrapolated. The maximum peak of a set representing the relevant spectrum of precipitation characteristics was chosen to estimate the 1000-year quantile according to this approach. This value is compared to the quantile estimated from the regionalization approach based on observed flood peaks. Figure 5 shows a comparison of 1000-year peak discharges of both models (HQ_1000 from regionalization and RR). According to the spatial discretization of the regionalization approach, 265 locations of the drainage network are plotted. The axes are logarithmic in order to visualise the small values better. It can be concluded that the results match fairly well, with tendencies to higher deviations between both approaches in smaller areas (especially for HQ_1000 < 10 m³/s). The variation around the bisecting line, standing for a perfect agreement of both models, can be understood as residual uncertainty of the mutual application of both models for these specific catchment areas. The deviations may be further discussed taking more knowledge on local characteristics into account that was not yet used in one or both models (e.g., outlets of urban drainage systems). Mathematically, the deviations between both models amount to <7.5% for 66% of all 265 plotted locations, <12.5% for 82% and >30% for 3% of these locations. The latter belong to smaller areas respectively peak discharges (e.g., 6.96 m³/s from regionalization versus 11.02 m³/s from RR, Fig. 5), where the RR-model, in general, may recognise local influences better. Therefore, more confidence may be given to the regionalized HQ_T than implied by statistical analysis alone (Fig. 2).
In view of the needs of practitioners, a coherent and robust approach for regional flood estimates is thus available and broadly established in a state-wide sense. Uncertainties of this approach concerning local distinctions call for hydrological justifications of the plausibility of flood estimates on local scale. The model comparison strategy seems to be the logical way for model validation and practically the only way to reduce uncertainties effectively in areas where the availability of flow records is scarce. This is valid not only in a spatial sense (ungauged areas) but also for the extrapolation to very long recurrence intervals. Apart from its practical use for regional flood estimates, the regionalization approach leads from at-site flood frequency analysis to distributed hydrologic modelling of flood events, enabling a vice-versa review and mutual enhancement of these methods.
Basis and objectives
To quantify flood hazard and risk in urban areas or at individual locations, flood discharges (e.g., HQ_100) have to be transformed into hydraulic parameters like water levels, inundation depths or flow velocities by means of hydrodynamic-numerical (HN) models. In many cases, when the flow patterns in a given river section are characterised by compact and coherent streamlines, 1-dimensional (1-D) HN-models are considered as adequate for the estimation of flood-water levels and delineation of inundation zones (e.g., Baden-Württemberg, 2005). In cases with more complex river geometries and flow patterns (e.g., at river confluences or other complex flow conditions), 2-dimensional (2-D) models are used for a spatially differentiated hydraulic analysis, especially when local parameters like flow direction, flow velocity, shear stress, etc. are requested. Depending on the intended purposes, both types of models (1-D, 2-D) may be applied for stationary flow conditions (e.g., hazard assessment for a certain HQ_T) or unsteady flow conditions (e.g., for impact analyses of dike failures).
At the Neckar river (see Fig. 4), the pilot area of this part of the study, a complex flood-information system has been set up since the late 1990's (Oberle et al., 2000), consisting of a series of 1-D and 2-D HN-models which are interactively connected with a geographical information system (GIS). This system enables the simulation of different flood scenarios in order to evaluate, for example, effects of river engineering measures on flood waves. Through its GIS-interface, the hydraulic results can be superimposed with a high-resolution DTM (grid size: 1×1 m) to determine inundation zones and their boundaries. The DTM is based on elevation data from different data sources, i.e. terrestrial and airborne surveys. Apart from topographical information, flood-relevant spatial data like flood marks, flood impact areas, retention zones and legally defined flood areas are integrated in the GIS. Linkups to aerial photographs of recent flood events complete the volume of spatial data sets.
With respect to the main target parameters of flood-risk analysis and mapping (water levels, inundation zones/depths) and to the flow characteristics along the Neckar river, a generally 1-dimensional HN procedure was chosen. The choice of this procedure was supported by the fact that the handling of the system and the computing time should match the size of the study area (approx. 220 river kilometres) and the goal to install the system as an operational tool for daily working practice in the water management authorities. Finally, the calculation of a flood event and the visualisation of inundation depths in the GIS only takes minutes with this system, so that analyses can also be realised based on actual flood forecasts. Some river sections with more complex flow conditions (e.g., tributary mouths) could be assessed only insufficiently by means of a 1-dimensional approach. Here, local 2-D HN-models were additionally applied. However, a stationary calculation on the base of a 2-dimensional HN procedure requires several hours even using a powerful computer.
The hydrodynamic method of the above mentioned 1-dimensional procedure is based on the solution of the Saint-Venant equations by an implicit difference scheme (Preissmann scheme, compare Cunge et al., 1980). Under the normal flow conditions of the Neckar river, this approach is valid and very efficient even for large river sections with respect to data handling, model build-up, model calibration and validation as well as sensitivity analyses and, finally, studies of variants. The functionality of the system includes modelling schemes for looped and meshed river systems as well as for river-regulating structures (e.g. weirs, groins, water power plants). The system geometry of the HN-model, i.e. the discharge area of the main channel and the floodplains, is represented by modified cross sections. The model calibration is done by comparing calculated water levels with surveyed ones. In most sections, water level measurements of different recent flood events are available, thus a calibration and validation for a spectrum of (flood) discharges is possible. A detailed description of the system is given by Oberle (2004).
Areas that contribute to the retention volume of the Neckar river during a flood event are taken into account by a function of storage capacity depending upon the water level. This function can be determined from the digital terrain model by means of several GIS functionalities and can be verified by comparing calculated with surveyed flood hydrographs.
Hydraulic modelling of extreme floods
In most cases, HN-models are applied to documented flood events from the last decades (in order to assess the present hydraulic conditions) or to statistical flood events with recurrence intervals up to 100 or 200 years (e.g., for delineation of inundation zones or as design events for flood protection measures). With regard to the increasing process complexity, for instance in the case of overtopping or even destruction of a flood protection structure, and to the lack of measurements for the calibration and validation of model parameters for such cases, the application of HN-models to larger floods is rarely practised. However, despite the uncertainties, it is necessary to apply HN-models to floods that exceed the 100- or 200-year level, as they are the most relevant situations in terms of residual risks, causing severe damages and fatalities. In particular with respect to residual risks, it is obvious that model parameters should also be valid for extreme events. Only the reflection of all physically plausible hazardous situations, from the occurrence of first inundations to the maximum possible water levels, yields a comprehensive hazard and risk assessment. This applies equally to flood situations below the design event, as required e.g. for cost-benefit analyses of protection measures or for the assessment of residual risks due to other failures of technical or non-technical measures (e.g., late installation of mobile protection elements).
Often, historical flood marks indicate much higher water levels than the current flood protection level and thus should serve as realistic reference scenarios for extreme events. In the upper part of the Neckar, the flood with the highest ever recorded water levels occurred in 1824. The water marks can be found at several buildings in flooded communities, giving impressions of the severity of historic floods. They can be taken into account for all flood-related planning. With adapted HN-models (discharge-relevant areas, roughness coefficients, etc.) it is possible to assess whether similar flood water levels could appear in the present situation.
The present HN-calculations at the Neckar river have confirmed that the historical event of 1824 was much higher than today's design flood. Figure 6 shows the calculated maximum water levels of the Neckar river for the 100-year flood (HQ_100) and the historical flood of 1824 (under actual hydraulic conditions). For example, around the community of Offenau, 98 km upstream of the confluence of the Neckar River with the Rhine River, the historical water level of 1824 was approximately 2.5 m higher than the dikes that have been built for a 100-year flood. It has to be emphasised that the consideration of extreme historical events can not only support flood awareness as realised scenarios (under historical conditions), but also be used as reference for the analysis of potential extreme cases under present conditions. In this regard, the intention here was not to reconstruct historical hydraulic conditions or to verify historical information in terms of peak discharges. The results shown can help, for example, to assess the probability of flood events that cause comparable water levels in the actual situation. In terms of a reconstruction of historical discharges, a further investigation on historical hydraulic boundary conditions is required (Oberle, 2004). However, due to the limited historical data availability and quality, major uncertainties are expected.
Hazard mapping
The above presented hydrological and hydraulic models, i.e. the regionalization approach for the estimation of extreme events (HQ_T) as well as the GIS-based flood information system for the Neckar river, served as the basis for the generation of hazard maps with prototype character in a state-wide sense. For example, hazard maps for the lower Neckar river (Figs. 4 and 6) are published on the internet platform (Baden-Württemberg, 2005).
Basis and objectives
Based on the knowledge of accumulated values in the areas at risk and relationships between event parameters and resulting damage, flood risks can be identified and quantified, i.e. expected damages for a given flood scenario can be calculated. This information about flood risk for individual buildings, settlement areas and river basins is indispensable to inform the population and stakeholders about the local flood risk, for planning of flood control measures and for benefit-cost analyses of these measures.
The comprehensive determination of flood damage involves both direct and indirect damage. Direct damage is damage which occurs due to the physical contact of flood water with human beings, properties or any objects. Indirect damage is damage which is induced by the direct impact, but occurs, in space or time, outside the flood event. Examples are disruption of traffic, trade and public services. Usually, both types are further classified into tangible and intangible damage, depending on whether or not these losses can be assessed in monetary values (Smith and Ward, 1998). Although it is acknowledged that direct intangible damage or indirect damage play an important or even dominating role in evaluating flood impacts (FEMA, 1998; Penning-Rowsell and Green, 2000), the largest part of the literature on flood damage concerns direct tangible damage (Merz and Thieken, 2004). The present study is limited to direct monetary flood damage to buildings and contents of private households.
As outlined above, stage-damage functions for different building types or building uses are an internationally accepted standard approach for flood damage estimation. While the outcome of most stage-damage functions is the absolute monetary loss to a building, some approaches provide relative depth-damage functions, determining the damage e.g. in percentage of the building value (e.g. Dutta et al., 2003). If these functions are used to estimate the loss due to a given flood scenario, property values have to be predetermined (Kleist et al., 2004, 2006). However, using these functions, one has to be aware that the damage estimation is generally associated with large uncertainties, as recent studies asserted (Merz et al., 2004). One approach to reduce the uncertainty connected with stage-damage functions is their specific adjustment to the area of interest (Buck and Merkel, 1999). This strategy was followed here, supported by intensive on-site investigations of the building structure in some pilot areas along the Neckar river.
Recent flood events have shown that during slowly rising river floods the maximum water level during the flood event is responsible for the resulting damage. In these cases, the gradient of the flood wave is small and for this reason there are no damaging effects due to flow velocity impacts. Major damages are caused by wetting of contents and building structure in the cellar and the ground floor. This does not apply to flash floods, e.g. in mountainous areas where, due to high flow velocity, buildings may collapse partly or totally. Therefore, it is obvious that flood damage depends, in addition to building type and water depth, on many factors which are not considered using stage-damage functions. One factor is the flow velocity, but there are also others like flood duration or contamination (Smith, 1994; Penning-Rowsell et al., 1994; USACE, 1996). Although a few studies give some quantitative hints about the influence of some of the factors (McBean et al., 1988; Smith, 1994; Wind et al., 1999; Penning-Rowsell and Green, 2000; ICPR, 2002; Kreibich et al., 2005), there is no comprehensive approach to consider these factors in a loss-estimation model. Using actual flood damage data from the 2002 flood in Germany, we followed this idea here and developed a multifactorial approach for damage estimation.
The flood-damage estimation can be undertaken on different levels of spatial differentiation:

- On local scale, the damages can be estimated based on spatial data and stage-damage functions for individual buildings or land parcels. In Germany, commonly the Automated Real Estate Map (ALK) is used for these assessments. The ALK data show the base area of the single buildings and give their specific use (e.g. residential building, commercial building, stable, garage).

- On a more aggregated level, the approach can be based on statistical information about population, added values, business statistics or capital assets for land-use units. These values are published yearly by responsible state authorities (statistical offices). Commonly, data from the Authoritative Topographic-Cartographic Information System (ATKIS) is used for this approach in Germany. The ATKIS data differentiate more than 100 types of land-use (e.g. residential area, power plant, sports facilities).

- Large-scale analyses may be carried out for larger land-use units, like communities or ZIP-code areas, considering that they may be only partially flooded. These analyses are often based on the CORINE land cover data (Coordinated Information on the European Environment). The CORINE data differentiate 45 different types of land-use (e.g. continuous urban fabric, industrial or commercial units, agro-forestry areas).
During the last years, the computational power increased in a way that today flood damage analyses even for larger river courses can be undertaken with a high level of detail. In this context, the question of the spatial scale of damage analysis is moving from limitations concerning the area size to limitations concerning the quality, respectively the level of detail, of available spatial data sets.
4.2 Flood damage estimation on local scale based on stage-damage curves
GIS-based damage analysis (tool)
As discussed above, it is commonly required in flood-risk assessment to locate accessible information about hazard and vulnerability at a high spatial resolution (e.g. for cost-benefit analyses, for local protection measures, rating of risks for insurance purposes). In view of these practical requirements, a GIS-based tool for damage estimation was developed in the present project. This tool supplements the above mentioned flood information system at the Neckar river, i.e. it builds directly on the water level information for individual endangered objects based on hydrodynamic calculations.
The GIS-based tool for damage estimation on local scale uses the following procedure.
-Selection of the project area (spatial, postcodes or areas of communities).
-Identification and categorization of each building in the project area (based on ALK data).

-Estimation of the flood-sill for each structure (lowest damaging water level).
-Estimation of the ground-floor elevation (floor above the cellar).
-Estimation of the values for building-structure and contents (fixed/mobile inventory).
-Estimation of the stage-damage-functions, differentiated for different types of buildings, cellar/floor, building structure/contents.
-Calculation of the water-level for each object in the area.
-Estimation of the damages to buildings and contents for different water-levels based upon the type and use of each building.
The tool provides the selection of the project area on the base of different spatial or administrative areas: barrages, communities or postcodes. The area of interest or spatial objects can be selected from tables or as graphical selection in the GIS.
For the damage estimation, the water depth close to or inside the object is the determining factor. With the HN-modelling in connection with the digital terrain model (DTM), the water depth above the terrain is calculated. The assumption that the damaging water depth inside the object is the same as the depth over terrain is correct if the ground floor has the same elevation as the surrounding territory and if there are no protection measures. In this case, the relevant elevation of the object basis can be calculated on the base of the DTM as the mean value of the terrain altitude at the building's base. A second option in the tool is to enter the ground floor elevation and the height of the flood-sill for each single object. Thus, local object features and protection measures can be considered.
The damage estimation is based on the general assumption that the monetary damage depends on the type and use of the building. One of the basic studies was performed by Penning-Rowsell and Chatterton (1977). In the Blue Manual, stage-damage functions for residential buildings in the UK were derived for age and type of the buildings, the duration of the flood event and the social class of the inhabitants. The damages are differentiated for building fabric and contents. Other similar international studies were done by Wind et al. (1999), Smith (1994) and Parker et al. (1987). In Germany, some 3600 single damage cases for different objects are included in the HOWAS database. The damage data were collected after different flood events in Germany. Analyses by Buck and Merkel (1999) showed that for practical uses, damage estimation with a root function provides reasonable results.
Due to the fact that the absolute damage depends on a variety of factors being specific for every single building or land parcel, a meaningful damage estimation can be expected from the application of such stage-damage functions and their adaptation to individual objects or, in terms of exposure and vulnerability, uniform spatial units. For that reason, the possibility to apply different functions was implemented in this software module, where the user can choose at least one of the three following function types: 1. Linear Polygon Function, 2. Square-Root Function, or 3. Point-based Power Function.
-Linear Polygon Function
The user interface allows entering 5 pairs of variates (h_i/S_i) of water depth and damage, which are interpolated sectionally with linear functions. Between the minimum (i=1) and maximum (i=5) pair, the function can be noted as

S(h) = S_i + (S_{i+1} − S_i) · (h − h_i)/(h_{i+1} − h_i)  for h_i ≤ h ≤ h_{i+1},   (3)

where S = estimated damage, S_i, h_i = user-defined nodes of the function, h = water depth. The first pair of variates (h_1/S_1) defines the minimum water depth below which the damage is zero. The last point (h_5/S_5) sets the possible maximum damage; for water depths above, the damage stays constant. The polygon function (3) allows a simple adaptation to individual damage symptoms of different types of objects.
-Square-Root Function

In practical view, square-root stage-damage functions provide good results for damage estimation (Buck and Merkel, 1999). Therefore, a square-root function is implemented in the damage estimation tool as second function type, where the parameter b is user-defined:

S = b · √h,   (4)

with S = estimated damage, h = water depth, b = user-defined parameter.
The parameter b characterises the damage for h = 1 m. Hence, using Eq. (4), the damage progression can be described with only one parameter. For the sample damage estimation in this paper (see below), the damage functions for different building types in the project area were chosen based on the flood-damage database HOWAS (Buck and Merkel, 1999).
-Point-based Power Function
In some cases, damage does not occur until the stage rises to a threshold height in the building. For example in rooms or storeys where floors and walls are tiled, damage can be negligible until the water level affects the electrical installation (power sockets). On the other hand, the maximum damage is often obtained when the contents are submerged; a further rise of the water level does not increase the damage in a relevant manner. For these cases, a power function can be chosen in the tool, where the points of first and maximum damage as well as the exponent C determining the gradient of the function can be individually defined:
S = S_0 + (S_max − S_0) · ((h − h_0)/(h_max − h_0))^C  for h_0 ≤ h ≤ h_max,   (5)

with S = estimated damage, h = water depth, (h_0/S_0) = point of first damage, (h_max/S_max) = point of maximum damage, C = user-defined exponent.
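For illustration, the three function types can be implemented in a few lines; this is a sketch following the reconstructed Eqs. (3)-(5), and the parameter values in the example are hypothetical:

```python
import numpy as np

def polygon_damage(h, hs, ss):
    """Linear polygon function (Eq. 3): hs, ss hold the 5 user-defined
    nodes (h_i, S_i); damage is 0 below h_1, constant above h_5."""
    return np.interp(h, hs, ss, left=0.0, right=ss[-1])

def sqrt_damage(h, b):
    """Square-root function (Eq. 4): S = b * sqrt(h), so b is the
    damage at h = 1 m."""
    return b * np.sqrt(np.maximum(h, 0.0))

def power_damage(h, h0, s0, hmax, smax, c):
    """Point-based power function (Eq. 5): zero below the point of
    first damage (h0, S0), constant above (hmax, Smax)."""
    hc = np.clip(h, h0, hmax)
    s = s0 + (smax - s0) * ((hc - h0) / (hmax - h0)) ** c
    return np.where(np.asarray(h) < h0, 0.0, s)

# Hypothetical example: damage at 1.2 m water depth with b = 25 000 EUR
print(round(float(sqrt_damage(1.2, b=25000.0))))   # ~27386 EUR
```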
The creation, editing, and choice of these three functions are realised by different masks that allow the user to conveniently handle the input. For the Polygon and the Power function types, the damage can be calculated in absolute monetary units (EUR) or percentages of damage.
Before starting the calculation, a flood event must be selected. According to the coupling of the damage estimation tool to the flood information system for the Neckar river, the outcomes of the hydraulic calculations, i.e. water surfaces, can be directly used as input of the damage estimation. The implementation of the damage estimation tool in a GIS software environment is realised in four dialogue modules shown in Fig. 8.
Hence, the GIS-based damage estimation tool enables the user to assess the flood damage to single buildings in flood-prone areas and the spatial aggregation of the event-specific damage for a defined group of buildings or areas. Figure 9 shows the calculated damage values for a test community for a range of events, beginning from the flood causing the first damage up to the 1000-year event. The damage values in Fig. 9 are standardized to the 100-year event. That means, for example, that the damage caused by the 1000-year flood is approximately 2.6 times higher than the one caused by the 100-year event.
Furthermore, the tool includes functionalities to cope with cases where detailed land-use data (e.g. ALK) are not available or where the assessment could be simplified. As revealed in Table 2, it is possible to make assumptions e.g. about the number of affected houses in flood-prone areas, in order to give an overview on flood risk without explicitly calculating monetary damage. For damage calculations on a more aggregated spatial level, the values at risk can be derived from statistical data for administrative districts and related to their spatial unit (EUR/m²). In this case, the damage estimation can be delivered by spatial intersection of flood-hazard information (inundation zone) with land-use data (e.g. ATKIS) in order to calculate the extension of the inundated settlement area (see columns 4 and 5 in Table 2).
Usually, the flood-damage calculation is provided for cost-benefit analyses of flood protection measures. For this purpose, the costs of a flood-protection measure can be compared to its benefit, i.e. the avoided damage up to the design event, respectively the residual risk after the implementation of the measure, normally expressed as mean annual damage (MAD). For the above mentioned sample community, a dike designed for a 100-year event provides a significant reduction of MAD, but the residual risk due to a larger flood event still accounts for approximately 40 percent of the original value.
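For illustration, the MAD can be approximated by integrating scenario damages over their annual exceedance probabilities. The sketch below uses hypothetical damage values; only the 2.6 ratio between the 1000- and 100-year damages echoes the Fig. 9 example above:

```python
import numpy as np

def mean_annual_damage(return_periods, damages):
    """Approximate the mean annual damage (MAD) by trapezoidal
    integration of event damage over annual exceedance probability
    p = 1/T."""
    p = 1.0 / np.asarray(return_periods, dtype=float)
    d = np.asarray(damages, dtype=float)
    order = np.argsort(p)                    # integrate over increasing p
    p, d = p[order], d[order]
    return float(np.sum(np.diff(p) * (d[1:] + d[:-1]) / 2.0))

# Hypothetical scenario damages (EUR) for a community
T = [5, 10, 20, 50, 100, 1000]
D = [0.0, 0.2e6, 0.6e6, 1.2e6, 2.0e6, 5.2e6]
mad_0 = mean_annual_damage(T, D)
# A dike designed for the 100-year event avoids all damage up to HQ100:
mad_dike = mean_annual_damage(T, [0.0, 0.0, 0.0, 0.0, 0.0, 5.2e6])
print(f"MAD without dike: {mad_0:,.0f} EUR/a, residual: {mad_dike:,.0f} EUR/a")
```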
The main advantage of the presented damage estimation on a local scale is that the damage-determining factors are given both on the hazard side (being in general the water depth) as well as on the side of vulnerability (stage-damage functions for individual objects).For estimations on an aggregated spatial level, where areas of the same building type may be defined (e.g.ATKIS-data) but no information on individual objects is available, one has to make assumptions on the spatial distribution of buildings and building types.Furthermore, as the water depth in an inundated area varies in space, the definition of the damage-determining water level gets more uncertain with increasing spatial units.Thus, using stage-damage functions, one has to define the damagerelevant depth or use a statistical approach to estimate the spatial distribution.Since flood damage is also influenced by other factors besides the water depth, more knowledge about the connections between actual flood losses and damage-determining factors is needed for the improvement of damage estimation.Therefore, during April and May 2003 in a total of 1697 private households along the Elbe River, the Danube River and their tributaries, people were interviewed about the flood damage to their buildings and household contents caused by the August 2002 flood as well as about flood characteristics, precautionary measures, warning time, socio-economic variables, regional and use-specific factors.The 2002 flood was an extreme event, e.g., with a discharge return period of 150-200 years at the river Elbe in Dresden and with a return period of 200-300 years at the river Mulde in Erlln (IKSE, 2004).Detailed descriptions were published by e.g.DKKV (2003), Engel (2004), IKSE (2004).The total damage in Germany is estimated to be 11.6 billion .The most affected federal state was Saxony where the total flood damage is estimated to be 8.6 billion (BMI, 2002;SSK, 2003).In the affected areas, a building-specific random sample of households was generated, and always the person with the best knowledge about the flood damage in a household was interviewed.An interview comprised around 180 questions and lasted about 30 min.The computer-aided telephone interviews were undertaken by the SOKO-Institute, Bielefeld.Detailed descriptions of the survey were published by Kreibich et al. (2005) and Thieken et al. (2005).
Statistical analysis was undertaken with the software SPSS for Windows, version 11.5.1, and Matlab, version 7.0.1. Since a big share of the resulting data shows skewed distributions, the mean and the median are given. Significant differences between two independent groups of data were tested by the Mann-Whitney U-test; for three or more groups of data the Kruskal-Wallis H-test was applied. For all tests a significance level of p<0.05 was used.
Factors influencing the flood damage
Flood damage influencing factors can be divided into impact factors like water depth, contamination, flood duration, flow velocity and resistance factors like type of building, preventive measures, preparedness, and warning (Thieken et al., 2005).
During the extreme flood in August 2002, for example, contamination led to significantly higher damage ratios (fraction of the flood damage in relation to the total value) to buildings and contents (Fig. 10). The damage ratio of contents was increased by 93% for high contamination in comparison with no contamination. For building damage it was increased by more than 200%. During the 1999 flood in Bavaria, oil contamination on average led to a three times higher damage to buildings, in particular cases even to total loss (Deutsche Rück, 1999).
On the resistance side, private precautionary measures significantly reduce the flood loss even during an extreme flood like the one in 2002 (Fig. 11). The damage ratio of contents was reduced by 55% for very good precautionary measures in comparison with no measures. For building damage it was decreased by 63%. This positive effect of precautionary measures is noteworthy, since it is believed that these measures are mainly effective in areas with frequent flood events and low flood water levels (ICPR, 2002). An investigation of single precautionary measures revealed flood-adapted use and furnishing as the most effective measures during the extreme flood in August 2002 (Kreibich et al., 2005). They reduced the damage ratio for buildings by 46% and 53%, respectively. The damage ratio for contents was reduced by 48% due to flood-adapted use and by 53% due to flood-adapted furnishing. The International Commission for the Protection of the Rhine gives a good overview on the effects of private precautionary measures in their report "Non Structural Flood Plain Management - Measures and their Effectiveness" (ICPR, 2002). Interestingly, flow velocity was not identified as one of the main damage-influencing factors. Likewise, a comprehensive study about the main factors influencing flood damage to private households after the 2002 flood revealed that flood impact variables (e.g., water level, contamination) were the factors most influencing building as well as contents damage (Thieken et al., 2005). Flow velocity, however, influenced the damage only to a small extent. Also, during a survey about the impact of six flood characteristics on flood damage, building surveyors in the United Kingdom assessed flow velocity to be the least important factor (Soetanto and Proverbs, 2004). Since it is known that flow velocity plays a crucial role in mountainous regions, it should be further investigated whether it will be identified as a main influencing factor if the damage cases are divided in accordance with the dominating flood type (i.e. flash flood and slowly rising river flood).
* CV: coefficient of variance 4.3.3An approach for an improved damage estimation Based on the above mentioned studies about damageinfluencing factors and the finding that the more factors are specified, the lower the coefficient of variation within the data is (Büchele et al., 2004), the following multifactorial approach for damage estimation was developed.The damage data of the 1697 interviewed households after the 2002 flood was first divided into sub-samples according to the damage influencing factors water level, building type (one-family house (ofh), terraced and semi-detached houses (tsh), apartment building (ab)) and quality of building (Table 3).Since not all sub-classes for "very good quality of building (vg)" were filled, the statistics for "medium quality (m)" were calculated and a mean loading factor for all water levels was estimated for the "very good quality".Accordingly, for each flood affected building it has to be decided to which subclass it belongs to so that its probable damage can be calculated using the mean damage ratio of the respective subclass (Fig. 12).Data variability within the sub-samples (coefficients of variation (CV)), and therefore the uncertainty when applying the mean damage ratio as an estimate, were highest for shallow water levels and apartment buildings (Table 3).This might be due to large differences between buildings concerning the quality of cellar contents and the water level above which damage occurs.This threshold depends strongly on the location and shielding of cellar windows, the level of the ground floor, interior accessories etc.Generally, apartment buildings might be more heterogeneous in size and value than one-family houses.This tendency in data variability was the same for damage ratios of building and contents whereas the data variability was generally slightly
Fig. 12. Mean damage ratios of buildings and contents of all sub-samples (ofh-m: medium-quality one-family houses, ofh-vg: very good quality one-family houses, tsh-m: medium-quality terraced and semi-detached houses, tsh-vg: very good quality terraced and semi-detached houses, ab-m: medium-quality apartment buildings, ab-vg: very good quality apartment buildings). The values for the "very good quality houses (vg)" were calculated with separate loading factors for building (ofh: 1.29, tsh: 1.11, ab: 1.57) and contents damage (ofh: 1.12, tsh: 1.27, ab: 1.72).
The CVs were in the same range as those of the HOWAS database, the most comprehensive flood damage database in Germany, which has CVs of 155% and 149% for the total flood damage of private households with flooded cellars only and with flooded storeys, respectively (Merz et al., 2004).
The difference in damage ratios between the building types and qualities was smallest for the lowest water levels (Fig. 12). This is probably explained by relatively homogeneous interior fittings and objects stored in the cellars of all building types. A similar trend was also observed by McBean et al. (1988), who differentiated the three water levels −1.8 m, 0.6 m and 2.4 m. Damage ratios of contents are very similar to the ones used in a study in the Rhine catchment, and the trend of higher contents damage in one-family houses in comparison to apartment buildings is also the same (MURL, 2000). In contrast, estimates of building damage are lower in the "Rhine study", which uses linearly increasing damage ratios of buildings from 1% at a water level of 50 cm to 10% at a water level of 5 m (MURL, 2000). A minimal sketch of such a linear stage-damage function is given below.
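To make the comparison concrete, the "Rhine study" function cited above can be written as a simple linear interpolation. The Python sketch below is illustrative only; the clipping of values outside the 0.5-5 m range is our assumption and not part of MURL (2000).

    def rhine_building_damage_ratio(water_level_m):
        # Linear stage-damage function from the "Rhine study" (MURL, 2000):
        # 1% building damage at 0.5 m water level, rising linearly to 10% at 5 m.
        # Behaviour outside that range is an assumption (values are clipped).
        lo_level, hi_level = 0.5, 5.0     # water levels in m
        lo_ratio, hi_ratio = 1.0, 10.0    # damage ratios in %
        if water_level_m <= lo_level:
            return lo_ratio
        if water_level_m >= hi_level:
            return hi_ratio
        frac = (water_level_m - lo_level) / (hi_level - lo_level)
        return lo_ratio + frac * (hi_ratio - lo_ratio)

    print(rhine_building_damage_ratio(2.0))  # -> 4.0 (% damage at a 2 m water level)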
Similarly, by comparing the sub-samples of different levels of contamination and precautionary measures, loading factors for these cases were calculated (Table 4). The concept of loading or adjustment factors for flood damage curves was already developed by McBean et al. (1988), who calculated adjustment factors for flood warning, long-duration floods and floods with high velocities or ice. Unfortunately, a differentiation between the building types was not possible here due to a lack of data. Since only a very limited number of households which had undertaken precautionary measures experienced high contamination (n=21), it is suspected that precautionary measures are largely able to avoid contamination, so these cases can be neglected. A sketch of the complete multifactorial estimate is given below.
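Putting the pieces together, the multifactorial estimate amounts to looking up the mean damage ratio of the building's sub-class and multiplying by the relevant loading factors. In the Python sketch below, only the building-quality loading factors (ofh: 1.29, tsh: 1.11, ab: 1.57) are taken from Fig. 12; the mean damage ratios and the contamination/precaution factors are hypothetical placeholders, since Tables 3 and 4 are not reproduced here.

    # Multifactorial damage estimate: sub-class mean damage ratio x loading factors.
    MEAN_DAMAGE_RATIO = {                     # % building damage, hypothetical values
        ("ofh", "<0.5 m"): 2.0, ("ofh", "0.5-1 m"): 4.5,
        ("tsh", "<0.5 m"): 1.8, ("tsh", "0.5-1 m"): 4.0,
        ("ab",  "<0.5 m"): 1.5, ("ab",  "0.5-1 m"): 3.5,
    }
    QUALITY_FACTOR_VG = {"ofh": 1.29, "tsh": 1.11, "ab": 1.57}  # from Fig. 12

    def estimate_building_damage(btype, wl_class, quality="m",
                                 contamination_factor=1.0, precaution_factor=1.0):
        ratio = MEAN_DAMAGE_RATIO[(btype, wl_class)]
        if quality == "vg":
            ratio *= QUALITY_FACTOR_VG[btype]
        return ratio * contamination_factor * precaution_factor

    # Very good quality one-family house, shallow flooding, high contamination
    # (hypothetical contamination factor of 1.4):
    print(estimate_building_damage("ofh", "<0.5 m", "vg", contamination_factor=1.4))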
This approach seems promising for significantly reducing the uncertainty in damage estimation where no individual on-site investigations are possible. However, it needs further investigation and validation.
Conclusions
Flood management, and consequently flood mapping, is a key task and an ongoing development within the sphere of competency of the state authorities in Germany. To support this demanding target from the scientific side, especially aiming at a more reliable flood-risk assessment, these studies focussed on the improvement of the following methods:

1. The estimation of extreme events which exceed the design flood of flood protection measures,
2. The assessment of flood hazard and risk over the whole spectrum of possible damage-relevant flood events,
3. The damage estimation via the consideration of various building- and event-specific influences on the resulting damage.

In detail, a regionalization approach for flood peak discharges was further developed and substantiated, especially regarding recurrence intervals of 200 to 10 000 years and a large number of small ungauged catchments. The hydraulic simulation presented provides hazard mapping covering the whole spectrum of relevant flood events, with special reference to extreme historical floods, and is directly coupled with a GIS tool for flood damage assessment based on established stage-damage functions. In addition, the newly developed multifactorial approach for damage estimation considers more damage-influencing factors besides the water depth, such as building quality, contamination and precautionary measures.

Flood hazard and risk assessment is currently practised with an increasing level of detail and accuracy for more and more areas, especially in regions which are frequently or were recently affected by floods, like Baden-Württemberg. In this regard, it is on the one hand self-evident that flood hazard and risk, in particular the residual risk of flood protection measures, must be assessed at a level of detail which supports local planning and precaution. Furthermore, as hazard and vulnerability are not constant in time, the corresponding analyses and maps must be updated after significant changes with a minimum of additional expense. The presented information systems and analysis tools are a basis for these purposes, not only for our specific study areas. On the other hand, as discussed, flood hazard and risk assessment is still associated with large uncertainties, even in areas where a rather good data and model base is available (as, for example, at the Neckar river by means of the flood information system with relatively well-known hydraulic conditions and substantial spatial data). Considering that integrated flood-risk management implies decisions under uncertainty, the effectiveness of detailed risk analyses has to be critically reflected. This can be done in further pilot studies where the role of different sources of uncertainty in the overall risk assessment procedure can be analysed (e.g., Apel et al., 2004). However, the level of detail of such uncertainty and risk analyses has to be defined for the individual case, i.e. for the specific area of interest, planned protection measure, etc. For flood-hazard mapping as an ongoing task of practitioners in an area-wide sense, standard methods and simplifications must be accepted, as long as they can be justified by other aspects of a comprehensive flood-risk management (e.g., spatial consistency of methods).
Fig. 1. Progression of regression coefficients C7 and C8 for recurrence intervals T from 2 to 10 000 years.

Fig. 5. Comparison of peak discharges calculated by regionalization and RR-model: HQ1000 at 265 locations of the drainage network in the Fils catchment.

Fig. 6. Maximum water levels along the Neckar river for statistical flood events HQT (T = 10, 20, ..., 200 years) and the historical event of 1824 (HW1824), the latter as reconstructed from water-level marks. Note the location of the community of Offenau at the river Neckar (98 km upstream of the confluence with the river Rhine).

Fig. 7. Examples of function types used in the GIS-based damage analysis tool.

Fig. 8. GIS-based damage analysis tool (screenshot of the graphical user interface). The damage or the general involvement can be calculated by selecting the area of interest, land-use and event information, and damage-relevant factors.

4.3 Development of a multifactorial approach for damage estimation
4.3.1 Damage data of the extreme flood in August 2002

Fig. 9. Standardized damage to residential buildings in a test community for annual exceedance probabilities from 0.1 to 0.0001 (i.e., damage of a 100-year flood = 1; situation without flood protection measure).

Fig. 10. Damage ratios of residential buildings and contents influenced by different levels of contamination. The contamination classes "medium" and "high" take into account the type of contamination (e.g., chemicals, sewage, oil) and whether single, double or triple contaminations occurred (bars = means; dots = medians and 25-75% percentiles).

Fig. 11. Damage ratios of residential buildings and contents influenced by different levels of precautionary measures. The precaution classes "medium" and "very good" take into account the type of precaution (e.g., informational precaution, adapted use, water barriers) and how many precautionary measures have been applied (bars = means; dots = medians and 25-75% percentiles).
Table 1. Overview of methods and data for high-resolution flood-risk mapping in Germany. The left column (basic approaches) indicates minimum requirements on data and analysis expenses; the right column stands for more detailed approaches (additional requirements, only needed/possible in particular cases). Parts in bold letters are addressed in this article.

Table 2. Standardized damage to buildings and inundated areas for the test community.

Table 3. Statistical characterisation of damage ratios of buildings (upper value) and contents (lower value in brackets) of the sub-samples of "medium quality of buildings".

Table 4. Loading factors for different levels of contamination and precautionary measures.
Investigation of differences in susceptibility of Campylobacter jejuni strains to UV light-emitting diode (UV-LED) technology
Campylobacter jejuni remains a high public health priority worldwide. Ultraviolet light-emitting diode (UV-LED) technology is currently being explored to reduce Campylobacter levels in foods. However, challenges have arisen, such as differences in species and strain susceptibilities, the effects of repeated UV treatments on the bacterial genome, and the potential to promote antimicrobial cross-protection or induce biofilm formation. We investigated the susceptibility of eight C. jejuni clinical and farm isolates to UV-LED exposure. UV light at 280 nm induced different inactivation kinetics among strains, of which three showed reductions greater than 1.62 log CFU/mL, while one strain was particularly resistant to UV light, with a maximum reduction of 0.39 log CFU/mL. However, inactivation was reduced by 0.46-1.03 log CFU/mL in these three strains, and increased to 1.20 log CFU/mL in the resistant isolate, after two repeated UV cycles. Genomic changes related to UV light exposure were analysed using WGS. C. jejuni strains with altered phenotypic responses following UV exposure were also found to have changes in biofilm formation and in susceptibility to ethanol and surface cleaners.
Fig. 1. Bacterial reductions (log CFU/mL) observed in the nine C. jejuni strains before and after UV exposure at 280 nm for 0, 1, 3, 7, 9 and 11 min. Statistical differences between treatments are indicated with * (p < 0.05).

The most susceptible strain to UV light exposure showed the highest bacterial reductions (1.62 ± 0.33 log CFU/mL), followed by MF716 (1.59 ± 0.37 log CFU/mL) and MF13415 (1.51 ± 0.19 log CFU/mL). Differences in inactivation kinetics were observed between these strains when UV at 280 nm was applied. Seven out of nine strains were resilient to longer UV light treatments, such that a treatment period of 11 min did not result in significantly higher bacterial reductions than shorter treatment times (p ≥ 0.05).
Susceptibility of selected C. jejuni strains to single versus double UV treatments. The MF6671, MF13415, 5.33 AP, and NCTC 11168 strains were selected due to their differing inactivation kinetics under UV light at 280 nm, as shown in Fig. 1. To evaluate the effect of repeated UV exposure on these strains, bacterial reductions after a double UV treatment were compared with reductions from a single treatment (see Fig. 2). Surprisingly, the effectiveness of the second treatment with UV280 was significantly decreased when applied for 1 min to the MF6671 (from 1.81 to 0.78 log CFU/mL) and MF13415 (from 1.43 to 0.78 log CFU/mL) strains, and for 11 min to MF13415 (from 1.51 to 1.05 log CFU/mL) (p < 0.05). Furthermore, the opposite effect was observed for 5.33 AP, with reductions that increased from 0 to 0.49 log CFU/mL after 1 min exposure and from 1.23 to 2.35 log CFU/mL after 11 min exposure (p < 0.05). NCTC 11168 had an increase in bacterial reductions from 0.39 to 1.05 log CFU/mL after 11 min exposure (p < 0.05). Thus, repeated treatment with UV280 significantly increased the susceptibility of the latter strains. Moreover, 11 min treatments with UV280 reduced the tolerance of the C. jejuni strains subjected to two UV cycles more than 1 min treatments did (p < 0.05).
Whole-genome sequencing (WGS) analysis. WGS analysis was conducted in order to investigate potential differences in the genomes of the C. jejuni strains that displayed different inactivation kinetics before and after UV treatment. The pangenome of the 9 C. jejuni strains is presented in Fig. 3. In the core genome of these strains, 12,334 gene calls and 1356 gene clusters were found. Moreover, the total genome consisted of 2173 gene clusters and 15,414 genes. The pangenome of these strains displayed two clusters, of which the MF6671, MF13415, 5.33 AP, and C16 strains were grouped together, and likewise the NCTC 11168, A28f64, MF716, A21f105, and MF701989 strains. Origin and source of the isolated strains did not influence the clustering. Potential mutations in the genomes of UV-treated C. jejuni strains were analysed using Snippy to compare UV-treated genomes with non-treated ones. A heatmap of mutations based on SNPs and indels is presented in Fig. 4. Mutations in non-coding regions (outside CDSs) or those affecting hypothetical proteins were not considered and can be found in the Supplementary material (Supplementary Table S.1). In general, UV-induced mutations occurred regardless of the treatment time in the genomes of NCTC 11168 and 5.33 AP, and in the apt gene, encoding adenine phosphoribosyltransferase, in MF6671 and MF13415, when exposed to UV light at 280 nm for 1 or 11 min. Moreover, mutations due to SNPs, deletions, and insertions were equally observed. Each mutated gene was mapped to a KEGG pathway to investigate the potential impact of UV light on bacterial functioning and structures. NCTC 11168 was found to have the highest number of mutations, with a broad range of metabolic pathways, structures, and functions affected by UV exposure, and with an identical response despite being subjected independently to different UV exposure times (Fig. 4). An SNP mutation in the waaA gene (glycan biosynthesis and metabolism) was found in the 5.33 AP strain exposed to UV for 1 min, unlike the same strain treated for 11 min. Furthermore, other mutations in genes associated with translation, and with carbohydrate and cofactor/vitamin metabolism, were detected when 5.33 AP was subjected to both exposure times. After UV exposure, the apt gene involved in nucleotide metabolism presented a mutation in the MF6671 and MF13415 strains. Other mutations in genes related to carbohydrate and amino acid metabolism, flagellar assembly, and replication and repair were found in these two strains.
Assessment of biofilm formation in C. jejuni strains. The growth of Campylobacter biofilms was assessed in nutrient-rich (TSB) and nutrient-poor (M9) media under both aerobic and microaerobic conditions at two different temperatures (37 and 4 °C), and the results were assessed after 24 h of incubation. This protocol was repeated with cultures exposed to UV280 for 7 min. The strongest biofilm formation was observed for each C. jejuni isolate when grown in a nutrient-rich medium at 37 °C under microaerobic conditions; the results are summarised in Table 1. Of the 9 isolates, three showed an ability to form strong biofilms at low temperatures in the absence of environmental oxygen concentrations, while low nutrient availability contributed to the formation of weaker biofilms in most isolates. However, the reference C. jejuni strain NCTC 11168 and the isolates C16, MF701989, MF13415, MF6671, 5.33 AP, and a28f64 showed some moderate biofilm formation at 37 °C under microaerobic conditions in nutrient-limited (M9) medium. Under aerobic conditions at 4 °C, stronger biofilm formation was observed in the rich medium in only one isolate (a21f105), while moderate biofilm formation was observed in two further isolates (C16 and MF701989). Under the same conditions, weak biofilm formation was observed in most Campylobacter isolates at low nutrient abundance.
Isolates treated with UV light showed a reduced ability to form biofilms compared to untreated isolates across each of the conditions used in this study. Growth conditions with abundant nutrients (TSB), warmer temperatures (37 °C) and the presence of oxygen resulted in the strongest biofilm growth in UV280-treated cells, although biofilm formation capacity was still lower than in untreated cells for all strains, with the exception of isolates a28f64 and MF6671. Low growth temperature and low nutrient abundance (M9 medium) resulted in significant reductions (p < 0.05) in biofilm formation after UV treatment for all isolates, with the exception of a21f105. Treatment with UV280 significantly reduced (p < 0.05) biofilm formation under all conditions for each isolate investigated (Table 1).

As shown in Table 2, all but 3 of the 9 strains (the exceptions being C16, 5.33 AP and NCTC 11168) showed susceptibility to ethanol, domestic bleach (sodium hypochlorite), and domestic surface cleaner solutions prior to UV treatment of the cell suspensions. Moreover, the antimicrobial effect of these solutions was reduced in the majority of C. jejuni strains when exposed at 4 °C. While strains MF13415 and NCTC 11168 showed no cell activity at 4 °C, other strains including A21F105, C16, MF701989, MF716, and a28f64 showed higher resilience against the compounds studied, even at recommended working concentrations. The application of UV-LED technology improved or maintained the inactivation efficacy of these disinfectants in 5 of 9 strains. Nevertheless, isolate C16 at 42 °C showed reduced sensitivity to ethanol following UV treatment, while only the surfactant-based cleaner still showed an effect against UV-treated MF716 at 42 °C. Similarly, resistance to EtOH- and NaOCl-based cleaners was higher in UV-treated cells of MF6671, and in MF701989 to EtOH and the surface cleaner (Table 2). Increased susceptibility to EtOH in particular was also seen after UV treatment in isolates MF13415, NCTC 11168, and especially 5.33 AP, which was more susceptible to each class of antimicrobial tested. This is notable, as untreated suspensions of 5.33 AP showed greater resilience when disinfecting agents were employed. Exposure to UV light and incubation at 4 °C resulted in increased susceptibility of a21f105 to ethanol, and of MF701989 and 5.33 AP to NaOCl, but at the same time MF701989 showed increased resilience to the surface cleaner and MF6671 to all the evaluated solutions (Table 2). Interestingly, MF13415 and NCTC 11168 wild types showed no survival at 4 °C, but mutated strains were able to survive at this temperature.
Discussion
The application of UV-LED technology to reduce Campylobacter numbers in liquids, on surfaces, and in food has been investigated previously in other studies [13][14][15]. However, to the best of the authors' knowledge, differences in bacterial inactivation kinetics after UV treatment have only been evaluated by Haughton et al. 14, Haughton et al. 9, and in the current study. The former study observed that different susceptibilities among Campylobacter isolates towards UV light at 395 nm in a transparent medium were a result of the biological effect and not of any factor of light intensity attenuation 14. To evaluate the susceptibility of Campylobacter to UV-LED in our study and compare strain susceptibilities, it was necessary to modify the absorbance of the medium in order to reduce the UV light penetrability and the high decontamination effectiveness of the UV-LED technology in transparent liquid media. Compared to Haughton et al. 14, the achieved bacterial reductions in the present study were 6 log lower (Fig. 1). This may be a consequence of reducing the penetrability of the UV light in the medium, which may protect bacterial cells and favour their survival 13. In the study of Haughton et al. 9, C. jejuni suspensions in a mixture of MRD and UHT skim milk were treated with a UV lamp device at 254 nm and reductions of up to 6 log CFU/mL were observed for all 10 Campylobacter isolates, with a reduction of 3.5 log CFU/mL observed for the least susceptible strain. Although reductions were lower in our study (≤ 1.6 log CFU/mL), probably due to a higher fat content (2%) in the milk matrix or the difference in UV wavelengths, differences in inactivation were also observed in all the studied strains after UV exposure. Pangenome analysis of the 9 C. jejuni strains resulted in two clusters independent of the source and origin of the isolates. Other authors, such as Thépault et al. 16, Wilson et al. 17, and Méric et al. 18, investigated the pangenome of C. jejuni isolates with the aim of correlating their origin and source; they found this task challenging due to the high level of genotypic diversity.
The most noteworthy strains, based on the observed high variation in inactivation kinetics, were selected and subjected to two cycles of UV light treatment at 280 nm. Interestingly, the repeated treatments with UV light at 280 nm had the opposite effect on C. jejuni reductions in the studied strains compared to single UV treatments. Thus, strains that were more susceptible after single UV treatments showed increased resistance after two UV cycles, and vice versa. A previous study 21 investigated the adaptation process of Escherichia coli, Salmonella spp. and Listeria monocytogenes to UV light after 10 repeated UV cycles and observed that bacterial cells were more resilient to UV light afterwards. These authors suggested that this phenomenon may be a consequence of adaptive mutagenesis when cells are subjected to sub-lethal stress 19. Although similar observations were made for MF6671 and MF13415 after two UV cycles, the increased susceptibility to UV light found in the NCTC 11168 and 5.33 AP strains is in contrast with the former. It is important to note that MF6671 and MF13415 were the most susceptible strains to UV light, and 5.33 AP and NCTC 11168 the most resilient strains, when subjected to a single UV treatment. Therefore, a correlation between both effects may be possible. However, insufficient information is currently available in the scientific literature to reach any conclusions. Dissimilarities detected in the alignments of UV-treated isolates may have resulted from induced missense mutations in the bacterial genome. To verify this, a Snippy analysis was conducted comparing the genomes of UV-treated with non-treated strains. Strains NCTC 11168 and 5.33 AP, which were more susceptible after two UV cycles, presented mutations in genes associated with signal transduction and translation. In contrast, mutations in the genes fliP and fliR (encoding the flagellar biosynthetic proteins FliP and FliR), which are linked to motility and host colonisation 20, were observed in strains which were more resilient to UV light after two UV cycles. These authors suggested that such reversible mutations are an adaptive mechanism to maintain genome stability (genome robustness) in Campylobacter spp. in response to stress factors like UV 20. A mutation in the fdtA gene (encoding TDP-4-oxo-6-deoxy-alpha-d-glucose-3,4-oxoisomerase) was also identified in our study, which has been linked to adhesion and colonisation in E. coli 21; however, the function of this gene has not been described in C. jejuni before. Furthermore, mutations in the purF (encoding amidophosphoribosyltransferase) and apt genes found in the least susceptible strains of this study are evidenced to be associated with a novel adaptive mechanism of C. jejuni to increase its probability of survival, based on promoting the genetic heterogeneity of the bacterial population 22. Lastly, a mutation in the ung gene, encoding uracil-DNA glycosylase, observed in this study was also previously detected in other studies 23,24. Although this gene is associated with initiation of the base excision repair (BER) pathway, a mechanism induced by UV stress, Gaasbeek et al.

Table 1. Biofilm formation by Campylobacter jejuni isolates under varied conditions of temperature, oxygen abundance and nutrient availability before and after UV light treatment. The parameters for the strength of biofilm formation were based on the logical test: X > 1, "+++++"; X > 0.8, "++++"; X > 0.6, "+++"; X > 0.3, "++"; X > 0.1, "+"; X < 0.1, "−", where X is the optical density (OD) at 600 nm.
24 and Dai et al. 23 concluded that the mutation of this gene does not promote the repair of DNA damage or recombinational repair in C. jejuni. Thus, the C. jejuni strains that were more resilient to UV presented mutations linked to survival mechanisms. For the biofilm formation analysis conducted in this study, variations in biofilm strength and presence/absence were observed in non-treated C. jejuni strains under the different conditions studied (4 and 37 °C; aerobic and microaerobic; nutrient-rich and nutrient-poor media). Strain variability in biofilm formation has already been observed for other foodborne pathogens 25. These authors indicated that further research is required to evaluate bacterial biofilm formation under more realistic conditions 25. Thus, our study demonstrated strain variability of C. jejuni in biofilm formation, even under cool, microaerobic, or poor-nutrient environments. In general, UV light diminished biofilm production in most of the studied strains, with greater reductions observed in the presence of additional stresses (4 °C and poor-nutrient medium). Some of the previously mentioned mutated genes (flhA, rcsC, mreB and waaA) in NCTC 11168 and 5.33 AP may be associated with biofilm formation, since they are associated with cell motility, morphology and peptidoglycan formation [26][27][28][29]. According to Luo et al. 30, UV light technology has mainly been assessed for the inactivation of microorganisms within already-formed biofilms. Nevertheless, there are a few studies investigating UV light as a treatment to prevent biofilm formation 30. Studies investigating the use of UV to prevent biofilm formation by E. coli and Pseudomonas aeruginosa cells were successful in this regard [31][32][33]. However, the disruptive effect of UV light on the biofilm formation process may not last long 34. Bacterial cultures in the present study were incubated for only 24 h and, therefore, further investigation is required to assess this concern.

The antimicrobial activity of EtOH and the selected surface cleaners was reduced when bacteria were subjected to low temperatures. This temperature-dependent activity of biocides has been extensively demonstrated for the majority of domestic and industrial surface cleaners by several authors 35. In concordance with our study, Bakht et al. 36 observed variations in antimicrobial susceptibility to biocides, such as EtOH 70% and NaOCl 5%, in 120 P. aeruginosa strains, of which 59 were resilient to EtOH 8.75% and 33 strains to NaOCl 0.08%. In the present study, EtOH 70% and NaOCl 2% did not inactivate the 5.33 AP C. jejuni strain at 42 °C. At the same time, C16 and NCTC 11168 were resilient to a 2% concentration of NaOCl under the same conditions. A treatment with UV light prior to biocide exposure was shown to improve or maintain the antimicrobial effectiveness of the selected biocides in the majority of the strains. The combined inhibitory effect of UV light together with EtOH or NaOCl has been observed for other pathogenic bacteria including E. coli, Bacillus cereus, Cronobacter sakazakii, and S. Typhimurium, among others [37][38][39]. However, 4 C. jejuni strains showed increased tolerance to at least one of the studied biocides after UV treatment in the present study. The presence of the mutated genes apt and purF in MF6671, which was identified as a strain tolerant to EtOH- and NaOCl-based cleaners after UV exposure, may increase the stress tolerance of these bacteria.
Thus, C. jejuni cells with mutated purine biosynthesis genes (purF and apt) were more tolerant to hyperosmotic stress. This phenomenon may be caused by a cross-protection mechanism which could develop due to the mutagenic nature of UV light 40. Hartke et al. 41 studied the effect of pre-treatment with 254 nm UV on Lactococcus lactis and found that bacterial cultures increased their tolerance to 20% (v/v) ethanol, heat (52 °C) and H2O2 (15 mM). These authors suggested that an overlapping regulation pathway between UV and other stresses may be occurring 41. To the best of our knowledge, studies finding cross-protection by UV light towards common industrial and domestic biocides like EtOH and NaOCl are lacking. Future research should focus on comparing the findings of this study with transcriptomic analysis in order to better understand the effects of UV light on the bacterial genome, the induced mutations, and their linkage with cross-protection.

As described previously 9, bacterial suspensions (20 mL) were poured into Petri dishes with a liquid depth of ~6 mm and a volume capacity of 24 cm³ (height: 1.3 cm, diameter: 5.8 cm) and placed centrally in the LED chamber at a distance of 5 cm from the light source. Bacterial suspensions were treated with a UV-LED device (PearlLab Beam, AquiSense Technologies, NC, USA) at a wavelength of 280 nm for 1, 3, 5, 7, 9 and 11 min. An in-depth description of the UV-LED device has been provided previously, and the wavelength of 280 nm was selected for its high inactivation effect 15,42. Non-treated samples acted as controls. The UV280 dose was calculated by multiplying the measured fluence rate of the light (W/cm²) by the treatment time in min. The UV light fluence rate was measured using a radiometer (Opticalmeter, model ILT2400, International Light Technologies, MA, USA) and confirmed as 0.041 W/cm². UV light doses for every treatment time are provided in the extended data section.
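As a worked example of the dose calculation, the Python sketch below multiplies the reported fluence rate (0.041 W/cm²) by the exposure time. The text gives the dose in W·min/cm²; the conversion to the more common mJ/cm² (time in seconds) is added for convenience and is our assumption about the intended units.

    FLUENCE_RATE_W_PER_CM2 = 0.041  # measured with the ILT2400 radiometer

    def uv_dose(treatment_min):
        # UV280 dose = fluence rate x exposure time.
        return {
            "W*min/cm^2": FLUENCE_RATE_W_PER_CM2 * treatment_min,
            "mJ/cm^2": FLUENCE_RATE_W_PER_CM2 * 1000.0 * treatment_min * 60.0,
        }

    for t in (1, 3, 5, 7, 9, 11):
        print(t, "min ->", uv_dose(t))  # e.g. 1 min -> 2460 mJ/cm^2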
Campylobacter jejuni strain selection and preparation.
Campylobacter enumeration. Immediately after UV treatment, C. jejuni levels were determined in the suspensions of UV-treated and control samples for each strain. Serial tenfold dilutions in MRD were prepared and 0.1-mL aliquots were plated onto mCCDA plates. After incubation for 48 h at 42 °C under microaerobic conditions, bacterial colonies were enumerated and the average counts of treated and control samples were determined. Bacterial reductions were calculated by subtracting the treated-sample C. jejuni counts from the non-treated counts, expressed in log CFU per mL of suspension.
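A minimal sketch of this reduction calculation is given below; the colony counts and dilutions are hypothetical, and the 0.1 mL plated volume follows the protocol above.

    import math

    def cfu_per_ml(colonies, dilution_exponent, plated_ml=0.1):
        # Back-calculate CFU/mL: colonies counted on a plate spread with
        # `plated_ml` of a 10^-dilution_exponent serial dilution.
        return colonies * (10 ** dilution_exponent) / plated_ml

    def log_reduction(control_cfu_ml, treated_cfu_ml):
        # Reduction in log10 CFU/mL, as reported throughout this study.
        return math.log10(control_cfu_ml) - math.log10(treated_cfu_ml)

    control = cfu_per_ml(150, 4)  # hypothetical: 150 colonies on the 10^-4 plate
    treated = cfu_per_ml(47, 2)   # hypothetical: 47 colonies on the 10^-2 plate
    print(round(log_reduction(control, treated), 2))  # -> 2.5 log CFU/mL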
Repeated treatments of UV light on selected C. jejuni strains. Four C. jejuni strains were selected for further analysis due to their differing susceptibility to UV light: strains MF6671, MF13415, NCTC 11168, and 5.33 AP. Strain suspensions were prepared as detailed above, with 1 mL of suspension inoculated into 9 mL of MRD. These ~5 log CFU/mL suspensions were treated with UV280 for 6 s at a distance of 5 cm from the source in order to reduce the total Campylobacter population of all tested strains by 48-61% (enumerated as above). Colonies surviving the UV280 treatment were cultured in MHB and incubated microaerobically for 48 h at 42 °C. After incubation, suspensions were inoculated into a mixture of MRD and UHT milk, as detailed above. In this case, exposure times of 1 and 11 min with UV280 were selected to treat the milk and MRD suspensions because of their different inactivation kinetics. Non-treated samples served as controls. Enumeration of the survivors of the four C. jejuni strains was carried out for all treatment and control samples with the previously described procedure.

Raw sequences were obtained for each strain tested. Raw sequences of the test strains not treated with UV280 were obtained from a previous study conducted by Truccollo et al. 43, and the raw sequence of C. jejuni NCTC 11168 was recovered from the NCBI database (BioProject PRJNA8). Read cleaning was conducted using Trimmomatic (v0.38): adapters were removed, and reads containing more than 10% undetermined bases (N > 10%) or with a Qscore ≤ 5 in 50% of the total bases were discarded. After cleaning, the quality of the reads was evaluated with FastQC (v0.11.8) in combination with the MultiQC (v1.9) program 44,45. Before proceeding any further, identification of the strains was carried out with Kraken 2 (v2.0.7 beta) using the standard Kraken 2 database 46. Assembly of reads into contigs and scaffolds was performed using SPAdes (v3.13.0) with the --careful option. The quality of the scaffolds was assessed with QUAST (v5.1.0) and MultiQC 45,47,48. To visualise the pangenome of the studied C. jejuni strains, the anvi'o (v7.1) workflow was applied to the assembled scaffold.fasta files of the non-treated and treated strains (https://merenlab.org/2016/11/08/pangenomics-v2/; accessed on 23 November 2022) 49. These files were converted into anvi'o contig databases with the 'anvi-gen-contigs-database' program. After this step, identification of genes in scaffolds was performed with Prodigal in order to detect open reading frames, and their annotation was conducted using the NCBI's Clusters of Orthologous Groups database ('anvi-run-ncbi-cogs' program) 50,51, together with four HMM profiles provided by anvi'o via hidden Markov models ('anvi-run-hmms' program). In order to build the pangenome, similarities of the amino acid sequences were determined and compared across all genomes with NCBI blastp. A minbit heuristic of 0.5 52 was employed to eliminate weak matches between amino acid sequences, and the MCL algorithm ('anvi-pan-genome' program) 53 was used to identify clusters.
Genomes of UV-treated strains were compared with those of non-treated strains using Snippy (v4.3.6), which establishes differences based on single nucleotide polymorphisms (SNPs) and small insertions and deletions (indels), a process also known as "variant calling" 54.
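The sketch below chains the main steps of this workflow from Python via subprocess. The file names are hypothetical and the command-line options are simplified illustrations of the tools named above, not verified invocations; consult each tool's documentation before reuse.

    import subprocess

    sample = "MF6671_UV"  # hypothetical sample label; trimmed reads assumed present
    steps = [
        # Read quality control after trimming
        ["fastqc", f"{sample}_R1.trim.fastq.gz", f"{sample}_R2.trim.fastq.gz"],
        # Assembly with SPAdes in --careful mode
        ["spades.py", "--careful",
         "-1", f"{sample}_R1.trim.fastq.gz",
         "-2", f"{sample}_R2.trim.fastq.gz",
         "-o", f"{sample}_spades"],
        # Assembly quality assessment
        ["quast.py", f"{sample}_spades/scaffolds.fasta", "-o", f"{sample}_quast"],
        # Variant calling of UV-treated reads against the untreated reference
        ["snippy", "--outdir", f"{sample}_snippy",
         "--ref", "untreated_scaffolds.fasta",
         "--R1", f"{sample}_R1.trim.fastq.gz",
         "--R2", f"{sample}_R2.trim.fastq.gz"],
    ]
    for cmd in steps:
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)  # stop on the first failing step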
Biofilm assessment. C. jejuni isolates were grown overnight in Brain-Heart Infusion broth with 0.5% (v/v) defibrinated horse blood. A previously assessed UV280 treatment was used to expose these bacterial suspensions for 7 min. Subsequently, these suspensions were used for the biofilm formation assay. Non-treated samples served as controls. The isolates were aliquoted into Eppendorf tubes and centrifuged at 13,000×g for 5 min. The supernatant was removed, and the pellet was washed in sterile Ringer medium (Oxoid, Ltd., Basingstoke, UK). This was again centrifuged, and the supernatant discarded. The pellet was resuspended in Tryptic Soya broth (Oxoid, Ltd., Basingstoke, UK) and M9 medium (MP Biomedicals Germany LLC., Eschwege, Germany) and added in duplicates of 200 µL to a sterile 96-well plate. Four identical plates were prepared, each incubated under one of four conditions: 37 °C under environmental oxygen concentrations, 37 °C under microaerobic conditions, 4 °C under environmental oxygen concentrations, and 4 °C under microaerobic conditions. After 24 h, the medium was removed, and the biofilms formed were analysed using a crystal violet staining protocol 55. The parameters for the strength of biofilm formation were based on the logical test: X > 1, "+++++"; X > 0.8, "++++"; X > 0.6, "+++"; X > 0.3, "++"; X > 0.1, "+"; X < 0.1, "−", where X is the optical density (OD) at 600 nm.
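The classification rule above translates directly into a small helper; here the "H90 < 0.1" entry from the source text is read as "X < 0.1", which we assume to be a typographical error.

    def biofilm_strength(od600):
        # Map an OD600 reading to the biofilm-strength categories of Table 1.
        for threshold, label in [(1.0, "+++++"), (0.8, "++++"),
                                 (0.6, "+++"), (0.3, "++"), (0.1, "+")]:
            if od600 > threshold:
                return label
        return "-"

    print(biofilm_strength(0.72))  # -> "+++"
    print(biofilm_strength(0.05))  # -> "-"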
Biocide susceptibility assessment. Campylobacter jejuni isolates were grown overnight in Brain-Heart Infusion broth with 0.5% (v/v) defibrinated horse blood. Suspensions were treated with UV280 for 7 min; control samples were not treated. The antimicrobial resistance assessment was conducted following Balouiri et al. 56. Briefly, bacterial suspensions grown overnight in Mueller-Hinton broth were diluted to an OD600 = 0.1 and, subsequently, 1 mL of this suspension was used to inoculate molten Mueller-Hinton agar at 45 °C (5% defibrinated horse blood in Mueller-Hinton agar). Once set, 10 µL aliquots of solutions containing the working concentrations of common industrial sanitizing compounds, including 70% (v/v) ethanol (EtOH) (Sigma-Aldrich Ltd., Arklow, Ireland), domestic bleach (<5% chlorine-based bleaching agents, 2% sodium hypochlorite) (Milton®, Procter & Gamble, USA), and domestic surface cleaner (5-chloro-2-methyl-4-isothiazolin-3-one and 2-methyl-2H-isothiazol-3-one) (2Work Multi-Surface Cleaner, 2Work Supplies, Sheffield, UK), as well as 50% of these concentrations, were added to the plate with the purpose of mimicking dilution events in an industrial or home setting. After 48 h of growth at 42 °C under aerobic and microaerobic conditions, bacterial growth in the presence of these antimicrobial compounds was assessed 56.
Statistical analysis and visualisation. UV light treatments were conducted in duplicate and three independent experiments were performed (N = 6). Normality of the data was tested using the Kolmogorov-Smirnov test, and comparison of treated and control samples was conducted through factorial analysis of variance (ANOVA) for each of the C. jejuni strains. Statistical differences were identified using the Tukey post hoc test at the α = 0.05 level. The GraphPad Prism program (GraphPad Prism version 8.4.2 Inc., San Diego, CA, USA) was employed to perform the statistical analysis and create the presented graphs. Pangenome visualisation was performed and edited with the anvi'o interactive interface and the 'anvi-display-pan' program. Snippy findings were visualized.
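A sketch of this analysis chain on synthetic data is shown below, using Python with SciPy and statsmodels rather than GraphPad Prism; the log-reduction values are randomly generated for illustration only.

    import numpy as np
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    rng = np.random.default_rng(1)
    groups = {"uv_1min": rng.normal(0.8, 0.1, 6),    # hypothetical log reductions,
              "uv_7min": rng.normal(1.2, 0.1, 6),    # N = 6 per treatment
              "uv_11min": rng.normal(1.5, 0.1, 6)}

    # Kolmogorov-Smirnov normality check per group
    for name, vals in groups.items():
        print(name, stats.kstest(vals, "norm", args=(vals.mean(), vals.std())))

    # One-way ANOVA across treatments
    print(stats.f_oneway(*groups.values()))

    # Tukey post hoc comparisons at alpha = 0.05
    values = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups.keys()), 6)
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))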
Data availability
Raw sequence data of the 8 C. jejuni isolates were obtained from the dataset of BioProject ID PRJNA688841, and the raw dataset of UV-treated isolates from this study can be found under BioProject ID PRJNA906059.
Construction of a Glucose Biosensor by Immobilizing Glucose Oxidase within a Poly(o-phenylenediamine) Covered Screen-printed Electrode
The glucose biosensors were prepared by electropolymerization of the non-conductive polymer poly(o-phenylenediamine) (PPD) onto a planar screen-printed electrode. The fabrication procedure was designed to decrease the waste of expensive enzyme. The amperometric glucose response was measured by potentiostating the prepared glucose biosensors at a potential of 0.3 V with ferrocene as mediator. Results show that the obtained biosensors have a linear range up to 25 mM glucose, a fast response time (100 s) and high sensitivity (16.6 nA/mM). Also, the effects of the applied potential and the number of cyclic voltammetry sweeps for electropolymerization were systematically investigated and optimal values were determined.
INTRODUCTION
Since the pioneering work of Clark and Lyons [1], the integration of an enzyme into an electrode has attracted much interest in the development of biosensors [2][3][4][5][6]. The advantages associated with these devices are safety, simplicity, minimal sample preparation, ease of handling, economy, accuracy, precision, and high sensitivity, as well as the possibility of developing compact and portable analyzers [7,8].
In recent years, there has been growing interest in immobilizing biomolecules in electropolymerized films to develop biosensors. The electropolymerization of electrically (non-)conducting polymers, such as poly(3-aminophenol), poly(1,3-diaminobenzene), polyphenol, polyaniline, polyacetylene, polyindole or polypyrrole, is an interesting and effective procedure for preparing biosensors due to its simple preparation, easy miniaturization, and the precise localization of biomolecules [9][10][11]. Also, the thickness of the resulting film can be controlled easily [12].
The electrochemical method involves the entrapment of biomolecules in organic polymers during their electrogeneration on an electrode surface. In most studies, the electrodes were dipped into a solution containing both the enzyme and the monomer for copolymerization. One of the main disadvantages is that only a small fraction of the enzyme in solution is immobilized successfully, and most of the expensive enzyme is wasted [13,14].
In this research, a screen-printed planar electrode was used to fabricate amperometric glucose biosensors by immobilizing GOD in an electropolymerized PPD film with ferrocene as an electron-transfer mediator. The enzyme was dried on the surface of the electrode before the PPD film was formed; therefore, most of the enzyme was constrained under the formed PPD film. In addition, only 20 µL of enzyme solution was needed to cover the electrode surface, so little enzyme was wasted in this study.

Apparatus: Electropolymerizations, amperometric measurements, and cyclic voltammetry (CV) were carried out with a CHI 650A electrochemical workstation. Screen-printed electrodes were used for all electrochemical experiments. The working and counter electrodes were made of conductive carbon ink. One bare silver band was treated in 0.01 M FeCl3 solution for 10 min to fabricate a reference electrode.
Preparation of glucose biosensors:
Firstly, 20 µL of 1 mg/mL GOD and 20 µL of 0.5 mM ferrocene were applied to the surface of the electrodes. After drying, o-PD was dropped onto the electrode to cover the enzyme and ferrocene below, and the electropolymerization was then performed by voltage cycling between −0.2 and 0.8 V vs Ag/AgCl at a scan rate of 50 mV/s for a certain number of cycles. Unless otherwise indicated, 20 µL of 5 mM o-PD in PBS (pH 7.2) was used for the procedure. The resulting biosensors were rinsed with 0.01 M PBS (pH 7.2) to remove un-immobilized enzyme, and were stored at 4 °C when not in use.
Measurements:
Cyclic voltammogram measurements were carried out in 0.01 M PBS (pH 7.2). Amperometric responses of the fabricated biosensors to glucose were studied by injecting aliquots of the glucose stock solution, and the resulting oxidation currents were monitored. All measurements were carried out at room temperature. The CV curves recorded during electropolymerization are shown in Fig. 1. The initial CV curve shows two irreversible oxidation peaks at 0.35 V and 0.55 V corresponding to the polymerization of o-PD. On the successive cycles, however, the peak currents decrease, indicating that the electropolymerization is a self-limiting process and that the oxidation of o-PD produces a compact and insulating film of PPD, which prevents further deposition of polymer [15]. After the first cycle, a pair of newly formed redox peaks can be observed at −0.2 and 0 V, corresponding to the reduction and oxidation of the PPD film. Detailed information on the polymer structure and the polymerization mechanism can be found in a previous report [16].
RESULTS and DISCUSSION
The enzyme should be kept at neutral pH for optimum activity, so a buffer of pH 7.2 was chosen for the electropolymerization of PPD to cover the enzyme. Fig. 1b shows the CV of the electropolymerization of PPD with GOD present on the electrode surface. Compared with Fig. 1a, the two irreversible oxidation peaks at 0.35 V and 0.55 V shift to more positive potentials of 0.4 and 0.6 V in the presence of GOD. Also, the first oxidation peak decreases and the second oxidation peak increases due to the presence of GOD.

The response of the biosensor was then examined in the absence and presence of glucose (Fig. 2). Without glucose, the enzyme contributes no response and the GOD electrode only exhibits the quasi-reversible electrochemical behavior of the ferrocene/ferricinium redox couple (Fig. 2b). The oxidation current is increased to a large extent in the presence of glucose (Fig. 2c), which is indicative of the enzyme-dependent catalytic reduction of the ferricinium ion produced at oxidizing potential [17]. Ferrocene works well as an electron-transfer mediator between the electrode substrate and the redox center of GOD.

The effect of the applied potential on the biosensor response is shown in Fig. 3. The electrode response to glucose begins to increase at 0.1 V and becomes nearly constant at higher potentials. Normally, the substrate (O2) or product (H2O2) of the GOD enzyme reaction can be monitored for the development of glucose biosensors [18,19]. The simplest method is to detect the consumption of oxygen at negative potentials (−0.6 V vs Ag/AgCl); however, the oxygen in the sample interferes with the detection, and such biosensors are not suitable for low analyte concentrations. Another way is to detect the oxidation of H2O2, but many reductants (for example, ascorbate, bilirubin, acetaminophen, etc.) in biological liquids can be oxidized at the same potential and produce a high noise signal [20]. Soluble or immobilized mediators have therefore been developed to shuttle electrons between the active center of the enzyme and the electrode surface; ferrocene was employed in this study for the development of the glucose biosensor. Fig. 4 indicates that the current response of the biosensor increases sharply with the increase of the mediator concentration from 0.1 to 0.5 mM. The biosensor response is limited by the enzyme-mediator kinetics at low mediator concentrations and by the enzyme-substrate kinetics at high mediator concentrations. Higher ferrocene concentrations did not improve the signal response further and in fact increased the background current; therefore, the concentration of ferrocene was set at 0.5 mM for all further work.

How the thickness of the PPD film influences the final properties of the glucose biosensors was studied by changing the number of CV sweep cycles for PPD polymerization (Fig. 5). Since most of the charge for electropolymerization is consumed during the first several cycles, 2, 3, 4 and 5 cycles of CV were studied. When the scan comprises only 2 cycles, the polymer film obtained is too loose or too thin to retain a high amount of GOD. The biosensor prepared with 3 cycles of CV gives the highest sensitivity, after which the sensitivity decreases with an increasing number of cycles. This means that the non-conducting PPD film hinders the electron shuttle between GOD and the electrode surface. Considering the sensitivity of the biosensor, 4 cycles of CV for PPD polymerization were chosen.

Fig. 6. Typical response of a biosensor to 20 mM glucose at 0.3 V.
When the glucose biosensor was polarized at 0.3 V vs Ag/AgCl and a glucose solution was then added into the PBS (pH 7.2), a gradual increase in the oxidation current was observed (Fig. 6). The oxidation current reached a stable steady-state value within 100 s. The repeatability of the response of a typical glucose enzyme biosensor was investigated at a glucose concentration of 20 mM; the mean current was 231 nA with an RSD of 6.2% (n = 10). Fig. 7 shows the experimental calibration curve of the current response against glucose concentration under the experimental conditions chosen above. The linear range is 5-25 mM glucose and the sensitivity at 0.3 V is 16.6 nA/mM.
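The sensitivity is simply the slope of the calibration line. The Python sketch below fits hypothetical calibration points constructed to match the reported 16.6 nA/mM and inverts the fit to estimate an unknown sample; the sample current is a made-up value for illustration.

    import numpy as np

    # Hypothetical calibration points over the 5-25 mM linear range.
    glucose_mM = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
    current_nA = np.array([83.0, 166.0, 249.0, 332.0, 415.0])

    slope, intercept = np.polyfit(glucose_mM, current_nA, deg=1)
    print(f"sensitivity = {slope:.1f} nA/mM, intercept = {intercept:.1f} nA")

    i_sample = 200.0  # hypothetical steady-state current of an unknown sample
    print(f"estimated glucose = {(i_sample - intercept) / slope:.1f} mM")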
CONCLUSION
In this study, an electropolymerized non-conducting PPD film was used to immobilize GOD for the development of a glucose biosensor. GOD was dried on the surface of the electrode before the electropolymerization of PPD. The procedure is simple and repeatable and, in particular, decreases the waste of expensive enzyme.
Does Herd Immunity Exist in Aquatic Animals?
Viral hemorrhagic septicemia virus genotype IVb (VHSV-IVb) is presently found throughout the Laurentian Great Lakes region of North America. We recently developed a DNA vaccine preparation containing the VHSV-IVb glycoprotein (G) gene with a cytomegalovirus (CMV) promoter that proved highly efficacious in protecting muskellunge (Esox masquinongy) and three salmonid species. This study was conducted to determine whether cohabitation of VHSV-IVb immunized fishes could confer protection to non-vaccinated (i.e., naïve) fishes upon challenge. The experimental layout consisted of multiple flow-through tanks where viral exposure was achieved via shedding from VHSV-IVb experimentally infected muskellunge housed in a tank supplying water to other tanks. The mean cumulative mortality of naïve muskellunge averaged across eight trials (i.e., replicates) was significantly lower when co-occurring with immunized muskellunge than when naïve muskellunge were housed alone (36.5% when co-occurring with vaccinated muskellunge versus 80.2% when housed alone), indicating a possible protective effect based on cohabitation with vaccinated individuals. Additionally, vaccinated muskellunge when co-occurring with naïve muskellunge had significantly greater anti-VHSV antibody levels compared to vaccinated muskellunge housed alone suggesting that heightened anti-VHSV antibodies are a result of cohabitation with susceptible individuals. This finding could contribute to the considerably lower viable VHSV-IVb concentrations we detected in surviving naive muskellunge when housed with vaccinated muskellunge. Our research provides initial evidence of the occurrence of herd immunity against fish pathogens.
Introduction
Hedrich [1] introduced the concept of "herd immunity" following research involving measles outbreaks in humans. As part of that research, it was determined that epidemics declined when 68% of children under 15 years of age had developed immunity against measles [1]. Since this initial research, herd immunity and the associated critical vaccination threshold necessary to elicit this immunity have been investigated in both human and veterinary practice. The herd-immunity threshold depends on the number of secondary infections (R0), which is variable for each pathogen and environment. For example, during attempts to eradicate wild polioviruses, 100% herd immunity was accomplished with just 65% to 70% immunization coverage in North America [2], while the same vaccine regimen in South America and India resulted in only an estimated 70% herd immunity [3]. For highly pathogenic viruses, a high R0 results in a greater critical immunization threshold. In the case of measles, upwards of 92% to 95% vaccination coverage was predicted to be necessary for eradication of the disease [4]. A minimal numerical sketch of this threshold is given below. This critical threshold has been investigated for numerous terrestrial pathogens; however, little work has been conducted on aquatic pathogens apart from simple simulation exercises [5]. Whether the concept of herd immunity is even applicable in an aquatic setting is not presently known.
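For a homogeneously mixing population, the classical critical immunization threshold is pc = 1 − 1/R0 (scaled by vaccine efficacy when it is below 100%). The Python sketch below reproduces the measles-like coverage figures cited above under that standard assumption; it is not the estimation method used by the cited studies.

    def critical_vaccination_threshold(r0, efficacy=1.0):
        # Fraction of the population that must be immunized to push the
        # effective reproduction number below 1 (homogeneous mixing assumed).
        return (1.0 - 1.0 / r0) / efficacy

    # Measles-like R0 values of 12-18 give the ~92-95% coverage cited above:
    for r0 in (12, 15, 18):
        print(r0, f"{critical_vaccination_threshold(r0):.0%}")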
Given that many vaccine preparations have been developed against aquatic pathogens, it is somewhat surprising that aquatic herd immunity has received relatively little attention. Internationally, vaccines are already in use in commercial aquaculture; recently, Canada approved a DNA vaccine against infectious hematopoietic necrosis virus (IHNV) [6]. Similar DNA vaccines encoding the glycoprotein (G) gene of VHSV genotype I were found to be efficacious in conferring protection to salmonids following virus challenge [7][8][9]. Numerous studies have subsequently demonstrated the VHSV G protein to be the major target protein for neutralizing and protective antibodies [10][11][12][13]. However, it is unclear whether this immune response confers any protection to neighboring individuals.
In the early 2000s, a novel genotype (IVb) of viral hemorrhagic septicemia virus (VHSV) was isolated in the Laurentian Great Lakes region of North America [14] and found to be highly pathogenic to numerous Great Lakes fish species [15][16][17][18]. Since its first detection, VHSV-IVb has spread to each of the Great Lakes and inland waterbodies throughout the region and caused multiple mass mortality events in fish populations [19]. The number of VHSV-IVb susceptible Great Lakes species is presently 28 [20], with reports documenting variability in disease course and susceptibility among species [16,17]. Recently, a VHSV-IVb vaccine preparation was developed [21] in an effort to protect aquaculture facilities against the spread of VHSV-IVb, but also with the goal of developing a vaccination program that might be used to protect wild fish populations. The vaccine consists of a DNA plasmid containing the VHSV-IVb glycoprotein (G) gene under the control of a cytomegalovirus (CMV) promoter [21,22]. The preparation has been shown to be highly efficacious in protecting both muskellunge (Esox masquinongy) and representative Great Lakes salmonid species [22]. In muskellunge, 95% relative percent survival (RPS) was achieved following only a single administration [22]. Despite the efficacy of this VHSV-IVb vaccine, the lack of understanding as to whether a herd immunity response can be elicited in aquatic populations leads to some uncertainty as to the potential usefulness of this vaccine in combatting outbreaks of the disease in wild populations. A successful demonstration of the concept of aquatic herd immunity would present the possibility of using the extensive hatchery system within the Great Lakes to actively combat pathogens. Hatchery-propagated individuals could be immunized prior to stocking in public waters to supplement the herd immunity and establish a critical immunization threshold. To this end, the goal of this study was to examine whether aquatic herd immunity against VHSV-IVb could be demonstrated in a laboratory setting.
Results
In nearly all trials (i.e., replicates), positive shedding by infected muskellunge was detected within the first week of initiation, and in most cases then decreased to near zero or below detectable limits by weeks 3 and 4 post infection, at which point all infected muskellunge had succumbed to infection. The only exceptions to this shedding dynamic were Trial 6, where no shedding was detected during any of the sampling events, and the third trial, where shedding increased from Week 1 to Week 2. During the first two weeks of each trial, infected muskellunge in the source tanks clearly showed signs of acute VHSV-IVb infection, including extensive petechial hemorrhage along the dorsal surface, and mortalities exhibited extensive hemorrhage throughout the musculature, liver, swim bladder and renal mesentery.
Within two weeks following the initiation of each trial, signs of VHSV-IVb infection were observed in muskellunge held in the downstream tanks, particularly in muskellunge from the all naïve treatments. Numerous fish exhibited severe petechial hemorrhage, erratic swimming, and pale gills. In most trials, the observation of these morbid individuals was followed by a steep increase in mortalities. In all trials, VHSV-IVb was re-isolated from all mortalities.
We found no significant difference in shedding by infected muskellunge among the trials (F = 0.23; df = 7, 14; p-value = 0.9706). Despite this lack of a significant difference, we still chose to include the loge(x + 1)-transformed shedding of infected muskellunge in the trials as a covariate in the mixed-effect models.
Cumulative Mortality
The highest mortalities in the experiment involved naïve muskellunge housed alone, ranging from 30% to 100%, with an average cumulative mortality of 80.2% (Table 1). Conversely, vaccinated muskellunge stocked alone experienced the lowest mortality rates during the experiment, with mortality rates in all cases being less than 16.7% and an average cumulative mortality of 3.1% (RPS = 96.2%) (Table 1). The cumulative mortality of naïve muskellunge stocked with vaccinated muskellunge ranged from 0% to 100% with an overall average of 36.0% (RPS = 55.1%) (Table 1). The cumulative mortality of vaccinated muskellunge housed with naïve muskellunge ranged from 0.0% to 50.0% with an overall average of 6.0% (RPS = 92.5%) (Table 1). There was an overall significant difference among the levels of the housing combination × organism of interest interaction (F = 12.81; df = 3, 21.39; p-value < 0.0001). The estimated coefficient for shedding of infected muskellunge was 0.0619 (SE = 0.1786), which was not significantly different from zero (F = 0.12, df = 1, 7.02; p-value = 0.7388). The estimated variance for the trial effect was 1.539 (SE = 1.552), whereas the variance for the trial × housing combination × organism of interest interaction was 2.009 (SE = 1.216).
Cumulative mortality was significantly greater in naïve muskellunge than in vaccinated muskellunge when housed separately (Table 2). Based on the predicted marginal means of cumulative mortality from the fitted model, the odds of naïve muskellunge housed alone succumbing during the experiment were approximately 455 times greater than those of vaccinated muskellunge housed alone (Table 2). Cumulative mortality was also significantly greater in naïve muskellunge housed alone than in naïve muskellunge housed with vaccinated muskellunge (Table 2); the odds of naïve muskellunge housed alone succumbing were approximately 14 times greater than those of naïve muskellunge housed with vaccinated muskellunge based on the fitted model (Table 2). We did not find a significant difference in cumulative mortality between vaccinated muskellunge housed alone and vaccinated muskellunge co-occurring with naïve muskellunge (Table 2). Table 2. Pairwise comparison of cumulative mortality. Results from pairwise comparisons of cumulative mortality between tank treatments. The t-statistic, degrees of freedom, and p-value are included for each comparison. The odds ratio (OR) and upper and lower 95% confidence limits for the ORs are also shown. The ORs measure how much more likely fish in housing combination 1 would experience mortality versus individuals in housing combination 2. ORs were calculated using the predicted marginal mean cumulative mortalities from the fitted model.
Circulating Anti-VHSV Antibodies
The highest OD values were observed in vaccinated muskellunge when housed with naïve muskellunge (mean = 1.04) (Table 3), whereas the lowest OD values were observed in naïve muskellunge when housed with vaccinated muskellunge (mean = 0.11). Mean OD values for the other housing combinations ranged from 0.21 for naïve muskellunge stocked alone to 0.28 for vaccinated muskellunge stocked alone. Table 3. Anti-VHSV-IVb antibodies following exposure. OD values indicating the presence of anti-VHSV-IVb antibodies in challenged muskellunge sera collected from muskellunge kept at different housing combinations at the termination of cohabitation Trials 4, 5, 6 and 8. The mean OD values were determined using an indirect enzyme-linked immunosorbent assay (ELISA) and were calculated by averaging over survivors within individual tanks and then averaging across tanks.
[Table 3 lists, for each housing combination, the post-exposure survivors, the number of samples, and the mean OD value.]
There was an overall significant difference in OD values among the housing combination × organism of interest interaction levels (F = 20.64; df = 3, 6.84; p-value = 0.0008). The estimated coefficient for shedding of infected muskellunge was 0.103 (SE = 0.051), which was not significantly different from zero (F = 4.07, df = 1, 2.334; p-value = 0.1627). The estimated variance for the trial effect was 0.111 (SE = 0.171), whereas the variance for the trial × housing combination × organism of interest interaction was 0.011 (SE = 0.113). Pairwise comparisons indicated that the OD values for vaccinated muskellunge housed with naïve muskellunge were significantly greater than those from all other housing combination × organism of interest levels (Table 4). We did not find significant differences in OD values for any of the other pairwise comparisons (Table 4). Table 4. Pairwise comparison of anti-VHSV-IVb antibody levels. Pairwise comparisons of circulating binding anti-VHSV-IVb antibody OD values in surviving muskellunge following VHSV-IVb exposure. OD values were modeled using a mixed-effect model following log e transformation of the data. The t-statistic, degrees of freedom, and p-value are shown for each comparison.
Viable VHSV-IVb Concentrations in Survivors
Viable VHSV-IVb concentrations in surviving fish were greatest in naïve muskellunge housed alone: the average concentration was 16.8 pfu·mg−1, with VHSV-IVb detected in 10 of 21 surviving individuals (Table 5). Of the surviving naïve muskellunge housed with vaccinated muskellunge, 12 of 20 were found to be actively infected with VHSV-IVb, although in this case the average concentration was 3.0 pfu·mg−1. Viable VHSV-IVb was detected in 10 of 19 of the surviving vaccinated muskellunge housed with naïve muskellunge, with an overall average concentration of 0.7 pfu·mg−1. Meanwhile, viral concentrations were lowest in vaccinated muskellunge housed alone, with viable virus detected in only 3 of 35 individuals and a mean concentration of 0.2 pfu·mg−1. Table 5. VHSV-IVb concentrations in survivor tissues. Number of positive detections and mean viable VHSV-IVb concentrations in the posterior kidney of surviving muskellunge from cohabitation Trials 4, 5, 6 and 8. Plaques were enumerated using a VPA as previously described. The mean viral concentrations were calculated by averaging over survivors within individual tanks and then averaging across tanks. There was a significant difference in the percent positive VHSV-IVb detections among the housing combination × organism of interest interaction levels (F = 5.09; df = 3, 7.261; p-value = 0.0335). The estimated coefficient for shedding of infected muskellunge was 0.088 (SE = 0.100), which was not significantly different from zero (F = 0.77, df = 1, 3.812; p-value = 0.4307). The estimated variance for the trial effect was 0.271 (SE = 0.534), whereas the variance for the trial × housing combination × organism of interest interaction was 0.061 (SE = 0.488). Pairwise comparisons indicated that the percent positive VHSV-IVb detections for vaccinated muskellunge housed alone were significantly lower than for all other housing combination × organism of interest levels (Table 6). We did not find significant differences in percent positive detections for any of the other pairwise comparisons (Table 6). We observed no significant difference in VHSV-IVb titers among the housing combination × organism of interest interaction levels (F = 2.18; df = 3, 8.418; p-value = 0.1641). We attribute the lack of an overall difference in viral concentrations in survivors across the housing combinations to the high number of zero concentrations observed. The estimated coefficient for shedding of infected muskellunge was 0.0259 (SE = 0.08802), which was not significantly different from zero (F = 0.09, df = 1, 2.298; p-value = 0.7932). The estimated variance for the trial effect was 0.387 (SE = 0.494), whereas the variance for the trial × housing combination × organism of interest interaction was 0.333 (SE = 0.206).
Discussion
This study was designed to examine whether aquatic herd immunity against VHSV-IVb could be elicited in a laboratory setting. The initial results demonstrate that when naïve muskellunge co-occur with vaccinated muskellunge in equal abundance, the naïve muskellunge experience significant protection, with an RPS of 55.1%. These results therefore suggest that herd immunity can be elicited in an aquatic setting. The vaccinated muskellunge housed with naïve muskellunge exhibited a vigorous humoral response, with significantly higher OD values than those obtained from vaccinated muskellunge housed alone. This heightened immune response likely resulted from increased viral exposure through co-mingling with the naïve muskellunge following challenge.
Tanks containing vaccinated and naïve muskellunge, housed together or alone, were exposed to VHSV-IVb via shedding from infected muskellunge in a common-source tank, which mimics a natural course of exposure. Additionally, this method of exposure allowed viral concentrations to vary naturally, and the controlled distribution of water ensured that the different tanks were exposed to identical viral concentrations within a trial. However, the stochasticity of viral shedding between trials also accounts for the variability in mortality and RPS of naïve muskellunge housed with vaccinated muskellunge. The small number of fish per tank likely also contributed to the variability in mortality, though the use of numerous study replicates indicates a protective effect from co-mingling. Subsequent studies will undoubtedly need to increase fish sample sizes and examine additional species to support these findings.
While the herd immunity concept seems to exist in the aquatic environment, the mechanisms by which it arises will require additional investigation, as they were beyond the scope of this study. For example, pioneering work demonstrated that teleost mucosal surfaces harbor both IgM and IgT immunoglobulins [23,24], both of which can neutralize the virus in the skin and gill mucus layers of vaccinated fish. It is also possible, though not scientifically proven, that both antibody types can be shed into the water column, resulting in viral neutralization by the vaccinated individuals. Regardless of whether the immunoglobulins are bound to mucus or shed into the water, they can account for decreased viral transmission. In this fashion, simply stocking immunized individuals into an aquatic system would aid in the establishment of the critical immune threshold that is beneficial to both vaccinated (as it increases their antibody responses) and naïve (as it improves survival) fish.
Few researchers have examined the concept of herd immunity in aquatic environments, as epidemics can be difficult to visualize and quantify. Epidemics occur when a high number of susceptible individuals leads to efficient transmission or contact (i.e., a high R0 value). The R0 value is linked to a critical immunity threshold, which is the proportion of the population that must be immune or vaccinated in order to prevent further transmission of a pathogen [25]. R0 and transmission dynamics are largely uninvestigated for aquatic pathogens such as VHSV. Moreover, disease dynamics vary with virulence, host susceptibility and behavior, innate immunity, contact rates, environmental conditions, etc. In addition, for VHSV, shedding rates can be transient and differ by orders of magnitude [26]. This variability complicates the calculation of R0; however, the results of this study can be used to forgo this calculation and inform a modeling effort to assess aquatic herd immunity and the number of fish that would need to be stocked to elicit a herd immunity effect under a range of conditions, on a much larger scale than is possible in a laboratory setting.
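For reference, classical epidemiology (not derived in this study) relates R0 to the critical immunization threshold p_c, the fraction of the population that must be immune to prevent sustained transmission:

p_c = 1 − 1/R0.

For example, a pathogen with R0 = 4 would require roughly 75% of the population to be immune. Whether this relation transfers quantitatively to waterborne transmission of VHSV is an open question, and we use it here only as an illustrative benchmark.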
Experimental Fish and Care
All fish used in this study were certified disease-free in accordance with World Organisation for Animal Health (OIE) testing guidelines [27] prior to use. Two groups of juvenile muskellunge were used throughout the study. The first group was used for the first four experimental trials and was obtained 14 weeks post-hatch (average 14.2 cm (SD = 1.4), 11.9 g (SD = 3.8)) from the Chautauqua State Fish Hatchery (New York Department of Environmental Conservation, Chautauqua, NY, USA). The second group of juvenile muskellunge was used in the remaining experimental trials and was obtained 16 weeks post-hatch (average 12.7 cm (SD = 0.9), 16.1 g (SD = 3.7)) from the Wolf Lake Fish Hatchery (Michigan Department of Natural Resources, Mattawan, MI, USA). All muskellunge were fed live fathead minnows (Pimephales promelas) obtained from Anderson Farms Inc. (Lonoke, AR, USA) and certified as disease-free. An additional 60 minnows were necropsied and underwent additional testing according to the American Fisheries Society Fish Health Section [28].
All experimental fish were acclimated in a 500-L circular fiberglass tank in a continuous flow-through system with facility-chilled well water and supplemental aeration. Fish were housed, and all experiments were conducted, in the University Containment Facility (Michigan State University, East Lansing, MI, USA). Fish were fed ad libitum throughout the study, except for the first week post-viral challenge, when food was withheld. Two weeks prior to immunization, randomly selected fish were transferred and acclimated to 72-L polyethylene flow-through tanks (Pentair Aquatic Eco-Systems, Apopka, FL, USA) with supplemental aeration.
The care and use of laboratory animals followed the ethical guidelines defined by Michigan State University's (MSU) Institutional Animal Care and Use Committee (AUF 03/14-047-00).
Construction of pVHSivb-G Plasmid
The pcDNA 3.1 (+) is a commercially available vector containing the human CMV immediate-early promoter. The DNA vaccine construct containing the VHSV-IVb G gene, designated pVHSivb-G [21,22], was modeled after successful DNA vaccines against VHSV genotype I [7,31] and IHNV [32]. The construction and production of this plasmid were outsourced to Life Technologies (Carlsbad, CA, USA). In brief, an EcoRI restriction site (G/AATTC) followed by a Kozak consensus sequence terminating with the first amino acid of the complete MI03GL VHSV-IVb isolate G gene (1524 bp) was synthesized. An XbaI restriction site (T/CTAGA) was then added following the 3' termination codon. The assembled fragment was then digested using the described endonucleases and sub-cloned into the eukaryotic expression vector pcDNA 3.1(+) (Invitrogen). The plasmid was transformed into and propagated in Escherichia coli K12. Sequencing confirmed the correct glycoprotein gene sequence and orientation. The final vector, designated pVHSivb-G, was diluted to 1 mg·mL−1 in sterile phosphate buffered saline (PBS) and stored at −80 °C.
Experimental Design
This experiment involved eight replicate trials housing naïve and vaccinated muskellunge separately or in combination and exposing them to VHSV-IVb via virus shedding from infected muskellunge. We anticipated considerable variation in the measured responses of this experiment because the design mimicked a natural course of exposure to VHSV (see below); the high number of replicate trials was therefore needed to separate treatment effects from the variability across trials [33]. Vaccinated muskellunge were inoculated according to a pre-developed regime that in previous studies resulted in an average RPS of 95% [22]. Immediately prior to vaccination, vectors were thawed and diluted to 10 µg in 100 µL of sterile PBS. Individual fish were randomly allocated to treatment tanks prior to anesthetization with 0.1 g·L−1 of tricaine methanesulfonate (MS-222) (Western Chemical, Ferndale, WA, USA), buffered with 0.3 g·L−1 sodium bicarbonate. Muskellunge were vaccinated intramuscularly with 10 µg of the pVHSivb-G plasmid in the left epaxial muscle slightly posterior to the pectoral fins. Vaccinated individuals that would be housed in tanks with naïve individuals were intramuscularly marked with 9-mm passive integrated transponder (PIT) tags (Biomark© Inc., Boise, ID, USA) so that naïve and vaccinated fish could be distinguished. A pool of muskellunge was vaccinated at six-week intervals and maintained in 72-L polyethylene flow-through tanks (Pentair Aquatic Eco-Systems, Apopka, FL, USA). Following vaccination, fish were allowed to react to the antigen for 1880 degree days (20 weeks at 13 °C).
The experimental layout for this research consisted of three 72-L polyethylene flow-through tanks with supplemental aeration that all received water flow from a single common-source tank. Water initially flowed into a tank containing infected muskellunge. Naïve muskellunge obtained in the original batch from Chautauqua State Fish Hatchery were used as shedders throughout all trials. Muskellunge were intraperitoneally (IP) infected with a low dose of VHSV-IVb that previous studies indicated would elicit shedding [17,26]. Immediately prior to infection, muskellunge were anesthetized as previously described. Thawed VHSV-IVb was diluted to a concentration of 1.98 pfu in 100 µL of sterile PBS and administered via IP injection. Following infection, muskellunge were held in a 72-L polyethylene flow-through tank for up to seven days until each trial was initiated. In each replicate trial, the number of infected muskellunge initially placed into the shedder tank equaled the number of downstream tanks (i.e., three infected muskellunge for three downstream tanks). Once an infected individual succumbed, it was removed and not replaced. Additionally, if all the infected individuals succumbed, water continued to flow through the empty shedder tank.
The water from the tank containing infected individuals was then equally distributed to the downstream tanks. A total of 8 replicate trials were conducted. All trials included a tank consisting solely of naïve muskellunge, a tank consisting solely of vaccinated muskellunge, and a tank where naïve and vaccinated muskellunge were housed together. Trials differed only slightly in the initial stocking densities; tanks contained between 10 and 14 fish·tank−1 to limit density effects on survival. In co-occurring tanks, equal numbers of vaccinated and naïve muskellunge were used. The assignment of housing combinations (i.e., experimental treatments) to the tanks (i.e., experimental units) in each trial was randomly determined. Throughout all trials, in tanks where naïve and vaccinated muskellunge were housed together, the two populations were assessed separately.
One week prior to the initiation of each trial, fish from each treatment were randomly allocated into their population tanks and acclimated prior to the introduction of virus into the system. Water temperatures were maintained at 11 ± 1 °C throughout all trials using a chiller system. Following the introduction of virus into the system, the experiment was run for 28 days, during which each population was monitored for morbidity and mortality. The only exception was Trial 4, which was run for 60 days due to low mortality rates across all of the housing combinations. Moribund fish were allowed to actively shed and infect other individuals rather than being removed from their respective tanks.
Mortalities
All mortalities were necropsied, and kidney, spleen, and heart samples were aseptically collected and homogenized using a Biomaster Stomacher (Wolf Laboratories Ltd., Pocklington, UK) on high speed for 2 min. Homogenates were diluted 1:4 (w/v) with MEM, supplemented with 12 mM tris buffer (Sigma), penicillin (100 IU·mL−1), streptomycin (100 µg·mL−1) and Amphotericin B (250 µg·mL−1) (Invitrogen). Samples were centrifuged at 2700× g for 30 min at 4 °C and inoculated onto EPC monolayers. After 14 days, supernatant was removed, frozen at −80 °C, thawed, and centrifuged at 2700× g for 15 min at 4 °C. Supernatant was then re-inoculated onto a fresh EPC monolayer and incubated for 14 days before being examined for viral cytopathic effect (CPE). The presence of VHSV-IVb was confirmed using a real-time reverse transcription polymerase chain reaction (RT-PCR) assay specific for VHSV-IVb [34,35]. The cumulative mortality of the tank containing all naïve muskellunge in each trial was used to calculate the RPS of vaccinated muskellunge or of naïve muskellunge housed with vaccinated muskellunge [36]:
RPS = [1.0 − (% cumulative mortality of the treatment group / % cumulative mortality of the all-naïve group)] × 100%.
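As a worked example using the averages reported above: for naïve muskellunge housed with vaccinated muskellunge, RPS = (1 − 36.0/80.2) × 100% ≈ 55.1%, and for vaccinated muskellunge housed alone, RPS = (1 − 3.1/80.2) × 100% ≈ 96.1%, matching the values in Table 1 up to rounding.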
VHSV-Shedding
Viral shedding rates of the IP-infected muskellunge were assessed once a week during each trial for as long as individuals remained. Shedding rates were used as a covariate to explain variability in results among the trials. Shedding was assessed using a modified version of the protocol described by Kim and Faisal [26]. First, the entire flow-through system was turned off, and fish remained in their respective tanks with supplemental aeration. After 90 min, the water was mixed, a 50 mL water sample was taken from each tank, and the flow was resumed. Water samples were stored at 4 °C until processing within 24 h. For processing, samples were vortexed and centrifuged at 2700× g at 4 °C for 10 min. After centrifugation, a viral plaque assay (VPA) was conducted as previously described. After 6 days, cell monolayers were stained with crystal violet (Sigma) and 18% formaldehyde (Avantor Performance Materials Inc., Center Valley, PA, USA). Viral plaques were counted, and the theoretical shedding rate (pfu·hour−1) for the tank was determined.
Circulating Anti-VHSV Antibodies
Levels of circulating anti-VHSV antibodies in surviving muskellunge from Trials 4, 5, 6 and 8 were assessed using a newly developed indirect enzyme-linked immunosorbent assay (ELISA) [22]. At the termination of each trial, surviving muskellunge were euthanized with 0.3 g·L−1 of tricaine methanesulfonate buffered with sodium bicarbonate. Blood was collected by caudal venipuncture, stored at 4 °C for 2 h, and centrifuged at 2700× g for 10 min at 4 °C. The serum was then aliquoted and stored at −80 °C until analysis.
Microtiter assay plates were coated with 100 µL·well−1 of purified VHSV-IVb at 1 µg·mL−1 and incubated overnight (14-16 h) at 4 °C in a humid chamber. After the overnight incubation, plates were washed, and unbound sites were blocked with the addition of 430 µL·well−1 of PBS containing 5% NFDM (PBS-5%; Sigma) and incubation at 37 °C for 1 h. Heat-inactivated and diluted test and control muskellunge sera were then added to duplicate wells at 100 µL·well−1. After incubation at 25 °C for 1 h, plates were washed and 100 µL·well−1 of a 1:30,000 dilution of a mouse anti-muskellunge mAb (designated 3B10) was added and incubated at 25 °C for 1 h. Plates were washed and 100 µL of a 1:4000 dilution of a commercially available goat anti-mouse secondary antibody horseradish peroxidase (HRP) conjugate (Invitrogen) was added to each well and incubated at 25 °C for 1 h. Plates were developed by the addition of 100 µL of 0.4 mg·mL−1 o-phenylenediamine (Sigma) in phosphate citrate buffer (Sigma) containing 3 mM hydrogen peroxide (Avantor Performance Materials Inc.). The reaction proceeded for 30 min at 25 °C in the dark. Without washing, the reaction was stopped with the addition of 50 µL of 3 M sulfuric acid (H2SO4; Avantor Performance Materials Inc.). The optical density (OD) was read at 490 nm on a BioTek ELx808™ plate reader (BioTek) using Gen5 software (BioTek). The average value of blank wells was subtracted from test and control wells prior to analysis.
Viable VHSV-IVb Concentrations in Survivors from Each Population
Viable viral concentrations were assessed in the survivors of Trials 4, 5, 6 and 8. At the termination of each trial, all survivors were euthanized as previously described. A sample of the posterior kidney was collected aseptically from each individual and stored at 4 °C until processed individually within 24 h. The tissue was homogenized and diluted 1:10 (w/v) with MEM as previously described. Samples were vortexed and centrifuged at 2700× g for 30 min at 4 °C, and the supernatant was used to conduct a VPA as previously described. The number of viral plaques was used to determine viable viral concentrations (pfu·mg−1).
Data Analysis
Differences in viral shedding rates of the IP-infected muskellunge among the trials were tested using one-way analysis of variance following log e + 1 transformation of the shedding rates. Each of the response measures from the experimental fish described in Section 4.5 was analyzed using generalized linear mixed-effect models. For cumulative mortality and percent positive VHSV-IVb detections in surviving fish from the trials, a binomial distribution was assumed, whereas for the other response measures a Normal distribution was assumed following either log e (circulating antibodies) or log e + 1 (viral concentration in survivors) transformation. Each of the mixed-effect models included as fixed effects the housing combination × organism of interest (vaccinated or naïve) interaction and the log e + 1-transformed viral shedding of infected muskellunge in the respective trial. The mixed-effect models also included trial and the trial × housing combination × organism of interest interaction as random effects. If an overall significant difference among the housing combination × organism of interest interaction levels was detected for a response measure, then pre-planned comparisons of particular levels of interest were conducted (e.g., survival of naïve muskellunge when housed alone versus when housed with vaccinated muskellunge). Denominator degrees of freedom for the tests of overall differences among treatments and the pre-planned pairwise comparisons were set using the Satterthwaite approximation. All analyses were conducted using PROC GLIMMIX in SAS [37].
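For concreteness, the cumulative mortality model can be written out as follows (our notation; this is the logistic form of the binomial mixed model described above, equivalent to the PROC GLIMMIX specification):

logit(p_ijk) = µ + τ_jk + β · log e(shed_i + 1) + a_i + b_ijk,

where p_ijk is the mortality probability of a fish in trial i, housing combination j, and organism of interest k; τ_jk is the fixed housing combination × organism of interest effect; β is the fixed shedding coefficient; and a_i ~ N(0, σ²_trial) and b_ijk ~ N(0, σ²_int) are the random trial and trial × housing combination × organism of interest effects.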
Conclusions
In this study, we examined the concept of "herd immunity" in an aquatic system. The cumulative results from eight trials indicate that cohabitation of immunized muskellunge with naïve individuals indeed provides a protective effect to the naïve cohort. This represents an innovative concept that certainly warrants further investigation, such as the mechanism of this protective effect and whether protection is observed with other species and pathogens. Undoubtedly, this research and subsequent examinations will provide valuable insight into disease prevention in aquatic species.
Sketching Transformed Matrices with Applications to Natural Language Processing
Suppose we are given a large matrix $A=(a_{i,j})$ that cannot be stored in memory but is on disk or is presented in a data stream. However, we need to compute a matrix decomposition of the entrywise-transformed matrix $f(A):=(f(a_{i,j}))$ for some function $f$. Is it possible to do this in a space-efficient way? Many machine learning applications indeed need to deal with such large transformed matrices; for example, the word embedding method in NLP needs to work with the pointwise mutual information (PMI) matrix, while the entrywise transformation makes it difficult to apply known linear algebraic tools. Existing approaches for this problem either need to store the whole matrix and perform the entrywise transformation afterwards, which is space consuming or infeasible, or need to redesign the learning method, which is application specific and requires substantial remodeling. In this paper, we first propose a space-efficient sketching algorithm for computing the product of a given small matrix with the transformed matrix. It works for a general family of transformations with provable small error bounds and thus can be used as a primitive in downstream learning tasks. We then apply this primitive to a concrete application: low-rank approximation. We show that our approach obtains small error and is efficient in both space and time. We complement our theoretical results with experiments on synthetic and real data.
Introduction
Matrix datasets are ubiquitous in machine learning. However, many matrix datasets are too large to fit in computer memory in large-scale applications, e.g., image clustering [PPP06], natural language processing [MSA + 11], network analysis [MS04, GL16], and recommendation systems [KBV09]. Many techniques have been proposed to perform learning tasks on such data efficiently; see, e.g., [Mah11, Woo14, ZWSP08, GNHS11] and the references therein. However, challenges arise when the learning task is performed on an entrywise transformation of the matrix, which prevents applying many linear algebraic techniques. Furthermore, due to their large sizes, these matrices are often constructed by entrywise updates, i.e., the entries of the matrix are constructed from a stream of updates, where each update adds some value to some entry. More specifically, there is a very large underlying matrix $A$ (that cannot be stored in memory easily) whose entries are constructed by a data stream, where each item in the stream is of the form $(i, j, \Delta)$ with $\Delta \in \{\pm 1\}$, representing the update $A_{i,j} \leftarrow A_{i,j} + \Delta$. The downstream learning task (e.g., low-rank approximation), however, needs to take as input a matrix $M$ where $M_{i,j} = f(A_{i,j})$ for some transformation function $f$ (e.g., $f(x) = \log(|x| + 1)$).
A concrete example is word embedding in natural language processing (NLP). Word embedding methods aim to embed each word into a vector space and have become a basic building block in many modern NLP systems; many of these systems achieve state-of-the-art performance on various tasks via word embedding [PSM14, MSC + 13, WSC + 16]. A basic routine in word embedding is to explicitly or implicitly perform a low-rank approximation of an entrywise-transformed matrix [LG14, LZM15]. For instance, the transformation may apply a log-likelihood function to each entry. The matrix itself is the so-called co-occurrence count matrix, which can be constructed by scanning a text corpus, e.g., the entire Wikipedia database. This matrix is usually of size millions by millions.
Similar examples include regressions on huge accumulated datasets in economics [DVF13, Var14], where different transformations of covariates are often used to reduce bias. Other examples include visual feature extraction [BPL10], kernel methods [RR08], and M-estimators [Zha97]. These large-scale applications make it impractical or hard to use existing methods, which keep the matrix in memory. Some other approaches exploit the problem structure to get around the huge space requirement; for instance, some propose sequential models of the data and design online algorithms for computing the embeddings (e.g., [MSC + 13, BGJM16]). These methods, however, are task-specific and cannot be applied to other tasks involving more general entrywise matrix transformations.
In this paper, we show that learning based on transformed large matrices is possible even when storing such a matrix is not feasible. Our main contributions are:
• For a general class of transformation functions f, we provide an efficient one-pass matrix-product sketch for computing the product of a given small matrix B with the transformed matrix f(A), with provable error bounds. This algorithm uses space at most the size of the output. The method assumes no statistical model about the updates and can handle a general family of transformations; in particular, these transformations include logarithmic functions and small-degree polynomials. The method can also be used as a building block for downstream tasks: any algorithm that accesses the transformed matrix via matrix products can apply our algorithm to obtain space savings.
• We demonstrate the application of our algorithm in a concrete task: low rank approximation.
To the best of our knowledge, our algorithm is the first that is able to compute a low-rank approximation of large matrices under entrywise transformations. We plug our matrix-product sketch into known algorithms as a black box. We provide theoretical analysis of the tradeoff between the space and the accuracy of these algorithms, showing that they are space efficient and almost match the accuracy of using the full matrix. These theoretical guarantees are complemented by experiments on low-rank approximation with synthetic and real data. The empirical results show that our algorithm can reduce the space usage by orders of magnitude while the error stays almost the same as the optimum. Our algorithm also beats the baseline of uniform sampling of columns of the transformed matrix by a large margin. We additionally provide results on linear regression in the appendix.
Road Map. We provide definitions and basic concepts in Section 3. In Section 4, we introduce our basic routine, the matrix product sketch. We use our sketching algorithms to compute the low-rank approximation of a transformed matrix in Section 5; the application to linear regression is in Appendix E. In Section 6, we use numerical experiments to justify our approach. The appendix provides a list of related works, the complete proofs, details of the experiments, and additional theoretical and empirical results.
Related Work
There exists a large body of work on fast algorithms for large-scale matrices. Some are based on randomized matrix algorithms and use techniques like sampling and sketching; see [Mah11, Woo14] and the references therein. Others are based on optimization algorithms like Alternating Least Squares and Stochastic Gradient Descent and their variants; see [ZWSP08, GNHS11] for some examples. However, most existing approaches do not apply to the setting considered in this paper. The closest work is [WZ16], which considers low-rank approximation of the element-wise transformation of the sum of several matrices located on different machines. This distributed setting is different from ours, and naïvely applying their algorithm would lead to a large space cost. Furthermore, our sketching method can be applied to learning tasks beyond low-rank approximation. Our work builds on techniques from numerical linear algebra and streaming data analysis developed in the recent decade; there are numerous research works along this line, and we list only a few here. Low-rank approximation, or matrix factorization, is an important task in numerical linear algebra. In this problem, we are given an $n \times d$ matrix $A$ and a parameter $k$, and the goal is to find a rank-$k$ matrix $\widehat{A}$ minimizing the residual error $\|A - \widehat{A}\|_F^2$, where the Frobenius norm is defined as $\|A\|_F = (\sum_{i=1}^n \sum_{j=1}^d A_{i,j}^2)^{1/2}$. Note that an optimal $\widehat{A}$ provides a good estimate of the leading eigenspace of the matrix $A$. The classical way of speeding up low-rank approximation via sketching requires showing two properties of the sketching matrix: subspace embedding [Sar06, LWW20, WW19] and approximate matrix product [NN13, KN14]. Low-rank approximation algorithms combining these two properties have been presented in several papers [CW13, MM13, SWZ19b]. The classical sketching idea is easy to turn into a streaming algorithm, since one usually uses a linear sketching matrix, which need not be written down explicitly during the stream. However, none of these methods is applicable to our setting, which is much harder than the classical streaming low-rank approximation problem. This is mainly because the transformation $f$ that acts on the matrix $A$ completely destroys the linear algebraic properties of $A$; see Appendix D for some discussion. The storage of $A$ can also be infeasibly large for the above-mentioned methods.
Streaming algorithms have made great progress since their first systematic study by [AMS99]. Classic streaming problems ask how to estimate a function of a vector that is under streaming updates. For instance, [AMS99] approximates $\|v\|_p$ while observing a sequence of updates to the coordinates of $v$. The usual assumption is that $v \in \mathbb{R}^n$ and $n$ is so large that $v$ cannot be stored in memory easily. Since [AMS99], a line of research (e.g., [Ind00, IW05, BYKS02, BKSV14, KNW10]) has gradually improved the algorithms and obtained nearly optimal upper and lower bounds. Very recently, [BO10b, BO10a, BVWY17] attempted to handle a more general set of functions, and [BVWY17] gives a nearly optimal characterization of this problem. [BBC + 17] studies a more general setting, i.e., functions $f: \mathbb{R}^n \to \mathbb{R}$ that do not have a summation structure, and gives an optimal characterization for streaming all symmetric norms. Given these advances, none of them solves our problem directly, since a streaming estimator only outputs a single value for a vector, which does not address the matrix formulation of our input.
Notation. $[n]$ denotes the set $\{1, 2, \cdots, n\}$. For a vector $x \in \mathbb{R}^n$, $|x| \in \mathbb{R}^n$ denotes the vector whose $i$-th entry is $|x_i|$. For a matrix $A \in \mathbb{R}^{n \times n}$, let $\|A\|$ denote its spectral norm, $\sigma_i(A)$ its $i$-th largest singular value, and $[A]_k$ its best rank-$k$ approximation. Also let $\det(A)$ denote its determinant when $A$ is square. For a function $f$, $M = f(A)$ means the entrywise transformation $M_{i,j} = f(A_{i,j})$. We also write $A_{i*}$ for the $i$-th row of matrix $A$ and $A_{*j}$ for its $j$-th column.
Problem Definition. The problem of interest is defined as follows. Suppose we have an underlying large matrix $A = (A_{i,j}) \in \mathbb{R}^{n \times n}$, initialized as the zero matrix. We observe a sequence of updates of the form $(i_1, j_1, \Delta_1), (i_2, j_2, \Delta_2), \ldots, (i_m, j_m, \Delta_m)$ for some $m = \mathrm{poly}(n)$, with $i_t, j_t \in [n]$ and $\Delta_t \in \{-1, 1\}$. The $t$-th update modifies the underlying matrix by $A_{i_t,j_t} \leftarrow A_{i_t,j_t} + \Delta_t$. We assume that $m$ is bounded by $\mathrm{poly}(n)$. Note that the assumption of integer updates is without loss of generality: if the updates are not integers, we can round them to a specified precision $\epsilon > 0$ and then scale them to integers. The polynomially bounded stream length is also a usual and reasonable assumption. At the end of the stream, one would like to perform some learning task (such as low-rank approximation) on the matrix $M = f(A)$ for some fixed function $f: \mathbb{R} \to \mathbb{R}$, using as little space as possible; in particular, one wants to avoid storing the large matrix $A$. Some examples of the transformation function are
$f(x) = \log(|x| + 1)$, or $f(x) = |x|^\alpha$ for all $\alpha \ge 0$.   (1)
Functions of this form are important in machine learning. For example, $f(x) = \log(|x| + 1)$ corresponds to the log-likelihood function, and $f(x) = |x|^\alpha$ corresponds to a general family of statistical models or feature expansions. In this paper we design a space-efficient method for approximating $Z = f(A)B$ for a given matrix $B$, where $f(A) \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{n \times k}$ for integers $n$ and $k$ with $k \ll n$. We would like algorithms that use space $O(nk)$ instead of $O(n^2)$. This can then be used as a plug-in primitive that turns learning algorithms into space-efficient ones, provided they only access $f(A)$ through matrix products with small matrices $B$. More formally:
Problem 3.1 (approximate transformed matrix and matrix product). Given a fixed matrix $B$ and a function $f: \mathbb{R} \to \mathbb{R}$, design an algorithm that makes a single pass over an update stream of a matrix $A$ and outputs an approximation of $f(A)B$ with high probability. We require the algorithm to use as little space as possible (without counting the space of $B$).
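To make the space bottleneck concrete, the following minimal Python sketch (our illustration, not part of the paper's algorithm) shows the naïve approach the streaming method is designed to avoid: it materializes the full n × n matrix A, applies f entrywise, and multiplies, which costs Θ(n²) memory.

```python
import numpy as np

def naive_transformed_product(updates, B, f, n):
    """Naive baseline: store A densely, then compute f(A) @ B.

    updates : iterable of (i, j, delta) with delta in {-1, +1}
    B       : (n, k) matrix, k << n
    f       : entrywise transformation, e.g. lambda x: np.log(np.abs(x) + 1)

    Uses Theta(n^2) memory -- exactly what the sketch avoids.
    """
    A = np.zeros((n, n))           # the whole matrix lives in memory
    for i, j, delta in updates:    # one pass over the stream
        A[i, j] += delta
    return f(A) @ B                # Z = f(A) B, an (n, k) matrix

# Tiny usage example with hypothetical dimensions.
rng = np.random.default_rng(0)
n, k, m = 100, 5, 10_000
stream = [(rng.integers(n), rng.integers(n), rng.choice([-1, 1])) for _ in range(m)]
B = rng.choice([-1.0, 0.0, 1.0], size=(n, k))
Z = naive_transformed_product(stream, B, lambda x: np.log(np.abs(x) + 1), n)
print(Z.shape)  # (100, 5)
```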
We call our method the sketch for f-matrix product. We then demonstrate its effectiveness in the applications of linear regression and low-rank approximation on $M = f(A)$. Linear regression is to minimize $\|Mx - b\|_2^2$, and low-rank approximation is defined as follows.
Problem 3.2 (low-rank approximation). Given integers $k \le n$, an $n \times n$ matrix $M$, and two parameters $\epsilon, \delta > 0$, the goal is to output an orthonormal $n \times k$ matrix $L$ such that $\|M - LL^\top M\|_F$ is small with probability at least $1 - \delta$ (the precise guarantee we achieve is stated in Theorem 5.1).

Sketch for f-Matrix Product

Our goal in this section is to compute the matrix product $f(A)B$, where $B$ is given and $A$ is under updates or can only be read entry by entry. We observe that each entry of $Z = f(A)B$ can be written as a vector product:
$Z_{i,j} = \langle f(A_{i*}), B_{*j} \rangle = \sum_{l=1}^n f(A_{i,l}) B_{l,j}$.
Thus, we first design a primitive to compute each $Z_{i,j}$ using small space. Running this primitive in parallel for each entry $Z_{i,j}$ yields our full algorithm for computing the matrix product. In the following sections, we first introduce the vector sketch problem and present our vector product primitives for different functions $f$; we then combine them into a unified algorithm for the matrix product.
Sketch for f -Vector Product
Recall that for given vectors $x, y \in \mathbb{R}^n$, the inner product is defined as $\langle x, y \rangle = \sum_{i=1}^n x_i y_i$. In our setting, we are also given a function $f: \mathbb{R} \to \mathbb{R}$ and a vector $x \in \mathbb{R}^n$ whose storage is free, but $y$ is not given directly. The f-vector product is defined as $\langle x, f(y) \rangle$, where $f$ is applied to $y$ coordinate-wise. The updates to $y$ form a stream: we observe a sequence of integer pairs $(z_t, \Delta_t)$ for $t = 1, 2, \ldots, m$, where each $z_t \in [n]$ and $\Delta_t \in \{-1, 1\}$. Thus, we initialize $y^{(0)} \leftarrow 0$, the zero vector, and at time $t$ the update to $y$ is $y^{(t)} \leftarrow y^{(t-1)} + \Delta_t \cdot e_{z_t}$, where $e_{z_t}$ is the standard unit vector whose only non-zero coordinate is the $z_t$-th. Our goal is to approximate $\langle x, f(y) \rangle$ without storing $y$, where $x$ is given to the algorithm without storage cost. Formally, we define the following problem.
Problem 4.1 (approximate transformed vector and vector inner product). Given a fixed vector $x$ and a function $f: \mathbb{R} \to \mathbb{R}$, design an algorithm that makes a single pass over an update stream of a vector $y$ and outputs an approximation of $\langle f(y), x \rangle$ with high probability. We require the algorithm to use as little space as possible (excluding the space of $x$).
We note that a naïve algorithm would store the vector $y$ as a whole. Such an algorithm is not feasible when $n$ is large or when too many such inner products must be computed simultaneously (e.g., in our matrix application of computing $Z = f(A)B \in \mathbb{R}^{n \times k}$, each entry of $Z$ is an inner product; if each inner product required space $n$, the final space could be $O(n^2 k)$, which is prohibitively high). In Section 4.2 below, we design an algorithm that accomplishes this task for the function $f(y) = \log(|y| + 1)$, which uses only O(1) bits of memory. In Section B.3, we present a general framework that works for a general family of functions $f$ with nearly optimal space complexity.
Sketch for the log(|·| + 1)-Vector Product
Recall that when $f(\cdot) = \log(|\cdot| + 1)$, we are designing an algorithm for computing the inner product $\langle \log(|y| + 1), x \rangle$, where $x, y \in \mathbb{R}^n$ are two vectors, $x$ is given to the algorithm for free, and $y$ is under updates. Our full algorithm is Algorithm 1, which is composed of three sub-procedures: procedure Initialize is called on initialization with the given vector $x$, procedure Update is called as we go over the update stream of the vector $y$, and procedure Query is called at the end to report the answer.

Algorithm 1 (data structure LogSum, abridged):

    data structure LogSum                                    ▷ Theorem 4.2
    procedure Initialize(x)
        for each level j:
            sample a (log n)-wise independent hash function h_j : [n] → {0, 1}
                with Pr[h_j(i) = 1] = min(p_j, 1) for all i ∈ [n]
            sample a K-set structure KSet_j with error parameter Θ(δ/t)
                and memory budget ε⁻² poly(log(n/δ))
    end procedure
    ...
    procedure Query()
        pick the largest j such that KSet_j does not return "Fail"
        let v be the output of KSet_j, and denote S_j = supp(v)
        return 2^j Σ_{i∈S_j} x_i log(|v_i| + 1)
    end procedure
    end data structure

The detailed analysis of Algorithm 1 can be found in Appendix B; here we sketch the high-level ideas behind it. For ease of presentation, we assume $x$ has no zero coordinates, since otherwise we can simply ignore those coordinates and restrict the universe $[n]$ to $\mathrm{supp}(x)$ accordingly. Our algorithm originates from [BO10b] but is much simplified in this paper. At a high level, our algorithm can be viewed as an $\ell_0$-sampler: it samples uniformly at random from the support of the updating vector $y$. Note that the support of $y$ changes over time, so it is non-trivial to maintain a uniform sample while using only small space. We also note that it is necessary to sample coordinates from the support of $y$, since otherwise one can always construct worst-case examples for algorithms that sample coordinates uniformly from $[n]$.
We thus design our algorithm by independently maintaining $\Theta(\log n)$ sub-vectors of the vector $y$. Each sub-vector is generated by sampling a set of coordinates uniformly from $[n]$ with geometrically decreasing probabilities. For instance, in our algorithm, we first generate $\Theta(\log n)$ hash functions, each defining a set $S_j \subset [n]$. For each $i \in [n]$, we demand that $i \in S_j$ with probability $2^{-j}$. Thus, if the size of the support of $y$ is of order $\Theta(2^j)$, then we expect to obtain $\Theta(1)$ samples of $y$ using the set $S_j$. We now describe how to maintain these sampled coordinates in memory. For convenience, we assume $\gamma = 1$ in line 3 of Algorithm 1.
For the case of an insertion-only stream (once a coordinate of $y$ becomes larger than 0, it stays so), maintaining the sub-vector $y_{S_j}$ is a trivial task, since the number of non-zero coordinates of $y_{S_j}$ is expected to be $O(1)$. However, for $j' \le j$, the sub-vectors $y_{S_{j'}}$ contain too many coordinates. We handle this quite straightforwardly: if any of them exceeds our memory budget, we simply ignore it. For the case of a general stream, coordinates can return to 0 even if they were non-zero at some earlier point in time. Here we use the K-set data structure presented in [Gan07]. This data structure supports insertions and deletions of data points and can maintain the samples as long as the final number of samples is under the memory budget. The formal guarantee of the K-set data structure is presented in Theorem B.1.
Suppose now that we have collected sufficiently many samples from the support of the vector $y$, say using the set $S_j$. We then have the empirical estimator $2^j \sum_{i \in S_j} x_i \log(|y_i| + 1)$ for the inner product. Notice that this estimator is unbiased. Moreover, its variance can be bounded in terms of the stream length $m$ (see Appendix B), which is usually assumed to be of order $\mathrm{poly}(n)$; thus we only need $\mathrm{poly}\log n$ samples to obtain an accurate estimate. We summarize the main guarantee in the following theorem; the formal proof can be found in Section B.
Theorem 4.2 (approximate inner product of transformed vector and vector). Suppose the vector $x \in \mathbb{R}^n$ is given without memory cost. There exists a streaming algorithm (data structure LogSum in Algorithm 1) that makes a single pass over the stream of updates to a vector $y \in \mathbb{R}^n$ and outputs $Z \in \mathbb{R}$ such that, with probability at least $1 - \delta$, $Z = (1 \pm \epsilon) \sum_{i=1}^n x_i \log(|y_i| + 1)$.
Remark 4.3. We also note that our algorithm naturally works for $f(y) := \log^c(|y| + 1)$ for any constant $c$. To modify the algorithm, we only need to keep slightly larger space and change the final estimate to $2^j \sum_{i \in S_j} x_i \log^c(|v_i| + 1)$. It enjoys the same relative error guarantee as in Theorem 4.2.
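The following short Python experiment (our illustration, with simplified level sampling and without the K-set machinery) checks the two properties the analysis relies on: the level-j estimator is unbiased, and it concentrates around the true inner product.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
# A sparse y with support size 512 = 2^9 and a dense sign vector x.
y = np.zeros(n)
support = rng.choice(n, size=512, replace=False)
y[support] = rng.integers(1, 100, size=512)
x = rng.choice([-1.0, 1.0], size=n)

truth = np.sum(x * np.log(np.abs(y) + 1))

def level_estimate(j, rng):
    """Sample each coordinate with prob 2^{-j}; return 2^j * sum over the sample."""
    mask = rng.random(n) < 2.0 ** (-j)          # plays the role of S_j
    return 2.0 ** j * np.sum(x[mask] * np.log(np.abs(y[mask]) + 1))

j = 6  # expected number of sampled support coordinates: 512 / 2^6 = 8
estimates = [level_estimate(j, rng) for _ in range(2000)]
print(f"truth = {truth:.1f}, mean estimate = {np.mean(estimates):.1f}, "
      f"std = {np.std(estimates):.1f}")          # mean ~= truth (unbiased)
```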
From Vector Product Sketch to Matrix Product Sketch
With the f-inner product sketch tools established, we are now ready to present the result for sketching the matrix product $Z = f(A)B$. Notice that each entry $Z_{i,j} := \langle f(A_{i*}), B_{*j} \rangle$ is an inner product.
Thus, our algorithm for the matrix sketch simply maintains an f-inner product sketch for each $Z_{i,j}$. In our algorithm, we assume that the matrix $B$ is given to the algorithm for free. Thus, if $B \in \mathbb{R}^{n \times k}$ for some $k \ll n$, we only need to keep up to $O(nk)$ vector product sketches, which cost in total $O(nk)$ words of space. For ease of presentation, we state our guarantee for the matrix product for $f(z) := \log^c(|z| + 1)$ for some constant $c$ or for $f(z) = |z|^p$ with $0 \le p \le 2$, and for matrices $B \in \{-1, 0, 1\}^{n \times k}$. Our results can be generalized to a more general set of functions and matrices $B$ using the results presented in Section B.3. The proof of the following theorem is a straightforward application of Theorems 4.2 and B.2.
Algorithm 2 (procedure LowRankApprox, abridged):

    ...
    let S+ and S− be the positive and negative parts of S
    ...
    Step 3: computing approximate solutions
    compute the top k singular vectors W of Π
    L ← Q_y W
    return L
    end procedure

Theorem 4.4 (approximate each coordinate of the transformed matrix). Given a matrix $B \in \{-1, 0, 1\}^{n \times k}$ and a function $f(x) := \log^c(|x| + 1)$ for some constant $c$ or $f(x) := |x|^p$ for some $0 \le p \le 2$, there exists a one-pass streaming algorithm that makes a single pass over the stream of updates to an underlying matrix $A \in \mathbb{R}^{n \times n}$ and outputs a matrix $Z$ such that, with probability at least $1 - \delta$, for all $i, j$, $Z_{i,j} = (1 \pm \epsilon)(f(A)B)_{i,j}$. The algorithm uses space $\epsilon^{-2} nk \,\mathrm{poly}(\log(n/\delta))$ and has $nk \,\mathrm{poly}(\log n, 1/\epsilon)$ query time.
Remark 4.5. We note that the sketch in the last theorem can easily be used to approximate the squared 2-norm of each row of the matrix $f(A)$. In this case, we simply choose $B \in \mathbb{R}^{n \times 1}$ to be the all-ones vector and change $f(\cdot)$ to $f^2(\cdot)$. For $f(x) = \mathrm{poly}\log(|x| + 1)$ or $f(x) = |x|^p$ with $0 \le p \le 1$, it is easy to verify that our output is a $(1 \pm \epsilon)$ approximation to $f^2(A) \cdot \mathbf{1}$, hence to the squared 2-norm of each row of $f(A)$.
Application to Low Rank Approximation
This section considers the concrete application of rank-$k$ approximation for $M$, where $M_{i,j} = \log(|A_{i,j}| + 1)$; i.e., finding $k$ orthonormal vectors $L$ such that $\|M - LL^\top M\|_F$ is minimized. Our algorithm for rank-$k$ approximation is presented in Algorithm 2. Low-rank approximation for other functions $f$ follows the same algorithm and a similar analysis.
There exists a large body of work on low-rank approximation (see, e.g., [HMT11, DMIMW12, Woo14, CW13, MM13, NN13, CW15, RSW16, SWZ17, CGK + 17, SWZ18, BW18, KPRW19, SWZ19a, SWZ19b, SWZ19c, Son19, BBB + 19, DJS + 19, BCW19, IVWW19, BWZ19] and references therein), but most of it is designed for the case without transformation and thus cannot be applied directly. As mentioned in previous sections, if an algorithm only accesses the transformed matrix via matrix products, plugging in our sketching method yields a suitable algorithm. We design an algorithm that applies the generalized leverage score sampling approach [DMIMW12, BLS + 16] to low-rank approximation. Leverage score sampling is a non-oblivious sketching technique that is widely used in numerical linear algebra and has been successfully applied to speed up many problems. Readers may refer to Appendix C.1 for a more detailed discussion of leverage score sampling.
At a high level, we would like to sample the matrix $M \in \mathbb{R}^{n \times n}$ according to its leverage scores. It turns out to be sufficient to use the leverage scores of $SM$, where $S$ is a sketching matrix. We apply Algorithm 1 to do so and obtain the sampled set $P$ (Step 1). We then apply the technique of adaptive sampling to refine the sample and obtain $Y$ (Step 2), so that we have better control over the rank, and finally compute the solution from $Y$ by taking a projection and computing singular vectors (Step 3). A detailed description and analysis of Algorithm 2 can be found in Appendix C. Overall, we have the following guarantee.
Theorem 5.1 (low-rank approximation). For any parameter $\epsilon \in (0, 1)$ and integer $k \ge 1$, there is an algorithm (procedure LowRankApprox in Algorithm 2) that runs in $O(n) \cdot k^3 \cdot \mathrm{poly}(1/\epsilon)$ time, uses $O(n) \cdot k^3/\epsilon^2$ space, and outputs a matrix $L \in \mathbb{R}^{n \times k}$ such that $\|M - LL^\top M\|_F^2 \le \|M - [M]_k\|_F^2 + \epsilon \|M\|_{1,2}^2$ holds with probability at least 9/10, where $\|M\|_{1,2} = (\sum_j \|M_{*,j}\|_1^2)^{1/2}$. For large $n$ and fixed $\epsilon$, our algorithm uses much less space than storing the full matrix. Note that our algorithm still needs to make several passes over the stream of updates; whether there exists a one-pass algorithm remains an open problem and is left for future work.
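To illustrate how a matrix-product primitive alone already enables low-rank approximation, here is a minimal Python sketch using a standard randomized range finder (Halko-style, not the paper's leverage-score algorithm); the exact products `M @ B` below stand in for the streaming sketch of Theorem 4.4.

```python
import numpy as np

def rank_k_basis(product, n, k, p=10, rng=None):
    """Orthonormal n x (k+p) basis Q for the approximate top range of M,
    where M is accessed only through product(B) ~= M @ B."""
    rng = rng or np.random.default_rng()
    Omega = rng.standard_normal((n, k + p))   # random test matrix
    Q, _ = np.linalg.qr(product(Omega))       # orthonormalize M @ Omega
    return Q

rng = np.random.default_rng(2)
n, k = 300, 10
A = rng.poisson(2.0, size=(n, n)) * rng.choice([-1, 1], size=(n, n))
M = np.log(np.abs(A) + 1.0)                   # the transformed matrix f(A)

Q = rank_k_basis(lambda B: M @ B, n, k, rng=rng)  # exact product as stand-in
err = np.linalg.norm(M - Q @ (Q.T @ M), "fro")

U, s, Vt = np.linalg.svd(M)
opt = np.linalg.norm(M - (U[:, :k] * s[:k]) @ Vt[:k], "fro")
# The ratio can dip slightly below 1 since Q has k+p columns.
print(f"error ratio (rank k+p basis vs best rank k): {err / opt:.3f}")
```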
Experiments
To demonstrate the advantage of our proposed method, we complement the theoretical analysis with an empirical study on synthetic and real data. We consider the low-rank approximation task with $f(x) = \log(|x| + 1)$. We adjust the constant factors in the amount of space used by our method and compare the errors of the obtained solutions. In the appendix, we describe more experimental details. We also provide additional experiments in the appendix showing that the method works for $f(x) = |x|$ as well, and we further demonstrate the robustness of the parameter selections in the algorithm.

Figure 1. The x-axis is the ratio between the amount of space used by the algorithms and the total amount of space occupied by the data matrix. The y-axis is the ratio between the error of the solutions output by the algorithms and the optimal error.

Setup. Given a data stream in the form of $(i_t, j_t, \delta_t)$, we use the algorithm in Section 5 to compute the top $k = 10$ singular vectors $L$, and then compare the error of this solution to the error of the optimal solution (i.e., the true top $k$ singular vectors). Let $A$ denote the accumulated matrix, $M = f(A)$ the transformed one, and $U$ the top $k$ singular vectors of $M$. The evaluation criterion is the error ratio $\|M - LL^\top M\|_F / \|M - UU^\top M\|_F$. Clearly, the error ratio is at least 1, and a value closer to 1 means a better solution.
Besides demonstrating effectiveness, we also examine the tradeoff between the solution quality and the space used. Recall that constant parameters in the sketching methods control the amount of space used. We vary their values and set the parameters in the other steps of our algorithm so that the total space used is dominated by that of the sketch. We then plot how the error ratios change with the amount of space used. The plotted results are averages over 5 runs; the variances are too small to plot. Finally, we also report the results of a baseline method: uniformly at random sample a subset $T$ of columns of $A$, and then compute the top $k$ singular vectors of $f(T)$. The space occupied by the sampled columns is similar to the space required by our algorithm, for a fair comparison. We choose uniform sampling as the baseline because, to the best of the authors' knowledge, our algorithm is the first to address low-rank approximation of a transformed matrix in the streaming setting, and we are not aware of any other non-trivial algorithm that works in this setting.
Synthetic Data
Data Generation. The LogData datasets are generated as follows. First, generate an $n \times n$ matrix $M$ whose entries are i.i.d. Gaussians. To break the symmetry among the columns, we scale the norm of the $i$-th column to $4/i$. Finally, we generate the matrix $A$ with $A_{ij} = \exp(M_{ij}) - 1$. Each entry $A_{ij}$ is divided equally into 5 updates $(i, j, A_{ij}/5)$, and all updates arrive in an arbitrary order. The size $n$ can be 10000, 30000, or 50000.
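A minimal numpy sketch of this generation procedure (our reading of the description above; the column-norm scaling constant 4/i is taken directly from the text):

```python
import numpy as np

def make_logdata(n, num_chunks=5, seed=0):
    """Synthetic LogData: A = exp(M) - 1 with column norms of M scaled to 4/i."""
    rng = np.random.default_rng(seed)
    M = rng.standard_normal((n, n))
    norms = np.linalg.norm(M, axis=0)                  # current column norms
    M *= (4.0 / np.arange(1, n + 1)) / norms           # i-th column norm -> 4/i
    A = np.exp(M) - 1.0
    # Each entry arrives as `num_chunks` equal updates (i, j, A_ij / num_chunks).
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    updates = [(ii, jj, vv / num_chunks)
               for ii, jj, vv in zip(i.ravel(), j.ravel(), A.ravel())
               for _ in range(num_chunks)]
    rng.shuffle(updates)                               # arbitrary arrival order
    return A, updates

A, stream = make_logdata(n=200)   # small n for a quick check
```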
Parameter Setting. In our algorithm for low-rank approximation, an FJLT matrix $S$ is used [Ach03, AC06]. For the sketching subroutine, instead of specifying the desired $\epsilon$, we directly set the size of the data structure (line 19 in LogSum), so as to examine the tradeoff between space and accuracy. We set $m_c = m_s = m_a$ and choose their value so that the space used is at most that used by the sketch method.
Results
Figure 1, top row, shows the results on the synthetic data. In general, the error ratio of our method is much better than that of the uniform sampling baseline: ours is close to 1, while that of uniform sampling is about 4. The results also show that our method can greatly reduce the amount of space needed, e.g., by orders of magnitude, while still preserving a good solution. This advantage is more significant on larger datasets. For example, when n = 50000, to obtain 5% error over the optimum solution, we only need space corresponding to 5% of the size of the matrix.
Real Data
We evaluate our method on real-world data from NLP applications, which are the motivating examples for our approach. Our method with $f(x) = \log(|x| + 1)$ is used. The parameters are set in a similar way as for the synthetic data.
Data Collection. The dataset is the entire Wikipedia corpus [Wik12], consisting of about 3 billion tokens. Details can be found in the appendix; only a brief description is provided here. The matrix to be factorized is $M$ with $M_{ij} = p_j \log\left(\frac{N_{ij} N}{N_i N_j}\right)$, where $N_{ij}$ is the number of times words $i$ and $j$ co-occur in a window of size 10, $N_i$ is the number of times word $i$ appears, $N$ is the total number of words in the corpus, and $p_j$ is a weighting factor depending on $N_j$ (putting larger weights on more frequent words). Note that the $N_i$'s and $N$ can be computed easily, so essentially the only dynamically updated part is $\log N_{ij}$. The data stream is generated by sliding a window of size 10 along the sentences in the corpus and collecting the co-occurrence counts of the word pairs in each window. We consider the matrix for the most frequent $n$ words, where $n$ = 10000, 30000, or 50000.
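A compact Python sketch of this construction on a toy corpus (our illustration; the weighting p_j is left as a placeholder since its exact form is specified in the appendix of the original):

```python
import numpy as np
from collections import Counter

def cooccurrence_stream(tokens, vocab, window=10):
    """Yield (i, j, +1) updates for word pairs co-occurring within `window`."""
    ids = [vocab[w] for w in tokens if w in vocab]
    for t, i in enumerate(ids):
        for j in ids[t + 1 : t + window]:
            yield i, j, 1
            yield j, i, 1

corpus = "the quick brown fox jumps over the lazy dog the fox".split()
counts = Counter(corpus)
vocab = {w: i for i, w in enumerate(counts)}        # all words in the toy corpus
n, N = len(vocab), len(corpus)
Ni = np.array([counts[w] for w in vocab], dtype=float)

Nij = np.zeros((n, n))
for i, j, delta in cooccurrence_stream(corpus, vocab, window=5):
    Nij[i, j] += delta                               # the streamed quantity

p = np.ones(n)                                       # placeholder weighting p_j
with np.errstate(divide="ignore"):                   # log 0 -> -inf for unseen pairs
    M = p * np.log(Nij * N / np.outer(Ni, Ni))
M[~np.isfinite(M)] = 0.0                             # zero out unseen pairs
```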
Results
Figure 1, bottom row, shows the results on the real data. The observations are similar to those on the synthetic data: the errors of our method are much smaller than the baseline's and are close to the optimum. These results again demonstrate the accuracy and space efficiency of our method.
Conclusions
We considered the setting where a large matrix is updated by a data stream and the learning task is performed on an element-wise transformation of the matrix. We proposed a method for computing the product of the element-wise transformation with another given matrix. For a large family of transformations, our method needs only a single pass over the data and has provable guarantees on the error. It uses much less space than directly storing the matrix and can be used as a building block for many learning tasks. We provided a concrete application to low-rank approximation, with theoretical analysis and empirical verification showing the effectiveness of this approach.
[SWZ17] Zhao Song, David P. Woodruff, and Peilin Zhong. Low rank approximation with entrywise ℓ1-norm error. In Proceedings of the 49th Annual Symposium on the Theory of Computing (STOC). ACM, https://arxiv.org/pdf/1611.00898, 2017.
A.1 CountSketch and Gaussian Transforms
Definition A.1 (Sparse embedding matrix or CountSketch transform). A CountSketch transform is defined to be Π = ΦD ∈ R^{m×n}. Here, D is an n × n random diagonal matrix with each diagonal entry independently chosen to be +1 or −1 with equal probability, and Φ ∈ {0, 1}^{m×n} is an m × n binary matrix with Φ_{h(i),i} = 1 and all remaining entries 0, where h : [n] → [m] is a random map such that for each i ∈ [n], h(i) = j with probability 1/m for each j ∈ [m]. For any matrix A ∈ R^{n×d}, ΠA can be computed in O(nnz(A)) time.
Definition A.2 (Gaussian matrix or Gaussian transform). Let S = (1/√m) · G ∈ R^{m×n}, where each entry of G ∈ R^{m×n} is chosen independently from the standard Gaussian distribution. For any matrix A ∈ R^{n×d}, SA can be computed in O(m · nnz(A)) time.
We can combine CountSketch and Gaussian transforms to achieve the following: Definition A.3 (CountSketch + Gaussian transform). Let S′ = SΠ, where Π ∈ R^{t×n} is the CountSketch transform (defined in Definition A.1) and S ∈ R^{m×t} is the Gaussian transform (defined in Definition A.2). For any matrix A ∈ R^{n×d}, S′A can be computed in O(nnz(A) + dt·m^{ω−2}) time, where ω is the matrix multiplication exponent.
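The following Python sketch illustrates Definitions A.1–A.3 on dense input: it applies Π in one pass over the rows of A (mirroring the O(nnz(A)) bound) and then a rescaled Gaussian map. The function names are ours.

import numpy as np

def countsketch_apply(A, m, rng):
    # Pi = Phi * D applied to A without materializing Pi (Definition A.1).
    n = A.shape[0]
    h = rng.integers(0, m, size=n)           # random map h : [n] -> [m]
    d = rng.choice([-1.0, 1.0], size=n)      # diagonal of D
    PA = np.zeros((m, A.shape[1]))
    for i in range(n):                       # one pass over the rows of A
        PA[h[i]] += d[i] * A[i]
    return PA

def gaussian_apply(A, m, rng):
    # S = (1/sqrt(m)) * G applied to A (Definition A.2).
    return rng.standard_normal((m, A.shape[0])) @ A / np.sqrt(m)

rng = np.random.default_rng(1)
A = rng.standard_normal((1000, 20))
SpA = gaussian_apply(countsketch_apply(A, 200, rng), 50, rng)  # S'A = S(Pi A)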
A.2 Pythagorean Theorem, matrix form
Here we state a Pythagorean Theorem for matrices. Theorem A.4 (Pythagorean Theorem, matrix form). Let A, B ∈ R^{n×d} satisfy tr(A^⊤B) = 0. Then ||A + B||_F^2 = ||A||_F^2 + ||B||_F^2.
A.3 Adaptive Sampling
We describe a t-round adaptive sampling algorithm, originally proposed in [DRVW06]. We will use π_V(A) to denote the matrix obtained by projecting each row of A onto a linear subspace V. If V is spanned by a subset S of rows, we denote the projection of A onto V by π_span(S)(A). We use π_span(S),k(A) for the best rank-k approximation to A whose rows lie in span(S).
• Start with a linear subspace V. Let E_0 = A − π_V(A) and S = ∅.
• For j = 1 to t, do:
  – Pick a sample S_j of s_j rows of A independently from the following distribution: row i is picked with probability proportional to the squared norm of the i-th row of E_{j−1}.
  – Set S = S ∪ S_j and E_j = A − π_span(V∪S)(A).
Theorem A.5 ([DRVW06], see also Theorem 3 in [DV06]). After one round of the adaptive sampling procedure described above with sample size s, E[||A − π_span(V∪S),k(A)||_F^2] ≤ ||A − [A]_k||_F^2 + (k/s)||E_0||_F^2.
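A minimal in-memory numpy sketch of this procedure is shown below; it assumes V is given as an n × q matrix with orthonormal columns spanning the starting subspace, and it re-orthonormalizes after every round. The function name is ours.

import numpy as np

def adaptive_sample(A, V, sample_sizes, rng):
    # [DRVW06]-style adaptive sampling: in round j, row i of A is picked
    # with probability ||row i of E_{j-1}||^2 / ||E_{j-1}||_F^2.
    picked = []
    basis = V                                  # (n, q), orthonormal columns
    for s_j in sample_sizes:
        E = A - (A @ basis) @ basis.T          # E_{j-1} = A - pi_V(A)
        p = np.sum(E ** 2, axis=1)
        p /= p.sum()
        picked += list(rng.choice(A.shape[0], size=s_j, replace=True, p=p))
        # Extend the subspace with the span of all rows sampled so far.
        stacked = np.vstack([basis.T, A[picked]])
        basis, _ = np.linalg.qr(stacked.T)     # re-orthonormalize columns
    return picked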
B.1 Proofs of Sketch log(| · | + 1)-Vector Product
Theorem B.1 ([Gan07], K-Set). There exists a data structure that supports updates of the form (i, ∆) to a vector v ∈ R^n, where i ∈ [n] and ∆ ∈ {−1, 1}, and supports a query operation at any time. The algorithm either returns the current vector v ∈ R^n or "Fail". If |supp(v)| ≤ k, then the data structure returns "Fail" with probability at most δ ∈ (0, 1). The algorithm uses O(k log n log(k/δ)) bits of space.
Proof of Theorem 4.2. First, in the algorithm, at level j we sample the universe with probability p_j = min(ε^{−2} poly(log(n/δ))/2^j, 1). Suppose the true support of x satisfies |supp(x)| = Θ(2^j). We argue that with high probability there exists a j* ≥ j such that KSet_{j*} succeeds. To show this, it suffices to show that KSet_j succeeds with high probability. By a Chernoff bound, with probability at least 1 − Θ(δ), the number of coordinates sampled at level j is Θ(ε^{−2} poly(log(n/δ))). By Theorem B.1, the KSet_j instance succeeds in returning the sampled sub-vector with probability at least 1 − O(δ). Since each coordinate is sampled in KSet_{j*} with probability at least p_j, we can bound the variance of the unbiased estimator, where the first step uses the fact that each sampled coordinate contributes 2^j x_i log(|y_i| + 1) with probability Pr[i ∈ S_j], the second step expands the square, the fourth step uses Pr[i ∈ S_j] = 2^{−j}, the fifth step uses the fact that Σ_i a_i b_i ≤ (max_i a_i) · Σ_i b_i for b_i ≥ 0, and the last step uses the fact that max_{i∈[n]} 2^j log(|y_i| + 1) ≤ 2^j · log m · min_{i∈[n]} log(|y_i| + 1) ≤ log m · Σ_{i=1}^n log(|y_i| + 1). Applying Bernstein's inequality, we conclude the proof.
B.2 Sketch | · |-Vector Product
Our algorithm for sketching the |·|-vector product is based on the algorithm established in [BVWY17], and is formally presented in Algorithm 3. We first present an algorithm that approximates the inner product only for non-negative x; in the theorem, we show that the inner product for general x can be approximated as well. The high-level idea is similar to the p-stable distribution algorithm established in [Ind00], but our algorithm is much simpler in terms of the hash functions chosen and the distribution design. In this algorithm, we use the p-inverse distribution ([BVWY17]) over positive integers, defined by Pr[X ≤ z] = 1 − 1/z^p, where X is the p-inverse random variable. We then scale each coordinate of the entrywise product |x|^{1/p} ∘ y by an independent draw from the p-inverse distribution. After this, we run a CountSketch to find the largest few coordinates of the scaled vector as it is updated. It can be shown that the median of these output coordinates serves as a good estimate of the p-norm of the vector |x|^{1/p} ∘ y. A similar idea can be found in [And17]. For the √·-case, we simply choose p = 1/2, so that the p-th power of the p-norm of the scaled vector is a good estimate of Σ_i |x_i| √|y_i|.
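As a toy, non-streaming illustration of the scaling idea (constant-factor calibration of the median estimator is omitted), consider the following sketch; all names are ours.

import numpy as np

def p_inverse_sample(size, p, rng):
    # Inverse-CDF sampling: Pr[X <= z] = 1 - 1/z^p  =>  X = U^(-1/p).
    return rng.random(size) ** (-1.0 / p)

def estimate_abs_product(x, y, p, trials, rng):
    # Estimates sum_i |x_i| * |y_i|^p, up to a fixed constant factor, via the
    # median over trials of (max_i |x_i|^{1/p} |y_i| z_i)^p, z ~ p-inverse.
    scaled = np.abs(x) ** (1.0 / p) * np.abs(y)
    meds = [np.max(scaled * p_inverse_sample(len(x), p, rng)) ** p
            for _ in range(trials)]
    return np.median(meds)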
Theorem B.2. Given a fixed vector x ∈ R^n and a number p ∈ (0, 2], there exists a one-pass streaming algorithm that makes a single pass over the stream updates to an underlying vector y ∈ R^n and outputs a number Z such that, with probability at least 1 − δ, Z ∈ (1 ± ε) Σ_{i=1}^n |x_i| |y_i|^p. The algorithm uses space O(ε^{−2} poly(log(n/δ))) (excluding the space of x).
Proof. The proof of this theorem is a straightforward application of the results in [BVWY17], splitting x into its positive and negative parts.
B.3 More General Functions f
Furthermore, our framework can be applied to a more general set of functions, which includes nearly all "nice" functions. For ease of presentation, we omit the formal definition of this set; roughly, a function in this set satisfies three properties: slow-jumping, slow-dropping, and predictable. Interested readers are referred to [BCWY16]. Three examples of functions that we are able to approximate are x^2 · 2^{√log x}, (2 + sin x)x^2, and 1/log(1 + x). Using our proposed general framework and [BCWY16], we have the following result.
Theorem B.3. Given a vector x ∈ {−1, 0, 1}^n and a function f that satisfies the above regularity conditions, there exists a one-pass streaming algorithm that makes a single pass over the stream updates to an underlying vector y ∈ R^n and approximates Σ_{i=1}^n x_i f(|y_i|) in small space.
Algorithm fragment: let the p-inverse distribution be defined by Pr[z < x] = 1 − 1/x^p, and let D denote the pairwise independent p-inverse distribution.
Proof. The proof is a straightforward application of [BCWY16] by considering the positive part and negative part of x separately.
Remark B.4. We remark that the algorithm in [BCWY16] is quite complicated but has the potential to be simplified. We also note that x is not necessarily restricted to {−1, 0, 1}; however, the complexity then depends on the ratio between the maximum and minimum absolute values of the non-zero entries of x.
B.4 From Vector Product Sketch to Matrix Product Sketch
With the f-vector product sketch tools established, we are now ready to present the result for sketching the matrix product M = f(A)B. Notice that each entry M_{i,j} := ⟨f(A_i), B_j⟩ is an inner product, where A_i is the i-th row of A and B_j is the j-th column of B. Thus our algorithm for the matrix sketch simply maintains an f-vector product sketch for each M_{i,j}. In our algorithm, we assume that the matrix B is given, i.e., hardwired in the algorithm. Thus, if B ∈ R^{n×k} for some k ≪ n, we only need to keep up to O(nk) inner product sketches, which cost O(nk) words of space in total. For ease of presentation, we state our guarantee for the matrix product for f(x) := log^c(|x|) for some c or f(x) := |x|^p for 0 ≤ p ≤ 2, and for a matrix B ∈ {−1, 0, 1}^{n×k}. Our results can be generalized to a more general set of functions and matrices B using the results presented in Section B.3.
Theorem B.5. Given a matrix B ∈ {−1, 0, 1}^{n×k} and a function f(x) := log^c(|x|) for some c or f(x) := |x|^p for some 0 ≤ p ≤ 2, there exists a one-pass streaming algorithm that makes a single pass over the stream updates to an underlying matrix A ∈ R^{n×n}, with updates of absolute value at least 1/2, and outputs a matrix M̃ such that, with probability at least 1 − δ, every entry M̃_{i,j} satisfies the corresponding entrywise approximation guarantee of Theorems 4.2 and B.2. The algorithm uses space O(ε^{−2} nk poly(log(n/δ))).
Proof. The proof of this theorem is a straightforward application of Theorem 4.2 and Theorem B.2.
C Application in Low Rank Approximations
C.1 Leverage scores and their application to sampling
Classic approaches to low-rank approximation first compute the leverage scores of the matrix M, and then sample rows of M based on these scores.
Definition C.1 (Leverage scores, [Woo14, BSS12]). Let U ∈ R^{n×k} have orthonormal columns with n ≥ k. We will use the notation p_i = u_i^2/k, where u_i^2 = ||e_i^⊤ U||_2^2 is referred to as the i-th leverage score of U.
Definition C.2 (Leverage score sampling, [Woo14, BSS12]). Given A ∈ R^{n×d} with rank k, let U ∈ R^{n×k} be an orthonormal basis of the column span of A, and for each i let p_i be the squared norm of the i-th row of U divided by k, so that k·p_i is the i-th leverage score of U. Let β > 0 be a constant, and let q = (q_1, ..., q_n) denote a distribution such that q_i ≥ βp_i for each i ∈ [n]. Let s be a parameter. Construct an n × s sampling matrix B and an s × s rescaling matrix D as follows. Initially, B = 0^{n×s} and D = 0^{s×s}. For each shared column index j of B and D, independently and with replacement, pick a row index i ∈ [n] with probability q_i, and set B_{i,j} = 1 and D_{j,j} = 1/√(q_i s).
We denote this procedure Leverage score sampling according to the matrix A.
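A direct numpy rendering of this procedure (computing exact scores via an SVD, which is precisely what the streaming algorithm must avoid) might look as follows; the helper name and the particular choice of q (valid for β ∈ (0, 1]) are ours.

import numpy as np

def leverage_score_sampling(A, k, s, beta, rng):
    U = np.linalg.svd(A, full_matrices=False)[0][:, :k]  # orthonormal basis
    p = np.sum(U ** 2, axis=1) / k             # p_i = ||e_i^T U||_2^2 / k
    q = beta * p + (1.0 - beta) / A.shape[0]   # a distribution with q_i >= beta*p_i
    B = np.zeros((A.shape[0], s))
    D = np.zeros((s, s))
    for j in range(s):                         # sample with replacement
        i = rng.choice(A.shape[0], p=q)
        B[i, j] = 1.0
        D[j, j] = 1.0 / np.sqrt(q[i] * s)
    return B, D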
However, approximating these scores is highly non-trivial, especially in the streaming setting. Fortunately, it suffices to compute so-called generalized leverage scores, i.e., the leverage scores of a proxy matrix. We describe the resulting algorithm (Algorithm 2) and the intuition here, and provide the complete analysis later. Definition C.3 (Generalized leverage score). Consider two accuracy parameters α ∈ (0, 1) and δ ∈ (0, 1), and two positive integers q and k with q ≥ k. If there is a matrix E ∈ R^{n×n} of rank q that approximates the row space of M ∈ R^{n×n} to the stated accuracy, then the leverage scores of E are called a set of (1 + α, δ, q, k)-generalized leverage scores of A. If E has an SVD E = UΣV^⊤, where U, V ∈ R^{n×q} have orthonormal columns, then its leverage scores are ℓ_i = ||V_i||_2^2, where V_i is the i-th row of V, for all i ∈ [n]. These scores can be computed more easily. We first need to find such a matrix E. Let S be a subspace embedding matrix (i.e., ||SMx||_2 ∈ (1 ± α)||Mx||_2 for all x ∈ R^n; a sufficiently large matrix with random ±1 entries has this property).
Then E = SM satisfies the requirement in Definition C.3, and thus we can simply use our sketching method to approximate SM and then compute its leverage scores. In Algorithm 2, we use the concatenation of the positive and negative parts of S, since it also satisfies the requirement and empirically has better accuracy than S. The quality of the generalized scores (i.e., α and δ) will depend on the parameter s in the algorithm, which is specified in our final Theorem 5.1.
The scores can then be used for sampling. Let P be a set of columns of M sampled based on these scores (defined in Line 11 of Algorithm 2). It is known that, when the scores are (O(1), 0, q, k)-generalized leverage scores, the span of a P with Ω(q log q) columns will contain a rank-q matrix which provides an O(1)-approximation to M [DMIMW12, BSS12, BW14, CEM+15, SWZ19b]. It is tempting to set q = k to match our final goal of rank-k approximation, but all existing fast methods require q > k. To improve the rank-q result to rank-k, we use adaptive sampling.
Adaptive sampling samples some extra columns from M according to their squared distances to the span of P. For a column M_{*i}, we thus need to use our sketching method to estimate ||M_{*i}||_2^2 − ||Γ_{*i}||_2^2, where Γ_{*i} is its projection onto the span of P. This introduces some additive errors, but they can be handled by thresholding. Let Y be the sampled columns. Adaptive sampling ensures that there is a good rank-k approximation in the span of Y ∪ P as long as we have sampled sufficiently many columns. To obtain our final rank-k approximation, it suffices to project M onto the span of Y ∪ P and compute the top k singular vectors. The projection can be done by sketching, and the errors are, again, small.
C.2 Proof of Theorem 5.1
Recall that there are three steps in computing the top singular vectors (see Algorithm 2):
• Compute the generalized leverage scores and sample a set P according to the scores;
• Use adaptive sampling to get a set Y;
• Project to the span of Y and compute the approximate solution there.
Below we present the complete proofs for each step.
For simplicity, we use the following notion.
Definition C.4. We say that the span of P contains a (1 + ε, ∆)-approximation subspace for M if there exists C such that ||M − PC||_F^2 ≤ (1 + ε)||M − [M]_k||_F^2 + ∆.
C.3 Sampling by Generalized Leverage Scores
First, recall the definition of generalized leverage scores and a related property from [BLS+16].
Lemma C.5 (Lemma 2 in [BLS+16]). Suppose 0 < k ≤ q ≤ m ≤ n, α > 0, ∆ > 0, and A ∈ R^{n×n}. Let B ∈ R^{n×b} be b = O(α^{−2} q log q) columns sampled from A according to a set of (1 + α, ∆, q, k)-generalized leverage scores of A. Then with probability at least 0.99, the column span of B ∈ R^{n×b} contains a rank-q (1 + 2α, 2∆)-approximation subspace for A; that is, there exists C ∈ R^{b×n} such that ||A − BC||_F^2 ≤ (1 + 2α)||A − [A]_k||_F^2 + 2∆.
We also need the following result about subspace embeddings.
Lemma C.6 (Approximate Matrix Product). For any fixed A ∈ R^{n×n} and B ∈ R^{n×k}, a sketching matrix S with Ω(ε^{−2}) rows satisfies ||A^⊤S^⊤SB − A^⊤B||_F ≤ ε||A||_F ||B||_F with constant probability.
We are going to show that in Algorithm 2, the span of P contains a good approximation subspace. Intuitively, E = R·M ∈ R^{2s×n} approximates the row space of M ∈ R^{n×n}, and Ẽ ∈ R^{2s×n} approximates E ∈ R^{2s×n}, so by the definition, the leverage scores {ℓ_i} of Ẽ ∈ R^{2s×n} are generalized leverage scores of M. The conclusion then follows from Lemma C.5. Formally, we have the following lemma.
Lemma C.7 (Sampling by leverage scores). Let s = O(k log k) and d_1 = O(k log^2 k). Recall that P ∈ R^{n×d_1} is the matrix sampled using the leverage scores of (S · M) ∈ R^{s×n}, as constructed in Line 11 of Algorithm 2.
There exists a matrix S ∈ R^{s×n} such that there exists C satisfying ||M − PC||_F^2 ≤ O(1)·||M − [M]_k||_F^2 + ∆_1, where ∆_1 = O(ε^2/s)||M||_{1,2}^2.
Proof. First, s is large enough that S ∈ R^{s×n} is a 0.1-subspace embedding matrix for subspaces of dimension k; see [BLS+16, Woo14]. It is then known that there exists Z ∈ R^{n×s} satisfying Eq. (2). Let E = RM ∈ R^{2s×n}. Then the first term can be bounded, where the second step follows from Eq. (2) and the definition E = RM ∈ R^{2s×n}.
Consider the second term, where in the last step we use X = [Z, Z] ∈ R^{n×2s}. Hence we have the stated bound, where the first step follows from the guarantee on our sketching method in Theorem 4.2, and the second step follows from the construction of R, i.e., the range of each entry of the CountSketch matrix. By Lemma C.6, we can rewrite Z accordingly. Putting it all together, the resulting bound satisfies the definition of generalized leverage scores, and the statement then follows from Lemma C.5.
C.4 Adaptive Sampling
Lemma C.8. There exists C such that ||M − Y C||_F^2 ≤ (1 + ε)||M − [M]_k||_F^2 + ∆_1 + ∆_2, where ∆_1 is defined as in Lemma C.7 and ∆_2 = O(ε√d_1 + ε^2 d_1)||M||_F^2.
Proof. If the p_i's are larger than a constant times the true squared distances s_i, then the statement follows from Theorem A.5. So consider the difference between the estimate s̃_i and s_i.
Let Γ = Q_p^⊤ M ∈ R^{d_1×n}, where Q_p is obtained from the QR-decomposition in Line 13 of Algorithm 2.
By our guarantee in Theorem 4.2, the entrywise estimation error is bounded, where the last inequality follows since the (Q_p)_j's are basis vectors of unit length. Consequently, where the third step follows from the triangle inequality and the last step follows from Eq. (3). Moreover, where the first step follows from the triangle inequality, the second step follows from (4), the third step follows from the Cauchy–Schwarz inequality, and the fourth step follows since Γ_i = Q_p^⊤ M_i ∈ R^{d_1} and Q_p ∈ R^{n×d_1} has orthonormal columns. Therefore, suppose that the algorithm sets the threshold η accordingly; then, where the first step follows from the definition of p_i, i.e., p_i = max{s̃_i, η z_i}, together with the assumption η z_i ≤ 2δ_i, the second step follows from the assumption Σ_{i∈C} s_i ≥ 2 Σ_{i∈[n]} δ_i, the third step uses (5), and the fourth step holds because s_i ≥ 0 and C ⊂ [n]. So we are done in this case.
In the other case, when Σ_{i∈C} s_i < 2 Σ_{i∈[n]} δ_i, we have a direct bound, where the first step follows from δ_i = |s_i − s̃_i| and the triangle inequality, the second step uses the construction of the set C, and the third step uses the assumption Σ_{i∈C} s_i < 2 Σ_{i∈[n]} δ_i. This means that Γ is close to M, and thus [Γ]_k (the best rank-k approximation to Γ) is the desired approximation in the span of P (and thus in the span of Y, since P ⊆ Y). This completes the proof.
C.5 Computing Approximation Solutions
Lemma C.9. Let d_1 = O(k log^2 k) and d_2 = O(k/ε). There is an algorithm that outputs a matrix L̃ whose error exceeds that of the best rank-k approximation in the span of Y by at most an additive term ∆_3 (defined below).
Proof. Since Q ∈ R^{n×(d_1+d_2)} has orthonormal columns, Q^⊤Q = I_{d_1+d_2}. We need the following auxiliary result: for any A ∈ R^{(d_1+d_2)×n}, Eq. (6) holds; this is because of the matrix Pythagorean identity, where the second step uses Q^⊤Q = I_{d_1+d_2}, so (6) simply follows from Theorem A.4. We also need a second result: for any A ∈ R^{(d_1+d_2)×n}, Eq. (7) holds, again because Q^⊤Q = I_{d_1+d_2}.
Let X ∈ R^{n×n} denote the matrix Y C ∈ R^{n×n} from Lemma C.8. Recall that Q_y is obtained from the QR-decomposition of Y ∈ R^{n×(d_1+d_2)}, so we can write Y = Q_y R_y. For simplicity, let Q denote Q_y and R denote R_y. Then we have a chain of (in)equalities, where the first step uses (6) with A = [Q^⊤M]_k, the second step uses (7), the third step uses the facts that rank(RC) ≤ rank(QRC) = rank(Y) ≤ k and that [Q^⊤M]_k ∈ R^{(d_1+d_2)×n} is the best rank-k approximation to Q^⊤M ∈ R^{(d_1+d_2)×n}, the fourth step again uses Eq. (7), the fifth step uses Eq. (6) with A = RC, and the last step uses QRC = Y C = X ∈ R^{n×n}. Therefore, where the first step follows from (8) and the second step follows from Lemma C.8. Let W ∈ R^{(d_1+d_2)×k} denote the top k singular vectors of Q^⊤M ∈ R^{(d_1+d_2)×n}. Since W̃ ∈ R^{(d_1+d_2)×k} are the top k singular vectors of Π ∈ R^{(d_1+d_2)×n}, we have the following, where the first step uses the fact that W̃W̃^⊤Π is the best rank-k approximation of Π, the third step uses the fact that ||AB||_F ≤ ||A||_2 · ||B||_F for any matrices A, B, together with WW^⊤Q^⊤M = [Q^⊤M]_k (since W are the top k singular vectors of Q^⊤M), the fourth step uses ||AA^⊤ − I||_2 ≤ 1 for every orthonormal matrix A ∈ R^{(d_1+d_2)×k} (since (AA^⊤ − I)^2 = I − AA^⊤), the fifth step uses the guarantee in Theorem 4.2, and the sixth step follows since Q is an orthonormal matrix with d_1 + d_2 columns.
We now bound the error using the above two claims. Note that L̃ = QW̃ ∈ R^{n×k}; hence by (7) the error decomposes into two parts: the error from the oblivious sketching matrix (Lemma C.7) and the error from adaptive column sampling (Lemma C.9). Therefore, where the first step uses L̃ = QW̃ and (6) with A = W̃L̃^⊤, the second step uses (12) and (10), the third step uses (6) with A = [Q^⊤M]_k, the fourth step uses Lemma C.8, and the last step is the definition of ∆_3.
C.6 Main result
Theorem C.10. There exists an algorithm (procedure LowRankApprox in Algorithm 2) that, with the parameter settings in Table 1, runs in query time nk·poly(log n, 1/ε) and space O(nk/ε^2), and outputs a matrix L̃ ∈ R^{n×k} for which the guarantee of Theorem 5.1 holds with probability at least 9/10.
Proof of guarantee.
The chain of bounds follows, where the first step uses Lemma C.9 and the third step uses the definitions d_1 = O(k log^2 k) and d_2 = O(k/ε).
Proof of time and space. The largest matrix we ever need to store during the process has n × (d_1 + d_2) = O(ε^{−1}nk log^2 k) entries. The space needed by LogSum is bounded by O(ε^{−2}nk) by Theorem 4.2, so the overall space used is at most O(ε^{−2}nk).
Since we only call LogSum 4 times in the whole process, the query time bound follows from Theorem 4.2.
Notice that ||M||_{1,2} ≥ ||M||_F ≥ ||M − [M]_k||_F, so we can rescale ε to obtain Theorem 5.1.
D.1 rank(A) ≫ rank(log A)
In this section, we provide a matrix A ∈ R^{n×n} with rank(A) = n but rank(log A) = 1.
Recall the definition of a Vandermonde matrix.
Definition D.1. An m × n Vandermonde matrix is defined by A_{ij} = α_i^{j−1}, i.e., the i-th row is (1, α_i, α_i^2, ..., α_i^{n−1}). Theorem D.2. Let A denote an n × n Vandermonde matrix with positive α_i satisfying α_i ≠ α_j for all i ≠ j. Then rank(A) = n and rank(log(A)) = 1.
Proof. By the definition of a Vandermonde matrix, we can compute the determinant of A as det(A) = Π_{1≤i<j≤n}(α_j − α_i). Since α_j ≠ α_i for all j ≠ i, we have det(A) ≠ 0, which implies rank(A) = n. By the definition of log(A), we have log(A)_{ij} = (j − 1) log(α_i), i.e., log(A) is the outer product of the vector (log α_1, ..., log α_n)^⊤ and (0, 1, ..., n − 1). Therefore rank(log(A)) = 1.
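This can be checked numerically on a small instance (a sketch, using distinct α_i > 1 of our choosing):

import numpy as np

n = 6
alpha = np.linspace(1.5, 6.5, n)                 # distinct positive alphas
A = alpha[:, None] ** np.arange(n)[None, :]      # A_ij = alpha_i^(j-1)
print(np.linalg.matrix_rank(A))                  # -> 6  (rank n)
print(np.linalg.matrix_rank(np.log(A)))          # -> 1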
D.2 rank(A) ≪ rank(log A)
In this section, we provide a matrix A ∈ R^{n×n} with rank(A) = n/2 but rank(log A) = n.
Theorem D.3. There is a matrix A ∈ R^{n×n} such that rank(A) = n/2 and rank(log(A)) = n.
Proof. Let B denote a 2 × 2 matrix with B_{ij} = e^{i+j}, i.e., with rows (e^2, e^3) and (e^3, e^4). It is not hard to see that rank(B) = 1 (each row is a multiple of (e^2, e^3)) and rank(log(B)) = 2 (log(B) has rows (2, 3) and (3, 4), with determinant −1). We define the matrix A by copying B n/2 times onto A's diagonal blocks. Then rank(A) = n/2, while rank(log(A)) = n.
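A small numerical check of this construction, under our reading that the entries outside the diagonal blocks are zero and that the entrywise logarithm maps them to 0 (as it would with f(x) = log(|x| + 1)):

import numpy as np

n = 8
B = np.exp(np.array([[2.0, 3.0], [3.0, 4.0]]))   # B_ij = e^(i+j), rank 1
A = np.kron(np.eye(n // 2), B)                   # B copied on diagonal blocks
logA = np.where(A > 0, np.log(np.where(A > 0, A, 1.0)), 0.0)  # log(0) := 0
print(np.linalg.matrix_rank(A))                  # -> 4  (rank n/2)
print(np.linalg.matrix_rank(logA))               # -> 8  (rank n)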
Due to the following fact, copying a rank-1 matrix several times cannot give a better bound than Theorem D.3: if A has positive entries and rank(A) = 1, then rank(log(A)) ≤ 2.
Proof. Without loss of generality, assume A can be written as A = αβ^⊤ for vectors α, β with positive entries, so that A_{ij} = α_i β_j. Let B denote log(A); then it is easy to see that B_{i,j} = log(α_i) + log(β_j). Therefore B can be decomposed as B = log(α) · 1^⊤ + 1 · log(β)^⊤. Thus rank(B) ≤ 2.
E Application of f -Matrix Product Sketch in Linear Regression
In this section, we consider the application to linear regression. Linear regression is a fundamental problem in machine learning, and there is a long line of work using sketching/hashing ideas to speed up the running time [CW13, MM13, PSW17, LHW17, ALS+18, DSSW18, SWZ19b, CWW19].
Recall that for a matrix M ∈ R^{n×d}, we use log(M) to denote the n × d matrix whose (i, j) entry is log(M_{i,j}).
Theorem E.1 (Linear regression). Given a matrix M ∈ R^{n×d} and a vector b ∈ R^n, where n ≫ d, let A = log M ∈ R^{n×d}. There is a one-pass algorithm (Algorithm 4) that uses poly(d, log n, 1/ε) space, receives the updates of M in the stream, and outputs a vector x̃ ∈ R^d such that ||Ax̃ − b||_2 ≤ (1 + ε) min_{x∈R^d} ||Ax − b||_2 + τ holds with probability at least 9/10, where τ = ||b||_2/poly(d/ε).
4: Choose a sketching matrix S ∈ R^{s×n}
5: S̃A ← SketchLog(S, M)
6: x̃ ← argmin_{x∈R^d} ||S̃Ax − Sb||_2
7: return x̃
8: end procedure
Proof. Without loss of generality, we assume that ||A||_2 = 1 in the proof.
Let x* ∈ R^d denote the optimal solution of this problem, i.e., x* = argmin_{x∈R^d} ||Ax − b||_2, with OPT = ||Ax* − b||_2. By the subspace embedding property of the sketching matrix, the solution x′ = (SA)†Sb satisfies ||Ax′ − b||_2 ≤ (1 + ε)OPT. Let x̃ ∈ R^d denote the optimal solution of min_{x∈R^d} ||S̃Ax − Sb||_2.
This means x̃ = (S̃A)†Sb. We then have a bound on ||Ax̃ − b||_2, where the first step follows from the triangle inequality, the second step follows from ||Ax′ − b||_2 ≤ (1 + ε)OPT, and the third step follows from the definitions of x′ and x*. Now the question is how to bound the term C_1 in Eq. (13). We can upper bound it as follows, where the third step follows from ||A||_2 = 1 and the last step follows from ||Sb||_2 = O(1)·||b||_2. Next, we show how to bound the term C_2: using Lemmas F.1, F.3, and F.2, we obtain the desired bound, where the second step follows from Lemma F.1, the third step follows from Lemma F.3, the fourth step follows from the subspace embedding property of S, the fifth step follows from the size of S together with Lemma F.2, and the last step follows from ||A||_2 = 1.
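For intuition, here is a toy sketch-and-solve run for this regression setup; we use the exact A = log(M) in place of the streamed SketchLog output, so it only illustrates the effect of the sketching matrix S, and all names and sizes are our own.

import numpy as np

rng = np.random.default_rng(3)
n, d, s = 5000, 10, 400
M = np.exp(rng.standard_normal((n, d)))          # positive entries for log
b = rng.standard_normal(n)
A = np.log(M)

S = rng.standard_normal((s, n)) / np.sqrt(s)     # Gaussian sketching matrix
x_sk = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]
x_opt = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(A @ x_sk - b) / np.linalg.norm(A @ x_opt - b))  # ~1+eps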
F Tools
In this section, we introduce several basic perturbation results.
[Wed73] presented a perturbation bound for the Moore–Penrose inverse in the spectral norm.
Proof. Given the definition of B, we can rewrite BB^⊤ as a sum of four terms. Bounding the cross terms, we obtain AA^⊤ − (εσ_min(A)/3)·I ⪯ BB^⊤ ⪯ AA^⊤ + (εσ_min(A)/3)·I, which yields the claimed bound. This completes the proof.
Theorem F.4 (Generalized rank-constrained matrix approximations, Theorem 2 in [FT07]). Given matrices A ∈ R^{n×d}, B ∈ R^{n×p}, and C ∈ R^{q×d}, let the SVD of B be B = U_B Σ_B V_B^⊤ and the SVD of C be C = U_C Σ_C V_C^⊤. Then X̂ = B†[U_B U_B^⊤ A V_C V_C^⊤]_k C† minimizes ||A − BXC||_F over all X of rank at most k, where [U_B U_B^⊤ A V_C V_C^⊤]_k is of rank at most k and denotes the best rank-k approximation to U_B U_B^⊤ A V_C V_C^⊤.
G Complete Experimental Results
To demonstrate the advantage of our proposed method, we complement the theoretical analysis with an empirical study on synthetic and real data. We consider the low rank approximation task with f(x) = log(x) and f(x) = √x, vary the amount of space used by our method, and compare the errors of the solutions obtained to the optimum. We then provide additional experiments testing other aspects of the method, such as robustness to the parameter values.
Setup. Given a data stream in the form of (i_t, j_t, δ_t), we use the algorithm in Section 5 to compute the top k = 10 singular vectors L̃, and then compare the error of this solution to the error of the optimal solution (i.e., the true top k singular vectors). Let A denote the accumulated matrix, M = f(A) the transformed one, and U the top k singular vectors of M. The evaluation criterion is the error ratio ||M − L̃L̃^⊤M||_F / ||M − UU^⊤M||_F. Clearly, the error ratio is at least 1, and a value close to 1 demonstrates that our solution is nearly optimal.
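Concretely, the ratio can be computed as in the following sketch (the function name is ours); L_tilde and U are n × k with orthonormal columns.

import numpy as np

def error_ratio(M, L_tilde, k):
    U = np.linalg.svd(M, full_matrices=False)[0][:, :k]  # true top-k vectors
    err = np.linalg.norm(M - L_tilde @ (L_tilde.T @ M), "fro")
    opt = np.linalg.norm(M - U @ (U.T @ M), "fro")
    return err / opt                                     # always >= 1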
Besides demonstrating the effectiveness, we also examine the tradeoff between the solution quality and the space used. Recall that there is a parameter in the sketching methods controlling the amount of space used (line 20 in LogSum and line 6 in PolySum). We vary its value, and set the parameters in the other steps of our algorithm so that the amount of space used is dominated by that of the sketching. We then plot how the error ratios change with the amount of space used. The plotted results are the average of 5 runs; the variances are too small to plot.
Finally, we also report the results of a baseline method: sample, uniformly at random, a subset T of columns from A, and then compute the top k singular vectors of f(T). For fair comparison, the space occupied by the sampled columns is similar to the space required by our algorithm. Since our algorithm is randomized, the expected amount of space occupied is used to determine the sample size of the baseline, and is also used for the plots. In the experiments, the actual amount occupied is within about 10% of the expected value.
Implementation and Parameter Setting. In our algorithm for low rank approximation, an FJLT matrix S is used [Ach03, AC06]. In the adaptive sampling step, instead of setting the threshold η, for simplicity we let q_i = max{s̃_i, 0} and set p_i = q_i + Σ_i q_i / n.
For the sketching subroutine, instead of specifying the desired ε, we directly set the size of the data structure, so as to examine the tradeoff between space and accuracy. We then set s = d_1 = d_2 and choose their value so that the space used in the corresponding step is at most that used by the sketch method. In particular, we set them equal to the size upper bounds in line 20 of LogSum or line 6 of PolySum.
G.1 Synthetic Data
Data Generation. The following data sets are generated. Note that although we do not provide a theoretical analysis for f(x) = √x, one could follow the analysis for f(x) = log(x) to obtain similar guarantees, and we also generate synthetic data to test our method in this case.
1. LogData: This is for the experiments with f(x) = log(x). First generate an n × n matrix M whose entries are i.i.d. Gaussians. To break the symmetry of the columns, scale the norm of the i-th column to 4/i. Finally, generate the matrix A with A_ij = exp(M_ij). Each entry A_ij is divided equally into 5 updates (i, j, A_ij/5), and all the updates arrive in a random order. The size n can be 10000, 30000, and 50000.
2. SqrtData: This is for the experiments with f(x) = √x. The data and update stream are generated similarly to LogData, except that A_ij = M_ij^2. We tested sizes n = 10000 and n = 30000.
Results. Figure 2 shows the results on the synthetic data LogData, and Figure 3 shows those on SqrtData. In general, the error ratio of our method is much better than that of the uniform sampling baseline: ours is close to 1 while that of uniform sampling is about 4. The figures also show that our method greatly reduces the amount of space needed, by orders of magnitude, while barely compromising the solution quality, and this advantage is more significant on larger data sets. For example, when n = 50000, using space of about 5% of the matrix size leads to only about 5% extra error over the optimum. Finally, we note that these observations are consistent for both f(x) = log(x) and f(x) = √x.
G.2 Real Data
We evaluate our method on real world data from the NLP application of word embedding, which is a motivating example for our approach. Our method with f(x) = log(x + 1) is used. The parameters are set in a similar way as for the synthetic data.
Data Collection. The data set is the entire Wikipedia corpus [Wik12], consisting of about 3 billion tokens. Details can be found in the appendix; only a brief description is provided here. The matrix to be factorized is M with M_ij = p_j log(N_ij N / (N_i N_j)), where N_ij is the number of times words i and j co-occur in a window of size 10, N_i is the number of times word i appears, N is the total number of words in the corpus, and p_j is a weighting factor depending on N_j, set to p_j = max{1, (N_j/N_10)^2}, which puts larger weights on more frequent words since they are less noisy [PSM14, LG14]. Note that the N_i's and N can be computed easily, so essentially the only dynamically updated part is log N_ij. The data stream is generated as a window of size 10 slides along the sentences in the corpus and we collect the co-occurrence counts of the word pairs in the window. Here the count is weighted: if two words appear at a distance of t inside the window, then the count update value is 1/t, as in [PSM14]. We consider the matrix for the most frequent n words, where n = 10000, 30000, and 50000.
Figure 2: Error ratios on the synthetic data LogData. The x-axis is the ratio between the amount of space used by the algorithms and the total amount of space occupied by the data matrix. The y-axis is the ratio between the error of the solutions output by the algorithms and the optimal error.
Figure 3: Error ratios on the synthetic data SqrtData. The x-axis and y-axis are as in Figure 2.
Figure 4: Error ratios on the real data. The x-axis and y-axis are as in Figure 2.
Figure 5: Error ratios when using different sample sizes in the algorithm. The x-axis and y-axis are as in Figure 2.
Results. Figure 4 shows the results on the real data. The observations are similar to those on the synthetic data: the errors of our method are much smaller than those of the baseline and are close to the optimum, and the method is very space efficient without increasing the error much. These results again demonstrate its effectiveness.
G.3 The Effect of the Sample Size
In our algorithm we have parameters s = d_1 = d_2 that determine the sample sizes in different steps of the algorithm. In the previous experiments, we set them equal to the size upper bounds in line 20 of LogSum or line 6 of PolySum. Here we consider varying their values. In particular, we use the Wikipedia data with n = 10000 and f(x) = log(x), and set the size upper bound of the sketch method to 200. We then set s = d_1 = d_2 = γ and vary the value of γ.
Results. Figure 5 shows the results with various sample sizes. Smaller sample sizes lead to worse errors, as expected, but overall the results are quite stable across different sizes, demonstrating the robustness of our method to these parameters. We also observe that beyond a certain value, increasing the sample size does not further reduce the error, which should be due to the approximation error introduced by the sketch. The results suggest that in general the sample size should be set approximately equal to the size upper bound in the sketch method.
The Intersection of the COVID-19 Pandemic and the 2021 Heat Dome in Canadian Digital News Media: A Content Analysis
During the 2021 Heat Dome, 619 people in British Columbia died due to the heat. This public health disaster was made worse by the ongoing COVID-19 pandemic. Few studies have explored the intersection of heat with COVID-19, and none in Canada. Considering that climate change is expected to increase the frequency of extreme heat events, it is important to improve our understanding of intersecting public health crises. Thus, this study aimed to explore media-based public health communication in Canada during the COVID-19 pandemic and the 2021 Heat Dome. A qualitative content analysis was conducted on a subset of media articles (n = 520) related to the COVID-19 pandemic which were identified through a previous media analysis on the 2021 Heat Dome (n = 2909). Many of the articles provided conflicting health messages that may have confused the public about which health protective actions to take. The articles also showed how the COVID-19 pandemic may have exacerbated the health impacts of the 2021 Heat Dome, as pandemic-related public health measures may have deterred people away from protecting themselves from heat. This study, which provides novel insight into the prioritization of public health messaging when an extreme heat event occurs concurrently with a pandemic, supports the need for consistent heat health guidance.
Introduction
The COVID-19 pandemic has had devastating impacts on the global population and health systems, uncovering vulnerabilities and weaknesses in health services, including health inequities. During the summer of 2021, the COVID-19 pandemic intersected with the 2021 Heat Dome, a prolonged period of abnormally high temperatures that covered a large geographic area of western Canada and the United States, exposing many people to hazardous heat and straining healthcare services [1]. In Canada alone, this extreme heat event (EHE) killed 619 people in British Columbia (B.C.) [2] and an estimated 66 more in Alberta [3] between 24 June and 12 July 2021. In comparison, 18 individuals in British Columbia and 26 individuals in Alberta died of COVID-19 during roughly the same period (24 June 2021 to 21 July 2021) [4,5].
The combination of extreme heat and COVID-19 had devastating collateral effects, further compromising public health and severely straining emergency health services.
In response to the COVID-19 pandemic, health authorities across Canada implemented various public health measures to limit the transmission of the virus, including physical distancing, masking, lockdowns, vaccinations, and testing centers [6]. However, some of these actions may have inadvertently conflicted with best practices to protect people from extreme heat (e.g., access or capacity restrictions for cooling centers to respect physical distancing mandates) [7]. Additionally, COVID-19 restrictions may have discouraged people from accessing cooling centers or seeking emergency support due to the fear of being exposed to the virus [8]. Therefore, public health interventions for COVID-19 may have disrupted health measures typically implemented in response to extreme heat.
With climate change projections indicating a rise in the frequency of EHEs [9] and the inevitability of future pandemics globally [10], it is vital to understand the impacts of intersecting public health crises on public health and healthcare systems. In turn, strategies can be developed to improve resilience and reduce the disruption of health and social support systems in future EHEs that occur concurrently with an outbreak or pandemic. Existing investigations into these compounding impacts have primarily explored discrete impacts (e.g., hospitalizations, occupational health [11,12]), but have not examined how the combined effect of systemic social vulnerabilities and pandemic-related factors contributes to extreme heat risk (Table 1). Due to the limited sources of data available for exploring the combined impacts of these intersecting public health crises, we sought to address these knowledge gaps by examining references to COVID-19 in digital media articles on the 2021 Heat Dome in Canada published between June 2021 and February 2022. As the media serves as a significant information source for the public [41] and a powerful medium for raising public awareness [42], analyzing media articles is one qualitative method that can help capture more details about the compounding health impacts of two or more health crises. Further, exploring news coverage can provide a unique medium for understanding how intersecting crises are presented to the public, as journalists draw on a wide array of sources and perspectives [43–45]. Although several analyses of media coverage of individual EHEs have been undertaken globally [43,46,47], few have been conducted within the Canadian context [46], and none in relation to an EHE and COVID-19.
Thus, by harnessing the wealth of media articles circulated to the public during and after the 2021 Heat Dome, this study presents a unique opportunity to evaluate the intersecting impacts of an EHE with an ongoing pandemic in Canada. Findings from this study will assist in providing valuable guidance to public health authorities related to risk prioritization of public health messaging, health protection planning, and identifying vulnerabilities in the public health response to dual crises. Further, the findings have applications beyond the intersection strictly of COVID-19 and heat, as they provide a greater theoretical understanding of the combined impacts of two health crises.
Materials and Methods
This study is part of a larger research project that examines the portrayal and communication of health risks and impacts associated with the 2021 Heat Dome in Canada within digital news articles [48]. A systematic review was conducted for a thorough analysis of digital media content, which included various types of materials such as newspaper articles, blogs, newsletters, community bulletins, municipal meeting minutes, public health unit posts, radio broadcasts, and television transcripts. The search strategy was developed in consultation with a research librarian, and the final search underwent review by a second librarian before database translation (see Supplementary Material for more details).
The search strategy included eight academic databases (Medline, Embase, C.A.B. Abstracts/Global Health, Agricola, FSTA, EconLit, PsycINFO, and Scopus) and five subscription news databases (ProQuest Canadian Major Dailies, Business Source Elite, NewsDesk, Factiva, and Eureka). The scope of the search was limited to articles in English and French published within Canada between 1 June 2021 and 26 February 2022. The objectives of the search strategy were to (i) minimize reliance on prestige press (i.e., The Globe and Mail, Toronto Star, National Post) and limit outlet bias [49] and (ii) capture all news articles published following the forecasted extreme heat alert, during the heat event, and several months after its conclusion, which included subsequent weather events that were made more likely due to the 2021 Heat Dome (e.g., wildfires) [50]. Content from social media platforms like Twitter and Facebook, as well as materials without transcription, such as audio and video-only content, were excluded.
In addition to the database searches, the search strategy included a list of targeted websites belonging to public and non-profit organizations for each province and territory in Canada, including national sites. Given the geographic impact of the 2021 Heat Dome, detailed web searching in the western provinces (i.e., British Columbia, Alberta, Saskatchewan, and Manitoba) was prioritized, focusing on agencies related to health, environment, agriculture, infrastructure, housing, labor, safety, hydro, school boards, municipalities, and Indigenous communities. For the remaining provinces and territories (n = 9), the search was simplified to cover health, agriculture, housing, and labor. For each targeted website (n = 997), the search terms "heat" and "2021" were entered into the search function. When a search function was not available, the authors (E.J.T. and N.G.) manually searched the website by targeting the homepage, news tabs, newsletters, and publication/resource tabs. Additionally, Advanced Google searches were performed for each province and territory to ensure coverage of open-access online news sources, using the following search string: ("location" AND "heat wave" OR "heat dome" OR "extreme heat" AND "2021"). The Google searches were continued until the following notice was reached: "in order to show you the most relevant results, we have omitted some entries very similar to the X already displayed."
In the larger research project, the authors created a codebook of concepts, positive indicators of a given code, and examples of the dataset. An initial round of screening was then performed to identify relevant articles captured by the complete search strategy (n = 152,597). All relevant articles (n = 5357) were uploaded to Zotero (Release 6.0, Corporation for Digital Scholarship), a reference manager software, and NVivo (Release 1.6.2, QSR International), a qualitative data analysis software. A trial coding of 500 randomly selected articles (~10% of relevant articles) was completed independently by two authors (E.J.T. and N.G.). A coding comparison query was then performed and revealed that the authors achieved a high percentage of agreement and a kappa coefficient of 0.64 for this sub-analysis, indicating a "good" strength of agreement [51]. The remaining articles (n = 4857) were divided evenly and reviewed and coded by the authors (E.J.T. and N.G.). Due to the size of the dataset, full-text review and deduplication co-occurred with coding. All articles included in the larger project's analysis (n = 2909) were then reviewed to identify references to extreme heat and COVID-19 as information-rich cases for this secondary analysis. Terms indicative of the COVID-19 pandemic were determined based on a pre-set list of Medical Subject Headings controlled vocabulary thesaurus terms, including SARS-CoV-2, COVID-19, COVID, and pandemic. The resulting dataset included 520 articles published in Canada between June 2021 and February 2022 (18% of the larger dataset). After coding was complete, the characteristics of the included documents and extracted data (coded findings) were analyzed using a series of NVivo query functions (e.g., date of publication and word frequency). Next, a qualitative content analysis method was used to describe the meaning of the data. The authors then met to discuss the data to reach an agreement on the broader themes and concepts.
Results
Five main themes were identified within the articles that mentioned both COVID-19 (COVID-19: n = 1709; pandemic: n = 596; SARS-CoV-2: n = 7) and the Heat Dome (Supplementary Table S1). Quotations from the analyzed news articles are used throughout to provide evidence and supplement the themes and concepts identified. Table S1 in the Supplementary Materials provides additional details for each theme, including concepts and their related counts, positive indicators (keywords) related to each concept, as well as additional quotes from the media.
Communicating the Burden of Multiple Intersecting Public Health Crises
Many articles conveyed the burden of being faced with multiple intersecting health crises. For example, one article noted "first COVID and now this" [52], and another described the series of cascading natural hazards "from the global pandemic to opioid crisis to heat dome to atmospheric rivers" [53]. The media frequently mentioned the various health crises in the context of the 2021 provincial election in British Columbia, emphasizing their influence on voters and election outcomes. In other cases, these references were used to communicate opinions on emergency preparedness in general. For example, "we are still very much in the throes of dealing with the last crisis, even as the next crisis is upon us-the fallout of climate change. And like the COVID-19 pandemic, we are only partially prepared" [54]. In contrast, other articles emphasized the demand on the health system, infrastructure, and specific individuals/positions affected by the intersecting crises, as illustrated by the following quotes: "officials are balancing COVID restrictions with the need for people to stay cool" [55] and "the City of Vancouver has implemented additional measures to protect residents facing compounded challenges of COVID-19 and the heat" [56].
The media articles also communicated the burden of the two crises on health workers and the health system. The texts frequently referred to workers being "burdened by the crushing COVID-19 pandemic and record-breaking heat wave" [57], as "many health professionals were already working seven days a week on COVID-19... The heat wave on top of this [was] almost the straw that could break the camel's back" [58]. A few articles reported that staff were shifted from non-emergency care to emergency services to accommodate the demand on the health system. For example: "nurses are being moved into the hospital's emergency department to help people deal with the current heat wave, the heavy smoke from wildfires and COVID" [59]. Concerns were raised about the prolonged wait time for emergency care. For example, one health authority in British Columbia asked that "people who don't need emergency care to visit walk-in clinics and primary care centres instead of emergency rooms, as hospitals deal with a spike in patient numbers" [60]. Emergency medical services across British Columbia reported being extremely strained by three-fold spikes in call volumes during the EHE. On a day when "Vancouver Fire attended 365 calls, including cardiac emergencies, heat emergencies and overdoses" [61], some firefighters spent up to 11 hours with a patient waiting for an ambulance.
The mental health impacts on the public from multiple intersecting public health crises were frequently reported (n = 47) by the media. For example, one article noted that Canadians had "been dealing with a lot of grief lately, from the pandemic and opioid health crises... to a deadly heat wave" [62]. The president of the Paramedic Association of Canada added that "extreme weather events, like last summer's deadly heat wave in BC, have highlighted the need for more staff and better mental health supports for first responders" [63]. In addition to the physically demanding working conditions faced during extreme heat, one paramedic described how mentally taxing the job can become: "You're wearing a respirator, you're wearing a face shield, of course, you're wearing plastic gloves, and a plastic gown. Even in regular weather, that's very taxing, but imagine during the heat dome what it would have been like... That sort of plays into your mind, too, because people are upset-'Come on, hurry up, help'-but you've got to protect yourself, too" [64].
Many articles also discussed the mental health toll in the agricultural industry. For example, one farmer said, "the stress of this year is piled on top of the stress from the pandemic" and further commented on being "worried about the mental health of producers [as] this just adds to the stress because there's absolutely nothing you can do. You're watching your crop burn away" [65]. In the restaurant industry, one business owner closed his doors for a week to give himself and his employees a mental health break: "it's not just COVID-19 that's contributed to unprecedented stress levels for himself and his staff, but also... this summer's deadly heatwave and yet another harrowing wildfire season" [66].
Prioritizing Crises and Conflicting Public Health Messaging
The media articles showed conflicting public health messaging and differences in which crises were more prominently covered. While public health authorities were recommending how the public could access cool indoor spaces, a series of articles also suggested that people gather outdoors, despite the extreme heat, terming it "much safer than gathering indoors" [55]. In contrast, other articles communicated the opposite, such as: "we would prefer that people avoid the exposure to extreme heat outdoors... We also realize that after a year-and-a-half of COVID restrictions, if people can take advantage of the weather and gather, they will do it. But it is super important to pay attention when it comes to smoke or heat" [67].
In the media, public health messaging often prioritized heat protection over COVID-19, suspending existing infectious disease protocols such as masking and physical distancing. For example, one health agency "clarified that COVID-related occupancy restrictions, physical distancing, and wearing of masks at cooling centres is not required" [68]. Similarly, another public health agency advised "that risks from extreme heat exceed risks from COVID-19" [69], emphasizing that "COVID-19 protocols take [a] back seat during a heat wave" [70]. Some articles also quantified this prioritization with headlines like "global heating surpassed COVID-19 as the existential crisis at the forefront of our thoughts" [71] and through reporting the difference in deaths between COVID-19 and the EHE. For example, "In the week before Canada Day, over 700 people in British Columbia died in Western Canada's record-breaking heat wave-triple the number that would normally occur. In the same period, 10 people in the province died from COVID-19" [72]. Another comparison emerged when "the Vancouver School board decided to close all schools... something not even COVID-19 could convince them to do this year" [71].
Numerous articles cited how prioritizing heat over COVID-19 translated practically into real-time revisions of public health measures to accommodate the difficult circumstances. For example, some reports stated that "while the Extreme Heat Alert is in place, cooling centres will be open, and no one should be denied access to these centres because of concerns about crowding or physical distancing" [73]. Similarly, a few articles cited that "no one should be denied entry to a cooling centre for not wearing a mask" [74] or that "if people [were] wearing a mask and have difficulty breathing, they should remove the mask, whether they are indoors or outside, as wearing a mask may impact thermal regulation during heat events" [75]. These changes to COVID-19 protocols also happened in healthcare settings. For example, "with respect to the removal of masks within healthcare settings, for the duration of the heat wave, we temporarily recommend allowing patients, clients or visitors to remove masks if they feel it is causing difficulty breathing due to the heat" [76]. Lastly, some municipalities responded to the EHE by reactivating water fountains that were previously closed due to COVID-19 protocols. Some media articles raised the "inadequacy" of British Columbia's response to the heat dome in comparison to the province's COVID-19 response [77]. For example, one opinion article conveyed that "the B.C. government was so caught up in celebrating the latest phase of the COVID-19 restart last week that it was caught off-guard by the record-setting heat wave and the early start of the wildfire season" [78]. Of the 11 reports expressing similar sentiments, 10 of them specifically referred to the Premier of British Columbia's response, noting he was "a bit giddy at the prospect of saying goodbye to the state of emergency and stepping into the third step of [B.C.'s] reopening plan" [79]. Articles criticized the Premier's comments that the government "didn't think of it as catastrophic hot weather. We thought of it as hot weather" and his statements that fatalities are "part of life" and that emergency responses involve an "element of personal responsibility" [80]. One individual, who lost her grandmother due to the extreme heat, expressed her disappointment in the provincial response to the Heat Dome: "After 18 months of clearly communicating the risks around the global COVID-19 pandemic, Gaba feels her government failed her Nana and the others who died when it came to doing the same about the heat" [81].
COVID-19 Exacerbated the Health Impacts of Extreme Heat
Media articles identified that some pandemic-related measures exacerbated heat health impacts during the Heat Dome. For example, "due to COVID-19 protocols, parents at some schools [had to] wait outdoors temporarily, until going inside to see their child cross the stage [for graduation]. Some ceremonies are expected to last through the afternoon when Burnaby's temperatures are predicted to be 42 °C and feel like 48 °C with humidity" [82]. Therefore, by abiding by the COVID-19 restrictions, many individuals may have been exposed to high-risk heat conditions. There were also circumstances where the lack of COVID-19 public health programming in some areas influenced the implementation of heat health mitigation measures. Articles warned of crowding at locations that were not imposing capacity or physical distancing requirements, such as specific cooling centers and air-conditioned public spaces (e.g., libraries, malls, recreation centers). For example, one report cited a City of Regina staff warning "that these places are expected to be busy, and for those worried about COVID-19 to assess the spaces for themselves. If there is a spray pad in your community that you most often frequent, perhaps plan a backup location if you can. If you go there, it's quite crowded, and you might not be as comfortable accessing it. Maybe try the second or third one" [83].
Many articles discussed the requirement for COVID-19 public health measures to be respected during heat mitigation programs and recommended activities. These included: physical distancing (n = 80), masking (n = 71), ventilation and fan use requirements (n = 17), symptom screening at cooling centers and other public spaces (n = 10), and enhanced sanitization/cleaning requirements (n = 8). The media often portrayed the requirement to abide by these measures as a barrier to accessing various facilities and services (e.g., cooling centers, splash pads, and air-conditioned malls). For example, cooling centers were significantly restricted "in accordance with current public health orders" [84], with many offering only 13–15% capacity for people trying to escape the heat (e.g., "downtown centre's capacity [was restricted] to only about 45 people, rather than its normal 300-to-350-person capacity" [85]).
Heat health mitigation measures were reported to be impacted by measures aimed at preventing the transmission of COVID-19. This included the impacts of public health orders on the use of indoor public spaces. For example, one community service provider stated that "the COVID-19 pandemic has added an extra obstacle because capacity restrictions in many buildings mean there are fewer indoor places to go to cool down" [86], including libraries and shopping centers. Some outdoor cooling spaces were also subject to COVID-19 restrictions. For example, "spray parks and splash pads will open but will be subject to capacity restrictions due to COVID-19 restrictions until July 1st" [87]. Considerations related to the COVID-19 pandemic were also embedded in messages about wellness check procedures for heat exposure monitoring: "If someone experiences these symptoms, move them immediately to cooler conditions, and have them rest and drink a cool beverage. Wear a mask and make sure you wash your hands before and after helping a loved one you do not live with. Make sure someone from their household can stay with them, and if they do not immediately feel better, seek medical attention" [88].
The COVID-19 pandemic also added barriers to the public's access to water for hydration and heat stress prevention. Several articles discussed the implications of reduced potable water access for people experiencing homelessness due to the closure by health authorities of various public facilities due to COVID-19. For example: "People experiencing homelessness don't have access to fresh water like you, and I do, and COVID has added an additional barrier to them even accessing public washrooms to find a tap" [89]. Many public water fountains were also shut down due to COVID-19, with articles citing that the continuation of restricted access was due to requirements for cleaning and staffing shortages. For example, "The 15 water fountains [in the Capital Regional District] will remain shut off as the humidex is expected to near 40 °C in the region over the weekend. Phase 2 of B.C.'s [COVID-19] reopening plan requires public fountains to be cleaned no less than once per hour... We are at full capacity for staff work over this weekend; busy with more than full work to do in our many parks which are extremely busy... Each jurisdiction needs to decide what risk they will take and what they can manage" [90]. The water fountain closures in several locations were reported to have reduced public access to drinking water (a critical and effective heat stress prevention action), disproportionately impacting socially marginalized populations.
The COVID-19 pandemic posed additional challenges to individuals trying to access cool water and engage in heat mitigation behaviors. For example, although indoor and outdoor pools reopened in many cities because of the extreme heat, "registering for a swim slot [was] required due to the pandemic" [91]. In other cities, it was reported that the reopening of wading pools, spray parks, and splash pads was "delayed by the province's three-step COVID-19 reopening framework" [92], which resulted in these facilities not opening until mid-way through the Heat Dome.
COVID-19 Exacerbated the Health Impacts of Cascading Weather Events following the Heat Dome
A series of cascading weather events occurred in British Columbia in the days and weeks following the Heat Dome, including wildfires, poor air quality (i.e., elevated ozone levels, smoke from wildfires), drought, and significant flooding in the heat-affected areas. Some of these cascading weather events (e.g., wildfires) likely resulted from the Heat Dome, whereas others (e.g., atmospheric rivers) did not, but nevertheless impacted the same locations [93]. Responses to address these subsequent weather events were also impacted by the COVID-19 pandemic. For example, in combating the wildfires, a few articles discussed that Canada's "sourcing [of] firefighters from some of its usual allies" [94] was impacted by "the COVID-19 pandemic... while B.C. has regularly swapped crews with Australia, New Zealand and the United States during times of need, that hasn't been possible this year. In Australia, they're on lockdown at the moment [due to COVID-19]" [94]. Another example is that, with the rising temperatures, ground level ozone also led to air quality alerts in various cities, which subsequently led to articles warning of the combined health concern of poor air quality and extreme heat "for people with underlying health conditions and respiratory infections, such as COVID-19" [95].
Heat Impacting COVID-19 Public Health Efforts
Extreme temperatures were also reported to impact the provision and implementation of COVID-19 public health measures. Most articles that touched on this concept communicated the impact of extreme heat on the operation of COVID-19 vaccination and testing clinics. During the Heat Dome, many clinics and testing sites were closed to protect the health and safety of staff and clients from the "elevated internal temperatures" [96]. For example, "the sweltering temperatures throughout much of Western Canada has forced the closure of two COVID-19 vaccination clinics and a testing site in the Vancouver Coastal health region" [97]. Multiple outdoor pop-up sites were reported to have moved "indoors to cooler locations in preparation for the extreme heat" [98], with additional measures being added at other sites, such as "umbrellas to provide shade for people waiting outside and bottled water and cooling packs... for people who may become overheated" [99]. A few articles also reported the impact of the extreme heat on the vaccine supply itself, citing the need for health professionals to protect the "integrity of the temperature-sensitive vaccines" [100]. Although mentioned infrequently, a few articles recommended the public get vaccinated during the heat wave to bolster the public's sense of safety when accessing cool public spaces. For example, "getting vaccinated will make being inside malls or other air-conditioned places much safer for you and for others" [88].
Discussion
Health risks were amplified when the COVID-19 pandemic overlapped with EHEs globally [7,11,18]. We believe that this study is the first to use content analysis of digital media articles to examine the intersection of two public health crises, specifically how the co-occurring 2021 Heat Dome and COVID-19 pandemic in Canada impacted each other. Our novel approach allowed for the interpretation of textual meanings rather than just quantifying textual features (e.g., word frequency) and, importantly, provided insight into how heat health information was disseminated to the public and how the media reported on a health and environmental issue in Canada.
This study found that the news media emphasized the newsworthiness of the intersecting crises by highlighting their compounding burden on health systems in western Canada. During the summer of 2021, the COVID-19 pandemic had already been straining health systems across Canada for over a year, and many media articles conveyed how the 2021 Heat Dome further stretched health services beyond their limits. During the 2021 Heat Dome, the media reported increased emergency call volumes, long wait times for emergency services, and overworked health workers. Previous studies have described how health workers wearing personal protective equipment for COVID-19 experience heat strain [7,11,18,21,31,32,34,35]. However, our study, by analyzing media articles that combined different sources and reported lived-experience stories [43][44][45], showed the mental health effects of EHEs on healthcare providers and workers in other sectors during the COVID-19 pandemic. For instance, health workers were quoted by the media expressing mental fatigue from working in hot environments and attending to increasing calls while feeling criticized for not meeting emergency service demands. Other studies have similarly emphasized that the mental health of nurses and other medical practitioners was impacted as a result of the outbreak [101][102][103]. However, we are unaware of any research that jointly investigated healthcare providers' mental health during the concurrent extreme heat conditions and pandemic. Although not specifically an occupational study, one recent article by Wilhelmi et al. [38] did find that millions of people in the United States had difficulty mentally coping with or responding to extreme heat because of the direct and indirect effects of the COVID-19 pandemic, and one-third of the population expressed worry about heat when they were at work. Correspondingly, our media analysis highlights the need for better mental health support for the public and workers impacted by concurrent crises [104]. Further, our findings underscore the importance of all levels of government taking proactive actions to consider intersecting health crises in emergency plans and considering investments to support services and businesses during coinciding crises.
The media disseminated conflicting public health recommendations about the 2021 Heat Dome and COVID-19, such as: staying indoors to minimize social contact, choosing outdoor locations if gathering with others to improve ventilation, seeking indoor cool public spaces to escape the heat, and minimizing time spent outdoors to avoid exposure to extreme heat and poor air quality [105]. Although heat mitigation guidance was released in anticipation of and during the COVID-19 pandemic [106,107], our findings suggest that the public health experts interviewed for media articles may have been unaware of these resources. The reactive nature of the heat response and the likely impromptu nature of interviews with experts by the media during the 2021 Heat Dome may partly explain the discordance found in public health messaging. However, the factors determining how governments and public health agencies may have prioritized one health crisis above another in their messaging remain unclear. There is also a lack of information on how those who speak about the heat in the media may influence responses by the public. Members of the public interviewed in many articles criticized B.C.'s response as inadequate; the articles expressed concerns regarding the prioritization of COVID-19 in pre-planned speeches from government officials and the lack of community-level preparedness for heat [93]. Although speculative, it is plausible that message fatigue and resistance to persuasion related to COVID-19 may have impacted the dissemination and reception of heat-health messaging during the 2021 Heat Dome, leading to a delayed uptake of heat mitigative actions and/or response by the health system and ultimately the loss of lives [108]. Health authorities must strategize messaging during coinciding crises to emphasize the importance of risk mitigation behaviors while minimizing reluctance (a psychological defense mechanism used in response to a perceived threat or loss of behavioral freedom) [109,110]. Further, to ensure consistent public health messaging in the future, federal, provincial, territorial, and jurisdictional health authorities could engage more closely with media producers (e.g., journalists and editors) to help ensure evidence-based interventions are communicated to the public within their local contexts during health crises.
Our results also showed that the media often framed the COVID-19 pandemic as having exacerbated the health impacts of the 2021 Heat Dome. Similar to a recent analysis by Jin and Sanders [40], the news coverage of the 2021 Heat Dome in Canada relayed how COVID-19 restrictions amplified heat health risks through reduced access to cooling strategies. Our results showed building occupancy restrictions, requirements to maintain physical distance, and masking were the main barriers limiting access to or deterring people from cooling centers and other air-conditioned spaces (e.g., malls and libraries). Access to water for bathing and drinking was also reported to have been restricted by COVID-19-driven barriers (e.g., pre-booking swimming and closed public drinking water fountains). These restrictions resulted in additional challenges to protecting people from heat, especially among socially marginalized individuals, as evidenced by reports of reduced potable water access for people experiencing homelessness. Additionally, considering that vulnerabilities to COVID-19 and extreme heat often overlap (e.g., older adults and individuals with co-morbidities) [19], community-level heat health action plans should be adjusted to preemptively protect against heat and other concurrent crises to reduce the strain on health systems (e.g., tailored initiatives to support vulnerable groups, such as wellness check-ins).
As reported in the articles, many more people in British Columbia and Alberta died from heat than from COVID-19 during the 2021 Heat Dome [1,3,4]. Yet, in many articles, the requirements to follow pre-existing pandemic restrictions were prioritized, perhaps confusing the public. As Jin and Sanders [40] note, heat response plans vary greatly between health regions, and clear guidelines for best practices during overlapping EHEs and pandemics remain elusive. A potential solution for ensuring consistent messaging would be to engage all stakeholders involved in heat health messaging at a pan-Canadian level to pre-emptively develop consistent, evidence-based guidance to help inform the messages they share during future EHEs. Further, when EHEs overlap with other health crises, public health plans should ideally convey that taking the recommended health-protective measures for one public health crisis should not jeopardize one's health from another crisis.
Given this study included articles published up to seven months after the EHE, we were also able to highlight how the media continued to report (frame) that the COVID-19 pandemic exacerbated the health impacts of weather events that occurred after and due to the 2021 Heat Dome. For instance, due to increased wildfires, poor air quality was observed in British Columbia and Alberta in the months following the EHE. As a result, the media published content communicating that underlying respiratory infections such as COVID-19 can aggravate the health risks associated with exposure to wildfire smoke. Moreover, many articles highlighted how the floods and wildfires that followed the 2021 Heat Dome compounded demand on and stress to the health system. This finding is significant in the context of climate change, as future EHEs and cascading weather events in Canada are likely to again lead the media to communicate during dual crises the strain on health systems and emergency response systems [1]. Consequently, public health officials and other health and public safety system stakeholders should proactively engage with the media to ensure effective content, timing, and prioritization of public health messaging in news coverage during intersecting crises. Additionally, a clear line of communication between health officials and journalists/editors could be established in advance of intersecting crises, which would allow for consistent messaging to be initiated more rapidly and deliberately.
Limitations
This study has a few limitations. Importantly, we analyzed articles from mass-media outlets, associations, and agency press, which may contain bias from the various stakeholders involved in creating news media content [49]. Thus, our findings reflect the interpretations of these sources as content generators. However, as our intent was to broadly explore how the topic is framed/presented to the public, it was not within the scope of this analysis to look specifically at how content differed between media sources or how they approached the inclusion/exclusion of content. Therefore, this poses an opportunity for future work to expand on these findings and explore how different cultural and geographic contexts and political leanings, among other factors, may influence the reporting of intersecting public health crises. The findings are also limited to exploring Canada's public health communication landscape and do not reflect every North American region affected by the 2021 Heat Dome; the experiences in the United States could have been different and thus warrant investigation in relation to their public health system and communication channels. Another limitation of the current study is the inability to distinguish the voices behind the messaging reported in the captured articles. Given it is not always clear who contributed to the messaging or how they were found (e.g., health officials, academics, journalists, members of the public, and others), future investigations may look to address these gaps by broadening the scope of analysis and seeking additional perspectives directly from the knowledge disseminators.
Conclusions
As part of the most extensive investigation to date to systematically review and content-analyze digital media articles relating to an EHE, this study advances our knowledge of Canada's public health communication landscape during the intersection of the COVID-19 pandemic and the 2021 Heat Dome. Overall, there was conflicting public health messaging communicated in the news media. Additionally, this study provided insight into the compounding impacts of the EHE and COVID-19 pandemic and how each crisis worsened the health impacts of the other. In the coming decades, health systems and public health management will need to adapt to withstand overlapping threats to public health, including but not limited to EHEs and infectious diseases due to climate change. Findings from this study highlight the need for preparedness plans and for developing consistent and evidence-based public health messaging. Health authorities could do this by working with key stakeholders involved in the public and media responses to health crises. Investments could be made to strengthen existing health infrastructure and services to pre-emptively build needed surge capacity, including support for mental health during such events. With the global death tolls from extreme heat and COVID-19 still rising, it is imperative to prioritize and strengthen the communication and design of public health strategies, especially those addressing two or more intersecting health crises, to foster public resilience and readiness.
Table 1. Examples of literature investigating extreme heat events and the COVID-19 pandemic.
An Experimental Investigation of Water Vapor Condensation from Biofuel Flue Gas in a Model of Condenser, (2) Local Heat Transfer in a Calorimetric Tube with Water Injection
In order for the operation of the condensing heat exchanger to be efficient, the flue gas temperature at the inlet to the heat exchanger should be reduced so that condensation can start from the very beginning of the exchanger. A possible way to reduce the flue gas temperature is the injection of water into the flue gas flow. Injected water additionally moistens the flue gas and increases its level of humidity. Therefore, more favorable conditions are created for condensation and heat transfer. The results presented in this second paper of the series on condensation heat transfer indicate that water injection into the flue gas flow drastically changes the distribution of temperatures along the heat exchanger and enhances local total heat transfer. The injected water increases the local total heat transfer by at least two times in comparison with the case when no water is injected. Different injected water temperatures have a major impact on the local total heat transfer mainly until almost the middle of the model of the condensing heat exchanger; from the middle part until the end, the heat transfer is almost the same at different injected water temperatures.
Introduction
This is the second paper of the series on water vapor condensation from flue gas in a long vertical tube, a model of the condensing heat exchanger. Therefore, the review of the literature related to water vapor condensation from flue gases has already been presented in [1]. The results presented in [1] revealed that, under certain inlet flue gas conditions, the initial part of the condensing heat exchanger is not used efficiently for condensation heat and mass transfer because the flue gas has to be cooled down until its temperature reaches the dew point temperature. Usually after that, a significant increase in heat transfer was determined due to condensation of water vapor from the flue gas. Therefore, in order to use this type of heat exchanger more efficiently from its beginning, certain parameters of the flue gas at the inlet to the exchanger should be reached.
The results presented in [1] showed that there is a necessity to reduce the flue gas temperature prior to the condensing heat exchanger in order to have condensation in the heat exchanger from its very beginning. A possible way to reduce the flue gas temperature is the injection of water into the flue gas flow [2,3]. The water injected via a nozzle creates water droplets, which in the injection chamber directly contact the flue gas and the water vapor existing in the flue gas. Then, such a "mixture" is routed into the condensing heat exchanger. The injected water also moistens the flue gas and increases its humidity. Due to this, more favorable conditions for condensation and heat transfer in the condensing heat exchanger should occur. Besides, the injected water allows the recovery of some heat and water due to vapor condensation [4][5][6] and also reduces the concentration of solid particles [7] in the flue gas, which, when released into the atmosphere, are the main pollution sources when incinerating biofuel [8]. Moreover, the injected water can be used for flue gas cleaning from acidic components [9] and for flue gas desulfurization [10].
In boiler plants, condensed moisture is usually collected at the bottom part of condensing heat exchangers. Then a part of it is routed back and injected again into the flue gas flow at the top part of heat exchangers. The efficiency of heat exchangers in boiler houses depends on the rates of injected water and gas flow and also the height and diameter of the heat exchanger [11]. The use of condensing heat exchangers in boiler plants allows significant annual fuel savings [12,13].
One of the ways to achieve better heat and mass transfer results of the condensation process is the use of direct contact condensation [14]. Direct contact condensation is the process in which vapor directly contacts water, which can result in very high heat transfer rates. However, such high heat transfer rates can be obtained only in the case of pure vapor condensation. In reality, however, there is almost always a large portion of noncondensable gases, especially when incinerating biofuels.
The study presented in [15] dealt with the condensation heat transfer of saturated vapor in a stainless-steel cylindrical vessel (158 mm in diameter and 360 mm in length), when subcooled water was sprayed into the vessel with a hollow cone nozzle. The measured increase in the sprayed water temperature in the flow direction was rather significant up to x/d ≈ 10. After that, the water temperature of the spray was close to the vapor temperature.
In another study [16], an analytical model was proposed for the behavior of a water spray in vapor. The spray experiments were performed in a chamber 250 mm in diameter and 300 mm in length with pure vapor. The sprayed droplet sizes used in the experiments were in the range of 0.67-1.32 mm. The water spray pattern determined was divided into several regions. The modeling results compared with the experimental results showed a reasonable agreement. When water was sprayed into the same chamber with air (no condensation) and into the chamber with vapor (with condensation), the structure of the flow streamlines showed a considerable difference between the two cases, and the breakup spray length was shorter in the vapor environment. Moreover, the droplet size was determined to be larger in vapor than in the air environment. However, the authors did not analyze heat transfer. The influence of droplet size on heat transfer was analyzed in [17]. It was determined that the larger the droplet size, the lower the total average heat transfer obtained. The droplet radius increased from ~0.25 mm to ~0.75 mm, resulting in an average total heat transfer decrease of about 3.3 times.
In [18], the experiments were carried out in a cylindrical vessel of 610 mm diameter and 915 mm length to observe the characteristics of water spray from full cone-type coarse nozzles into water vapor. The droplet sizes of the sprayed water were in the range between 0.25 and 1 mm. The analysis of the spray photos showed that for the flow of sprayed water, the film and the droplet phases could be distinguished. The results also indicated that increasing the number of the nozzles that generated smaller drops did not have an expected impact on the heat transfer. It was concluded that heat transfer in the film phase is about five times higher than in the droplet phase.
A series of experiments on direct contact condensation of an air-vapor mixture on a water surface in a vertical test section with dimensions of 150W × 100D × 1510L mm was carried out in [19]. The results revealed that the average condensation heat transfer coefficient for a vapor-air mixture decreased when the air mass fraction was increased. The average heat transfer coefficients obtained for different mixture velocities and condensation film Reynolds numbers were in the range between 550 and 2000 W/(m²·K). The injection of water droplets in the range of 0.3-2.8 mm into humid hot air of 80% relative humidity and a temperature of 65-85 °C using a hollow cone nozzle was studied in [20]. The results showed that, due to direct contact condensation, vapor condenses on the surface of droplets because the temperature of the water droplets is less than the dew point of the air.
Although condensing heat exchangers are widely used in boiler plants, there is not much analysis done on local distributions of temperatures and processes of condensation heat and mass transfer when water is injected into the flue gas flow. The study described in [21] presented a numerical simulation of a quench tower (2.5 m in diameter and 10 m in height) for high inlet temperature (>500 °C) flue gas purification in the case of water droplets of 0.1 mm diameter injected using one to four nozzles located in the upper part of the quench tower. The temperature distribution in the center of the tower indicated that after the spraying, a large amount of flue gas heat was absorbed due to droplet evaporation. With the downward movement of the flue gas, the gas temperature was continuously decreasing and the droplets were gradually evaporating. At about 4 m from the top of the tower, the temperatures of the flue gas and water vapor were almost the same, and therefore the evaporation of the injected water droplets was negligible. It was determined that the gas with a high temperature was concentrated in the central part of the tower. The flue gas temperature change rates obtained using one to four nozzles for water injection showed that the highest temperature change rate in the case of one nozzle was between 1 and 3 m from the top of the tower; in the case of two, three and four nozzles, it was between 2 and 4 m from the top of the tower. From almost the middle of the tower, the flue gas temperature change rate was negligible, and this did not depend on the number of nozzles used. In general, spray cooling is an efficient way to reduce gas temperature, especially in confined spaces [22].
Experiments with a water spray for air cooling were carried out in a heat exchanger located in a channel with a rectangular cross section of 36 × 25 cm and a length of 1.7 m with the purpose of improving heat transfer [23]. An increase in the heat transfer of up to three times was reached for a counter-flow injection with the injected droplets smaller than 0.025 mm in diameter. A co-flow injection was found not to be very effective due to bad dispersion of the droplets. Such dispersion resulted in very heterogeneous cooling.
Heat transfer analysis in a flat tube heat exchanger showed that the heat transfer was increasing with an increasing water spraying rate; however, only up to a certain water spray flow rate. It was found that in the case of a high spraying rate, parts of the flow passage of the heat exchanger can be blocked by water droplets, and this may result in regions with poor heat transfer. Thus, an optimum water spray rate should be achieved to give an increase in total heat transfer [24].
Flue gas might contain some components in vapor form which, when condensed on the surfaces of the heat exchanger, can cause corrosion of those surfaces. In such cases, it is necessary to make heat exchangers from corrosion-resistant materials (stainless steel, various corrosion-resistant alloys, etc.). Moreover, some reagents could be injected in order to decrease the acidity of the vapors.
The literature review showed that a lot of studies are devoted to investigating pure vapor condensation when water is injected into short vessels, as well as flue gas cleaning from gaseous pollutants, solid particles and for the reduction of the temperature of the flow. In the case of biofuel incineration, the flue gas contains a significant amount of noncondensable gases with the remaining portion of water vapor, which, if condensed in an efficiently operating economizer, could increase the efficiency of the boiler station and guarantee huge fuel savings. Although condensing heat exchangers are widely used, there are no studies devoted to the local total heat transfer characteristics of condensing heat exchangers with water injection [25].
Therefore, the aim of this study was to investigate the influence of water injected into the flue gas flow on the distribution of temperatures and total local heat transfer along the model of the condensing heat exchanger.
Experimental Setup
The experimental setup (Figure 1) which was used for the investigations was the same as described in [1]. However, some minor changes necessary for water injection were introduced and are described further.
To generate flue gas from incinerated wood pellets, the automatic boiler Kostrzewa (Poland) from the first experiments (without water injection) was used. The boiler has a maximum power of 50 kW. The power can be adjusted in a certain range according to the needs. Flue gas generated with a temperature of about 180-190 °C at the exit from the boiler was routed into the experimental section. Using the economizer of the boiler, different temperatures of the flue gas could be obtained at the inlet to the test section.
Dampers were used for flue gas flow rate adjustment. After that, the flue gas was supplied to the top of the test section, flowed through the internal vertical calorimetric tube, then passed to the flue gas chimney and was discharged into the atmosphere.
The test section was made of stainless steel. It was composed of an internal calorimetric tube (length x ≈ 5.8 m, inner diameter d = 0.034 m, wall thickness δ = 2 mm, x/d ≈ 170) where condensation takes place on its internal surface and an outer tube (length x ≈ 5.9 m, inner diameter D = 0.108 m) [1].
A water injection chamber with one nozzle was installed just before the inlet into the calorimetric tube. The internal diameter of the chamber was 100 mm and the height 250 mm. The nozzle for water injection was mounted in the middle of the chamber's height at its side surface. Therefore, the directions of flue gas flow and water injection to the gas flow were perpendicular to each other. The distilled water was supplied to the nozzle via a flexible hose from the water tank with a pump. The hose was insulated using 6 mm thick polyethylene insulation.
The whole experimental section, as well as the flue gas chimney, was also insulated using 5 cm thick rockwool insulation.
For cooling the calorimetric tube, water from a municipal water supply network was supplied into the space between the inner and outer tubes. The flow rate of the cooling water was measured at its discharge line by weighing, and the necessary rate was adjusted by a valve. To obtain a uniform inlet temperature before entering the experimental section at its bottom, the water was mixed in a water mixer. When leaving the experimental section, the water was also mixed in the same type of mixer. During the experiments, the condensed and injected water was collected in a collection tank.
Materials and Methods
The investigations were performed using different inlet flue gas temperatures and two different Reynolds numbers at the inlet into the calorimetric tube. A water vapor mass fraction in the flue gas of about 17% was achieved by incinerating biofuel pellets in the boiler and additionally spraying water into the furnace of the boiler, for the reasons indicated in [1].
Calibrated chromel-copel thermocouples (wire diameter 0.2 mm, accuracy ±0.3%) were installed to measure the flue gas (20 pieces), inner wall of the calorimetric tube (20 pieces) and cooling water (10 pieces) temperatures along the model of the condensing heat exchanger. The same type of thermocouple (3 pieces) was also installed in each of the water mixers. For details, see [1].
During the experiments, all the thermocouple data were collected using the Keithley data acquisition system. The flue gas inlet temperature (t_in, °C) and inlet relative humidity (RH_in, %) were measured using a KIMO C310 sensor installed before the inlet to the water injection chamber (250 mm before the inlet into the calorimetric tube). Therefore, the Re_in and dew point temperatures indicated in the figures presented in this article were calculated based on these parameters. A bellmouth with installed Pitot and Prandtl tubes connected to a differential micromanometer [1] was used to measure the inlet flue gas flow rate.
The flow rate of the cooling water was determined using the weighing method. Experiments were carried out for a cooling water flow rate of 60 kg/h. The cooling water temperature at the inlet to the test section was about 9-10 °C.
The flow rate of the water injected into the flue gas stream was determined from the water level change in the distilled water tank, applying the weighing method. During the experiments, the flow rate of the injected water was 33.6 kg/h (or 0.56 kg/min). According to the manufacturer, the injector used in the experiments was a full cone injector, and the droplets generated by it are in the range of 36-864 µm. The Sauter mean diameter of the droplets is 481 µm and the surface mean diameter is 361 µm.
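For reference, the Sauter mean diameter is the diameter of a droplet having the same volume-to-surface ratio as the whole spray, D_32 = Σn_i d_i³ / Σn_i d_i², while the surface mean diameter is D_20 = (Σn_i d_i² / Σn_i)^(1/2). The short Python sketch below illustrates both definitions on a made-up droplet sample; the actual measured size distribution of the injector is not given in the text, so the numbers here are placeholders only.

```python
import numpy as np

# Hypothetical droplet sample (illustrative only; the injector's actual
# measured size distribution is not reported in the text).
d = np.array([50.0, 120.0, 250.0, 400.0, 600.0, 850.0])  # diameters, um
n = np.array([500, 300, 150, 60, 20, 5])                  # counts per size class

d32 = (n * d**3).sum() / (n * d**2).sum()   # Sauter mean diameter (volume/surface)
d20 = np.sqrt((n * d**2).sum() / n.sum())   # surface mean diameter

print(f"D32 = {d32:.0f} um, D20 = {d20:.0f} um")
```

Because D_32 weights large droplets by volume, it always exceeds D_20 for a polydisperse spray, which is consistent with the quoted 481 µm versus 361 µm.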
The temperature of the injected water was measured by two thermocouples installed at the bottom of the distilled water tank, in the zone where the water take-up by the pump was realized. The distilled water tank was equipped with electric heating coils, and a mixer installed in the tank made it possible to maintain a uniform water temperature. The temperature of the water injected into the injection chamber was about 25 °C. For the purpose of comparison between the heat transfer results, it was also increased to about 40 °C.
In the formulas presented further, the properties (c_p, λ, etc.) of the flue gas and water vapor mixture were calculated using the formulas presented in [26]. The properties were calculated based on the flow temperature measured in the center of the calorimetric tube.
The total local heat flux was obtained as

$$q_t = \frac{m_{H_2O}\, c_{p,H_2O}}{\pi d}\,\frac{dt_{H_2O}}{dx},$$

where $m_{H_2O}$ is the inlet mass flow rate of the cooling water, kg/s; $c_{p,H_2O}$ is the specific heat of the water, kJ/(kg·°C); $dt_{H_2O}/dx$ is the slope of the cooling water temperature gradient, determined as the least squares polynomial fit of the coolant temperature as a function of the length of the heat exchanger model; and $d$ is the inner diameter of the calorimetric tube, m.

The local total heat transfer coefficient was calculated as

$$\alpha_t = \frac{q_t}{t_c - t_w},$$

where $t_c$ is the temperature measured in the center of the calorimetric tube, °C, and $t_w$ is the measured inner wall temperature of the calorimetric tube, °C.

The total Nusselt number:

$$Nu_t = \frac{\alpha_t\, d}{\lambda}.$$

To evaluate the performance of the condensing heat exchanger, the condensation efficiency (%) was used [27]. Condensation efficiency was calculated as

$$\eta = \frac{m_{cd}}{m_{H_2O,in}} \times 100,$$

where $m_{cd}$ is the mass flow rate of the condensate, kg/s, and $m_{H_2O,in}$ is the inlet flow rate of the water vapor, kg/s. The condensate mass flow rate was obtained as

$$m_{cd} = \frac{m_{\Sigma} - m_{inj}}{h},$$

where $m_{\Sigma}$ is the total water mass collected in the collection tank, kg; $m_{inj}$ is the mass of the injected water, kg; and $h$ is the time, s.

The flue gas Reynolds number at the inlet to the calorimetric tube was calculated based on the flue gas parameters before the water injection section:

$$Re_{in} = \frac{u_{in}\, d}{\nu_{in}},$$

where $u_{in}$ is the flue gas bulk velocity in the calorimetric tube, m/s; $d$ is the inner diameter of the calorimetric tube, m; and $\nu_{in}$ is the kinematic viscosity, m²/s.

The uncertainty of the data, evaluated using the methodology presented in [28], is 6-14% for the Nusselt number.
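To make the data-reduction chain above concrete, the following Python sketch applies these formulas to hypothetical numbers. The coolant temperature profile, gas properties, inlet vapor flow rate and collected-water masses are placeholders for illustration, not the measured data of this study; only the 60 kg/h coolant rate, the 33.6 kg/h injection rate and the 0.034 m tube diameter are taken from the text.

```python
import numpy as np

# --- Hypothetical inputs (placeholders, not the measured data) ---
x_over_d = np.linspace(0, 170, 10)               # measurement stations along the tube
t_coolant = 10 + 20 * (1 - x_over_d / 170)       # cooling water temperature, degC
d = 0.034                                        # inner tube diameter, m (from the text)
m_h2o = 60.0 / 3600.0                            # cooling water flow rate, kg/s (60 kg/h)
cp_h2o = 4186.0                                  # specific heat of water, J/(kg*K)
t_c, t_w = 52.0, 30.0                            # flue gas center / wall temperature, degC
lam_gas = 0.028                                  # assumed flue gas conductivity, W/(m*K)

# Slope dt/dx from a least-squares polynomial fit of the coolant temperature
x = x_over_d * d                                 # axial position, m
slope = np.polyfit(x, t_coolant, 1)[0]           # degC per metre (linear fit here)

q_t = m_h2o * cp_h2o * abs(slope) / (np.pi * d)  # local total heat flux, W/m^2
alpha_t = q_t / (t_c - t_w)                      # heat transfer coefficient, W/(m^2*K)
nu_t = alpha_t * d / lam_gas                     # total Nusselt number

# Condensation efficiency from collected-water masses (hypothetical numbers)
m_total, m_inj, tau = 50.0, 33.6, 3600.0         # kg, kg, s (one hour of operation)
m_cd = (m_total - m_inj) / tau                   # condensate flow rate, kg/s
m_vapor_in = 0.01                                # assumed inlet vapor flow rate, kg/s
efficiency = 100.0 * m_cd / m_vapor_in           # %

print(f"q_t = {q_t:.0f} W/m2, alpha_t = {alpha_t:.0f} W/(m2*K), Nu_t = {nu_t:.0f}")
print(f"condensation efficiency = {efficiency:.0f} %")
```

With these placeholder values the sketch returns Nu_t on the order of 10², i.e., the same order of magnitude as the no-injection results reported below, which is the intended sanity check rather than a reproduction of the experiments.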
Results and Discussions
Experiments with water injection were carried out for different injected water temperatures and different flue gas temperatures (t_in) and Reynolds numbers (Re_in) at the inlet into the calorimetric tube. Figure 2 presents a typical distribution of temperatures along the model of the condensing heat exchanger at the same inlet Reynolds number (Re_in), different inlet flue gas temperatures, and in cases when no water was injected and when water was injected into the flue gas flow. The dew point temperature at the inlet to the test section for the cases presented was calculated according to t_in and RH_in (i.e., the values measured before the injection section) using the equations presented in [29] and was in the range of ~58-64 °C.
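As an aside, a dew point of this magnitude can be estimated from t_in and RH_in with a Magnus-type approximation. This is a generic textbook correlation used here only for illustration and is not necessarily the set of equations from [29]; its coefficients are calibrated for moderate temperatures, so values near 80 °C are an extrapolation.

```python
import math

def dew_point(t_c: float, rh: float) -> float:
    """Magnus-type dew point approximation (t_c in degC, rh in %)."""
    a, b = 17.27, 237.7  # Magnus coefficients, valid roughly for 0-60 degC
    gamma = (a * t_c) / (b + t_c) + math.log(rh / 100.0)
    return b * gamma / (a - gamma)

# Example: flue gas at 80 degC with 40% relative humidity (hypothetical values)
print(f"dew point = {dew_point(80.0, 40.0):.1f} degC")  # about 59 degC
```

The example output of roughly 59 °C falls inside the ~58-64 °C range quoted above, so the approximation is adequate for a quick plausibility check.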
Water Injection with a Temperature of 25 °C
If the condenser wall temperature on the flue gas side is lower than the dew point temperature, water vapor should start to condense on the wall of the calorimetric tube. If the temperature of the flue gas in the center of the calorimetric tube reaches the dew point temperature, water vapor should start to condense in its whole volume.
For all the cases shown in Figure 2, the tube wall temperature (curve 2) is much lower than the dew point temperature. Therefore, condensation on the wall of the calorimetric tube should start from the beginning of the tube, and the results of heat transfer presented in Figure 3 (curve 1) confirm that.

Figure 2. Temperature distribution along the model of the condensing heat exchanger at Re_in ≈ 9500, different flue gas inlet temperatures, in the case when water is not injected ((a) Re_in ≈ 9500, t_in ≈ 80 °C; (c) Re_in ≈ 9500, t_in ≈ 115 °C) and is injected ((b) Re_in ≈ 9500, t_in ≈ 87 °C; (d) Re_in ≈ 9500, t_in ≈ 122 °C): (1) center of the calorimetric tube, (2) inner wall of the calorimetric tube, (3) cooling water in the middle of the inner and the outer tubes, (4) dew point temperature at the inlet to the calorimetric tube.
The flue gas temperature measured in the center of the calorimetric tube rather rapidly and almost linearly decreases from x/d = 0 to x/d ≈ 20 by about 20 °C (Figure 2a, curve 1); then some stabilization of the temperature between x/d ≈ 20-40 is observed, and after that, until the end of the tube, it decreases gradually. The stabilization of temperature (Figure 2a, curve 1) could mean that the condensation of vapor occurred in the whole cross section of the calorimetric tube, and due to this, a sudden increase in the total heat transfer was obtained in the x/d range between 20-40 (Figure 3, curve 1). Further on, as some vapor is condensed and the difference between the flue gas and the tube wall temperature is decreasing (Figure 2a, curves 1 and 2), heat transfer is also gradually decreasing; however, the results show that it is still rather high, and at the end of the calorimetric tube Nu_t ≈ 140 (Figure 3, curve 1).
The cooling water temperature (Figure 2a, curve 3) from the inlet into the model of the condensing heat exchanger (x/d ≈ 170) until the outlet (x/d = 0) is gradually increasing. Initially, the increase from the inlet until x/d ≈ 70 is slight, and from x/d ≈ 70 until the outlet it is more pronounced. This means that the heating of water in this section of the model of the condensing heat exchanger (x/d ≈ 0-70) is more intense due to the prevailing condensation process. In total, through the whole test section, the cooling water temperature increases by about 20 °C, from ~10 °C to ~30 °C.
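As a rough order-of-magnitude cross-check (not a figure reported in the study), heating the 60 kg/h coolant stream mentioned in the Materials and Methods by this 20 °C corresponds to a recovered heat rate of

$$Q = m_{H_2O}\, c_{p,H_2O}\, \Delta t \approx \frac{60}{3600}\,\mathrm{kg/s} \times 4.19\,\mathrm{kJ/(kg\cdot K)} \times 20\,\mathrm{K} \approx 1.4\,\mathrm{kW}$$

over the whole model of the condensing heat exchanger.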
When water of about 25 °C started to be injected into the flue gas flow, the character of the temperatures presented in Figure 2b is different in comparison to those presented in Figure 2a when no water was injected.
Although the flue gas temperature before the water injection chamber was about 87 °C, after mixing with the injected water, at the beginning of the calorimetric tube the flue gas temperature decreases to ~52 °C (Figure 2b, curve 1). The results show that the flue gas temperature from the beginning until the end of the tube is constantly decreasing, and at the exit from the tube it is about 30 °C. Therefore, the injection of water reduces the flue gas temperature, and it is likely that, at the same time, due to the evaporation of the injected water, an increase in the humidity of the flue gas flow occurs. These factors ensure better conditions for condensation in the calorimetric tube.
The characteristics of the tube wall temperature and cooling water temperature variation along the tube (Figure 2b, curves 2 and 3) are almost the same as the characteristic of the flue gas temperature. Figure 2b also shows that the temperature differences between flue gas and tube wall, and between tube wall and cooling water, are almost constant through all the length of the model of the condensing heat exchanger and are in the range of 7-10 °C.
Water injection had a significant impact on total local heat transfer (Figure 3, curve 2). The heat transfer starts to increase from the beginning of the tube, from Nu_t ≈ 400 until Nu_t ≈ 550 at x/d ≈ 60. The intensification could be related to the conditions suitable for condensation, i.e., a low flue gas temperature and a possibly increased amount of vapor in the flue gas. Later, the heat transfer starts to decrease. This is related to the fact that some water vapor is condensed, and such a tendency remains almost until the end of the tube. In general, in comparison with the experimental results when no water was injected, the injection of water increased the local total heat transfer by at least two times. Condensation efficiencies for the cases when no water was injected and when it was injected did not differ very much: they were 64% and 76%, respectively.
In the case of a higher inlet flue gas temperature (Figure 2c), the flue gas temperature decreases rather sharply until x/d ≈ 90. Then, until the flue gas exits the tube, the decrease in the flue gas temperature becomes smaller (Figure 2c, curve 1). Comparing this with the case presented in Figure 2a, it is obvious that the point of a more sudden flue gas temperature decrease extends farther from the beginning of the tube, i.e., from x/d ≈ 30 to x/d ≈ 90. In this case, the characteristic of the tube wall temperature is also similar to that of the cooling water temperature (Figure 2c, curves 2 and 3). The cooling water temperature increases slightly from the inlet (x/d ≈ 170) until x/d ≈ 90, and from x/d ≈ 90 until the exit, the increase in the cooling water temperature is bigger, which indicates that the water is receiving more heat due to vapor condensation.
The comparison of local total heat transfer in cases of lower and higher inlet flue gas temperatures (Figure 3, curves 1 and 3) without water injection indicates that the main differences are at the beginning of the tube. In the case of a higher inlet flue gas temperature, lower heat transfer was obtained due to conditions unfavorable for intense condensation heat transfer. Further, as the flue gas cooled down, there were almost no differences in heat transfer from x/d > 80 for both inlet flue gas temperatures.
When the flue gas inlet temperature was increased to ~122 °C and water was injected into the flow, all the temperatures (Figure 2d) measured in the test section increased only by about 3-4 °C in comparison with the case presented in Figure 2b. The decreasing characteristics and temperature differences (Figure 2d, curves 1-3) remained almost constant and in the range of 8-12 °C along the tube. Water injection in this case resulted in much higher total heat transfer in comparison to the case when no water was injected (Figure 3, cf. curves 3 and 4). Because the flue gas inlet temperature at the injection chamber (Figure 2d) is rather high and exceeds 100 °C, some injected water in the injection chamber evaporates, and the flue gas with possibly increased water vapor content enters the calorimetric tube. In the tube, it starts to condense, resulting in very high heat transfer (Figure 3, curve 4, Nu_t ≈ 600 in the x/d range between 0 and ~30). As some water vapor is condensed initially in the calorimetric tube, heat transfer starts to decrease gradually along the tube. This means that the influence of condensation also decreases yet is still dominant. The comparison of total heat transfer in cases when no water was injected and when it was injected (Figure 3, curves 3 and 4) indicates that water injection results in a local total heat transfer increase by at least about four times.
In this case, water injection allowed an increase in the condensation efficiency by about 12%, from 52% (without water injection) to about 64% (with water injection).
Temperature distributions in the case of a higher inlet flue gas Reynolds number (Re_in ≈ 21,000) are presented in Figure 4. It should be noticed that in this case, the characteristics of the temperature distributions differ from those obtained for lower Re_in numbers (Figure 2). For all the cases presented in Figure 4, the tube wall temperature (Figure 4, curve 2) is also lower than the dew point temperature.

The sharp decrease in the flue gas temperature (Figure 4a,c, curve 1) extends farther, i.e., until x/d ≈ 120-130, in comparison with lower Re_in numbers (Figure 2a,c). After that, a not so sharp decrease in temperature is observed. The tube wall and the cooling water temperatures demonstrate identical changes with position in the model of the condensing heat exchanger. The cooling water temperature increases intensively from the inlet (x/d ≈ 170) until x/d ≈ 40, from 15 °C to 46 °C; then the increase is less intense, and finally, at the outlet, the cooling water temperature is about 52 °C (Figure 4a, curve 3). Although the tube wall temperature is less than the dew point temperature, the difference between these temperatures, in comparison with the results presented in Figure 2a,c, is not significant: 2-4 °C. Due to this, the condensation at the beginning of the tube is not intensive (Figure 5, curve 1). As the temperature difference (the driving force of the condensation heat transfer) between the dew point temperature and the tube wall temperature increases with x/d (Figure 4a), the Nusselt number also increases until x/d ≈ 130 (Figure 5, curve 1). Further on, the Nusselt number decreases slightly, possibly due to a decrease in the water vapor mass fraction in the flue gas flow, and thus condensation also becomes weaker.

In the case with water injection at higher flue gas Re_in, the characteristics of temperature variations (Figure 4b) are different in comparison to those obtained for a lower Re_in number with water injection (Figure 2b). Due to water injection, the flue gas temperature decreases from 97 °C to 58 °C. Further, in the calorimetric tube, the flue gas temperature decreases slightly from the inlet until x/d ≈ 110 (Figure 4b, curve 1). After that, a more pronounced decrease is observed.
The distribution of the cooling water temperature indicates that the water is gaining heat rather intensively from the inlet (x/d ≈ 170) until x/d ≈ 50 (Figure 4b, curve 2). In this region, the water temperature increases by about 35 °C (from 15 to 50 °C). From x/d ≈ 50 until the outlet, the water temperature increases insignificantly, i.e., by ~5 °C, up to ~55 °C. The tube wall temperature characteristic (Figure 4b, curve 3) is similar to the cooling water temperature characteristic.
In general, from the temperature distribution results presented in Figure 4b it is evident that from x/d ≈ 50 till the end of the model of the condensing heat exchanger, the differences between flue gas, tube wall and water temperatures increase significantly.
Heat transfer data presented in Figure 5 (curves 1 and 2) indicate that condensation occurs at the beginning of the calorimetric tube, and here the Nu_t obtained when no water was injected and when it was injected are almost the same. However, in the case with water injection (Figure 5, curve 2), the heat transfer increases rapidly until x/d ≈ 60 and Nu_t reaches ~1450. Then, it decreases gradually until the end of the tube. The increase indicates that the injection of water intensifies heat transfer due to the influence of condensation, and then, as the water vapor is being condensed, the heat transfer gradually decreases. At the end of the tube, Nu_t is still high, about 1000.
Condensation efficiencies for the cases without and with water injection were 52% and 73%, respectively. In general, the results of condensation efficiency show that higher efficiency is obtained at lower Re_in numbers. At lower Re_in, the time of the flue gas travelling through the calorimetric tube is longer, and thus more water vapor is condensed.
At a higher flue gas inlet temperature (Figure 4c,d), the distributions of temperatures obtained are very similar to the case of a lower inlet flue gas temperature for the cases without and with water injection (Figure 4a,b), respectively. The difference is that the absolute values of the temperatures in the case of a higher flue gas inlet temperature are slightly higher.
The characteristics of the local total Nusselt numbers in the case of a higher flue gas inlet temperature without and with water injection (Figure 5, curves 3 and 4) are similar to the Nusselt numbers in the case of a lower flue gas inlet temperature without and with water injection (Figure 5, curves 1 and 2). The difference is that in the case of a higher flue gas inlet temperature and using water injection, the maximum value of Nu_t was ≈1500, while with a lower flue gas inlet temperature it was lower, i.e., about 1200 (Figure 5, curves 2 and 4). However, condensation efficiency does not differ very much, as it is 42% and 49% for the cases when no water was injected and when it was injected, respectively.
From the results presented in Figure 5 for the cases without water injection (curves 1 and 3), it is evident that the flue gas inlet temperature has a definite influence on heat transfer, especially in the beginning of the tube (x/d = 0-80). The lower the flue gas inlet temperature, the higher the heat transfer obtained in the beginning of the tube. This means that a lower flue gas inlet temperature creates better conditions for more intense condensation heat transfer. Farther on (x/d = 80-170), as the flue gas temperature decreases, the heat transfer remains almost the same independently of the flue gas inlet temperature (Figure 5, curves 1 and 3), i.e., the same tendencies exist as for the lower Re_in number (see Figure 3, curves 1 and 3).
In the case of water injection, the opposite can be noticed: the higher the flue gas inlet temperature, the higher the heat transfer (see Figures 3 and 5, curves 2 and 4).
Water Injection with a Temperature of 40 °C
The temperature of the injected water was chosen to be 40 °C because it is a typical temperature of the condensed water vapor collected at the bottom of condensing heat exchangers in heating power plants. In power plants, this condensed water is routed back to the flue gas inlet of the condensing heat exchanger and injected into the flue gas flow.
Temperature distributions for different flue gas inlet temperatures when water was injected into the calorimetric tube are presented in Figure 6. In general, the characteristics of the curves are the same as presented for the case when water of 25 °C was injected (Figure 2b,d). The difference is that all the temperatures obtained for the current case are higher by about 3-5 °C in comparison with the case when water of 25 °C was injected.
Heat transfer results indicate that in the case of a lower flue gas inlet temperature (Figure 7, curve 1), heat transfer at the beginning of the tube is higher by at least 1.3 times, and in the case of a higher flue gas inlet temperature (Figure 7, curve 2), by at least 1.5 times, in comparison with the results when water of 25 °C was injected (Figure 3, curves 2 and 4).
Heat transfer data obtained for different flue gas inlet temperatures (Figure 7) show a decreasing trend along the calorimetric tube. From about the middle of the tube (x/d > 100) there are almost no differences in heat transfer for different flue gas inlet temperatures, and from x/d > 150 the heat transfer is almost the same (Nu_t ≈ 350-380) as obtained for the case when the injected water temperature was 25 °C (Figure 3, curves 2 and 4).
The same trends of heat transfer were obtained in the calorimetric tube as described previously for the cases with water injection: the higher the flue gas inlet temperature, the higher the heat transfer. However, it should be noticed that for the case with injected water of 40 °C, the higher heat transfer for different flue gas inlet temperatures persists only over a short distance along the calorimetric tube (up to x/d ≈ 50; beyond that, the difference lessens). Although the distribution of Nu_t shows some differences in Figure 7, the condensation efficiencies obtained for the flue gas inlet temperatures t_in ≈ 88 °C and t_in ≈ 125 °C do not differ very much and are about 77-78%.
The characteristics of temperature distributions at a higher inlet flue gas Reynolds number (Figure 8) do not differ from those presented for a higher inlet flue gas Reynolds number with a lower injected water temperature (Figure 4b,d); the temperatures obtained are higher by a few degrees.
The influence of flue gas inlet temperature on total local heat transfer is presented in Figure 9. For both flue gas inlet temperatures, the heat transfer is almost the same up to x/d ≈ 20. After that, the heat transfer is about 1.2 times higher in the case of the higher flue gas inlet temperature. This tendency and distribution characteristic remain almost until the end of the calorimetric tube for both flue gas inlet temperatures.
Condensation efficiency in this case almost does not differ and is about 48-49% for both flue gas inlet temperatures.
Comparison of Heat Transfer Results for Different Injected Water Temperatures
Heat transfer data obtained at almost the same flue gas inlet temperature (≈87-88 °C) but different injected water temperatures in the case of Re_in ≈ 9500 show (Figure 10, curves 1 and 2) that a higher temperature of the injected water results in a substantial increase in heat transfer along the tube. The increase is especially pronounced in the initial part of the tube (x/d = 0-60). At the end of the tube (x/d > 150), the heat transfer data for both injected water temperatures do not differ very much.
In the case of a higher flue gas inlet temperature (≈122-125 °C), the same tendencies in heat transfer (Figure 10, curves 3 and 4) for different injected water temperatures were noticed: the higher the injected water temperature, the higher the heat transfer obtained in the tube, especially in its initial part. The results also show that the heat transfer for both cases decreases along the tube, and from x/d > 105 there are almost no differences in heat transfer for different injected water temperatures.
For higher Re_in numbers, the heat transfer results presented in Figure 11 show that at the beginning of the calorimetric tube, heat transfer does not differ very much for the cases of different inlet flue gas and injected water temperatures. However, in this case, the distribution of total heat transfer along the tube differs very much from the distributions presented for lower Re_in numbers (Figure 10).
For the lower flue gas inlet temperature (≈97-103 °C), the results show that for both injected water temperatures (Figure 11, curves 1 and 2), heat transfer increases rather steeply until x/d ≈ 55-65. The maximum total heat transfer in the case of a higher injected water temperature is about 1.2 times higher in comparison with the data at the lower injected water temperature. After both heat transfer curves reach their maximum values (at x/d ≈ 65 and 55, respectively), heat transfer starts to decrease, and from x/d > 100 only an insignificant difference in heat transfer is observed.
For the higher flue gas inlet temperature (≈135-138 °C), the characteristics of the heat transfer distribution along the tube (Figure 11, curves 3 and 4) for different injected water temperatures are very similar to those described before. However, the maximum heat transfer obtained in this case is higher in comparison with the data for the lower flue gas inlet temperature. After the maximum heat transfer is reached (at x/d ≈ 55), both curves (Figure 11, curves 3 and 4) show decreasing heat transfer along the tube, i.e., the tendency is the same as for the lower flue gas inlet temperature. From x/d > 110, heat transfer along the tube for both injected water temperatures remains almost the same.
Figure 11. Distribution of the total local Nusselt number along the model of the condensing heat exchanger at Re_in ≈ 21,000 for the different inlet flue gas and injected water temperatures: (1) t_in ≈ 97 °C and injected water temperature 25 °C, (2) t_in ≈ 103 °C and injected water temperature 40 °C, (3) t_in ≈ 135 °C and injected water temperature 25 °C, (4) t_in ≈ 138 °C and injected water temperature 40 °C.
Conclusions
After analysis, the following conclusions have been made:
1. The performed investigations revealed the regularities of the local heat transfer during operation of condensing heat exchangers with water injection.
2. Water injection drastically changes the distribution of temperatures and has a significant effect on heat transfer along the calorimetric tube.
3. In the case of water injection with a temperature of 25 °C, at lower Re_in numbers, the local total heat transfer along the tube increased by at least four times, and at higher Re_in numbers, by at least two times in comparison with the case without water injection.
4. In the case of water injection with a temperature of 40 °C, at lower Re_in numbers, the local total heat transfer increased by at least 2.3 times, and at higher Re_in numbers, by at least 1.7 times in comparison with the case without water injection.
5. At higher flue gas inlet temperatures, the effect of water injection on heat transfer is also stronger. For lower Re_in, the effect is more pronounced in the initial part of the tube (up to x/d ≈ 60), and for higher Re_in, it is in the x/d range between 20 and 110.
6. Condensation efficiency increases with decreasing Re_in number, flue gas temperature and injected water temperature.
7. To optimize the operation of condensing heat exchangers with water injection, it is necessary to perform wider investigations on the effect of different flue gas, cooling water and injected water parameters.
An application of spherical geometry to hyperkähler slices
This work is concerned with Bielawski's hyperkähler slices in the cotangent bundles of homogeneous affine varieties. One can associate such a slice to the data of a complex semisimple Lie group $G$, a reductive subgroup $H\subseteq G$, and a Slodowy slice $S\subseteq\mathfrak{g}:=\mathrm{Lie}(G)$, defining it to be the hyperkähler quotient of $T^*(G/H)\times (G\times S)$ by a maximal compact subgroup of $G$. This hyperkähler slice is empty in some of the most elementary cases (e.g. when $S$ is regular and $(G,H)=(\operatorname{SL}_{n+1},\operatorname{GL}_{n})$, $n\geq 3$), prompting us to seek necessary and sufficient conditions for non-emptiness. We give a spherical-geometric characterization of the non-empty hyperkähler slices that arise when $S=S_{\text{reg}}$ is a regular Slodowy slice, proving that non-emptiness is equivalent to the so-called $\mathfrak{a}$-regularity of $(G,H)$. This $\mathfrak{a}$-regularity condition is formulated in several equivalent ways, one being a concrete condition on the rank and complexity of $G/H$. We also provide a classification of the $\mathfrak{a}$-regular pairs $(G,H)$ in which $H$ is a reductive spherical subgroup. Our arguments make essential use of Knop's results on moment map images and Losev's algorithm for computing Cartan spaces.
1.1. Context.
A smooth manifold is called hyperkähler if it comes equipped with three Kähler structures that determine the same Riemannian metric, and whose underlying complex structures satisfy certain quaternionic identities. Such manifolds are known to be holomorphic symplectic and Calabi-Yau, and they are ubiquitous in modern algebraic and symplectic geometry. Prominent examples include the cotangent bundles [22] and (co-)adjoint orbits [6,19,23,24] of complex semisimple Lie groups, moduli spaces of Higgs bundles over compact Riemann surfaces [14], and Nakajima quiver varieties [29,30]. Many examples arise via the hyperkähler quotient construction [15], an analogue of symplectic reduction for a hyperkähler manifold endowed with a structure-preserving Lie group action and a hyperkähler moment map. However, one always has the preliminary problem of determining whether the given hyperkähler quotient is non-empty. While this emptiness problem is likely intractable in the generality just described, one might hope to solve it for particular classes of hyperkähler quotients. It is in this context that one might consider Bielawski's hyperkähler slices [4,5], which require fixing a compact, connected, semisimple Lie group K with complexification G := K^C. Each sl_2-triple τ = (ξ, h, η) in g := Lie(G) determines a Slodowy slice S_τ := ξ + ker(ad_η) ⊆ g, and hence also an affine variety G × S_τ. This variety is a hyperkähler manifold carrying a tri-Hamiltonian action of K, and its symplectic geometry is reasonably well-studied (see [1,4,8,9]). Now suppose that K acts in a tri-Hamiltonian fashion on a hyperkähler manifold M, and that this action extends to a holomorphic, Hamiltonian G-action with respect to the holomorphic symplectic structure on M. The hyperkähler slice for M and τ is then defined to be (M × (G × S_τ))///K, the hyperkähler quotient of M × (G × S_τ) by K. Several well-known hyperkähler manifolds are realizable as hyperkähler slices, as discussed in the introduction of [5]. In this paper we take M = T*(G/H) for a reductive subgroup H ⊆ G, and the resulting emptiness problem reduces to characterizing the pairs (G, H) for which h⊥ contains a regular element. This is the stage at which spherical geometry becomes relevant, as we explain below.
Inside of G, fix a maximal torus T and a Borel subgroup B satisfying T ⊆ B. These choices allow us to form the Cartan space of G/H, denoted a_{G/H} ⊆ t := Lie(T). We refer to the pair (G, H) as being a-regular if a_{G/H} contains a regular element of g, and we use Knop's description of the moment map image µ(T*(G/H)) to prove the following equivalences (see Proposition 15, Corollary 17, and Corollary 19):

(1) (G, H) is a-regular ⟺ h⊥ contains a regular element ⟺ Z_G(a_{G/H}) = T ⟺ the identity component of H* is abelian,

where Z_G(a_{G/H}) is the subgroup consisting of all elements in G that fix a_{G/H} pointwise and H* is the generic stabilizer for the H-representation h⊥ (see 5.2). Corollary 19 also recasts a-regularity as a concrete numerical condition involving the complexity c_G(G/H) and the rank rk_G(G/H) of G/H. The first equivalence further reduces our emptiness problem to one of classifying the a-regular pairs (G, H), thereby connecting our work to Losev's results [25]. We then classify all such pairs (G, H) (i.e. we solve the emptiness problem for (T*(G/H) × (G × S_reg))///K) in each of the following three cases:
• G is semisimple and H is a Levi subgroup of G (5.5.1);
• G is semisimple and H is a symmetric subgroup of G (5.5.2);
• G is semisimple and H is a reductive, spherical, non-symmetric subgroup of G (5.5.3).
In each case, we reduce to the study of strictly indecomposable (see 5.3) pairs (G, H). It is in the last two cases that we obtain the most explicit results, and where we provide tables of all a-regular pairs (G, H) that are strictly indecomposable.
1.3. Organization. Section 2 establishes some of our conventions regarding symplectic and hyperkähler geometry. Section 3 then uses [10], [22], and [27] to develop the hyperkähler-geometric features of T*(G/H) needed for the subsequent discussion of hyperkähler slices. This leads to Section 4, which reviews Bielawski's hyperkähler slice construction and reduces the non-emptiness of (T*(G/H) × (G × S_reg))///K to the condition that h⊥ contain a regular element. Section 5 then forms the spherical-geometric part of our paper, where we prove the equivalences (1) and subsequently obtain our classification results.
Acknowledgements. The central themes of this paper were developed at the Hausdorff Research Institute for Mathematics (HIM), while both authors took part in the HIM-sponsored program Symplectic geometry and representation theory. We gratefully acknowledge the HIM for its hospitality and stimulating atmosphere. We also wish to recognize Steven Rayan and Markus Röser for enlightening conversations. The first author is supported by the Natural Sciences and Engineering Research Council of Canada [516638-2018].
2. Preliminaries
2.1. Symplectic varieties and quotients. Let (X, ω) be a symplectic variety, which for us shall always mean that X is a smooth affine algebraic variety over C equipped with an algebraic symplectic form ω ∈ Ω²(X). Suppose that X is acted upon algebraically by a connected complex reductive algebraic group G having Lie algebra g. We recall that this action is called Hamiltonian if it preserves ω and admits a moment map, i.e. a G-equivariant variety morphism µ : X → g* satisfying the following condition: d(µ^z) = ι_{z̃}ω for all z ∈ g, where µ^z : X → C is defined by µ^z(x) := (µ(x))(z), x ∈ X, and z̃ is the fundamental vector field on X associated to z. If the G-action is also free, then

(2) X//G := µ⁻¹(0)/G

is a smooth affine variety whose points are precisely the G-orbits in µ⁻¹(0). The quotient variety X//G then carries a symplectic form ω̄ that is characterized by the condition π*(ω̄) = j*(ω), where π : µ⁻¹(0) → X//G is the quotient map and j : µ⁻¹(0) → X is the inclusion. The symplectic variety (X//G, ω̄) is called the symplectic quotient of X by G.
2.2. Hyperkähler manifolds. Recall that a smooth manifold M is called hyperkähler if it comes equipped with three (integrable) complex structures I_1, I_2, and I_3, three (real) symplectic forms ω_1, ω_2, and ω_3, and a single Riemannian metric b, subject to the following conditions:
• (I_ℓ, ω_ℓ, b) is a Kähler triple for each ℓ = 1, 2, 3, i.e. ω_ℓ(·, ·) = b(I_ℓ(·), ·);
• I_1, I_2, and I_3 satisfy the quaternionic identities I_1I_2 = I_3 = −I_2I_1, I_1I_3 = −I_2 = −I_3I_1, and I_2I_3 = I_1 = −I_3I_2.
One may construct new examples from existing ones via the hyperkähler quotient construction, which we now recall. Let K be a compact connected Lie group acting freely on a hyperkähler manifold M, and let k be the Lie algebra of K. Assume that the K-action is tri-Hamiltonian, meaning that K preserves each Kähler triple (I_ℓ, ω_ℓ, b) and acts in a Hamiltonian fashion with respect to each symplectic form ω_ℓ. One thus has a hyperkähler moment map, i.e. a map µ_HK = (µ_1, µ_2, µ_3) : M → k* ⊕ k* ⊕ k* with the property that µ_ℓ : M → k* is a moment map for the K-action with respect to ω_ℓ, ℓ = 1, 2, 3. The smooth manifold M///K := µ_HK⁻¹(0)/K is then canonically hyperkähler (see [15, Theorem 3.2]), and it is called the hyperkähler quotient of M by K. We shall let (Ī_ℓ, ω̄_ℓ, b̄), ℓ = 1, 2, 3, denote the three Kähler triples that constitute the hyperkähler structure on M///K. It will be advantageous to note that

(3) π*(ω̄_ℓ) = j*(ω_ℓ), ℓ = 1, 2, 3,

where π : µ_HK⁻¹(0) → M///K is the quotient map and j : µ_HK⁻¹(0) → M is the inclusion.

Let M be a hyperkähler manifold and consider the complex symplectic 2-form ω_C := ω_2 + iω_3. One can verify that ω_C is holomorphic with respect to I_1, and we will refer to (M, I_1, ω_C) as the underlying holomorphic symplectic manifold. This leads to the following definition, which will apply to many situations of interest in our paper.

Definition 1. Let K be a compact connected Lie group with complexification G := K^C. We define a (G, K)-hyperkähler variety to be a hyperkähler manifold M satisfying the following conditions:
(i) the underlying holomorphic symplectic manifold is a symplectic variety (as defined in 2.1), and this variety is equipped with a Hamiltonian action of G;
(ii) the G-action restricts to a tri-Hamiltonian action of K on M.
Consider the hyperkähler moment map µ_HK = (µ_1, µ_2, µ_3) : M → k* ⊕ k* ⊕ k* on a (G, K)-hyperkähler variety M. Define the complex moment map by µ_C := µ_2 + iµ_3 : M → g*, which turns out to be the moment map for the Hamiltonian G-action on M. Now assume that this G-action is free. The inclusion µ_HK⁻¹(0) ⊆ µ_C⁻¹(0) then induces a map

(4) ϕ : M///K → M//G,

where we recall that M//G is defined via (2). This map defines a diffeomorphism from M///K to its image, the open subset (G · µ_HK⁻¹(0))/G of µ_C⁻¹(0)/G = M//G. Furthermore, ϕ is an embedding of holomorphic symplectic manifolds with respect to the underlying holomorphic symplectic structure on M///K.
3. The hyperkähler geometry of T*(G/H)

It will be convenient to standardize some of the Lie-theoretic notation used in this paper. Let K be a compact connected semisimple Lie group, and fix a closed subgroup L ⊆ K. We will also let G := K^C and H := L^C denote the complexifications of K and L, respectively, noting that H is a closed reductive subgroup of G. Let k, l, g, and h be the Lie algebras of K, L, G, and H, respectively, so that g = k ⊗_R C and h = l ⊗_R C. Each of these Lie algebras comes equipped with the adjoint representation of the corresponding group, e.g. Ad : G → GL(g), g ↦ Ad_g. The symbol "Ad" will be used for all of the aforementioned adjoint representations, as context will always clarify any ambiguities that this abuse of notation may cause.
Let ⟨·, ·⟩ : g ⊗_C g → C denote the Killing form on g, which is G-invariant and non-degenerate. It follows that

(5) g → g*, x ↦ x^∨ := ⟨x, ·⟩,

defines an isomorphism between the adjoint and coadjoint representations of G. With this in mind, we will sometimes take the moment map for a Hamiltonian G-action to be g-valued.
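To make (5) concrete in a familiar case (a standard computation, not taken from the text): for g = sl_n the Killing form is a fixed multiple of the trace form, so (5) is essentially the trace pairing.

    % Killing form of sl_n, n >= 2:
    \[ \langle x, y \rangle = 2n \, \mathrm{tr}(xy), \qquad x, y \in \mathfrak{sl}_n, \]
    % so (5) sends x to the linear functional y \mapsto 2n\,\mathrm{tr}(xy).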
3.1. The cotangent bundle of G. Note that left and right multiplication give the commuting actions

(6a) a · g = ag and (6b) a · g = ga⁻¹, a, g ∈ G,

of G on itself, and that these lift to commuting Hamiltonian actions of G on T*G. To be more explicit about this point, we shall use the left trivialization of T*G and the Killing form to identify T*G with G × g. The lifts of (6a) and (6b) then become

(7a) a · (g, x) = (ag, x) and (7b) a · (g, x) = (ga⁻¹, Ad_a(x)),

respectively, while the induced symplectic form Ω_L on G × g is defined on each tangent space T_{(g,x)}(G × g) = T_gG ⊕ g as follows (see [26, Section 5, Equation (14L)]):

(8) (Ω_L)_{(g,x)}((d_eL_g(y_1), z_1), (d_eL_g(y_2), z_2)) = ⟨z_2, y_1⟩ − ⟨z_1, y_2⟩ + ⟨x, [y_1, y_2]⟩

for all y_1, y_2, z_1, z_2 ∈ g, where L_g : G → G denotes left multiplication by g and d_eL_g : g → T_gG is the differential of L_g at the identity e ∈ G. One can then verify that

(9a) φ_L : G × g → g, (g, x) ↦ Ad_g(x), and (9b) φ_R : G × g → g, (g, x) ↦ −x,

are moment maps for (7a) and (7b), respectively.
3.2. Kronheimer's hyperkähler structure on T*G. Let H denote the quaternions, to be identified as a vector space with R⁴ via the usual basis {1, i, j, k}. Now consider the real vector space C^∞([0, 1], k) of all smooth maps [0, 1] → k. A choice of K-invariant inner product ⟨·, ·⟩_k on k makes M := C^∞([0, 1], k) ⊗_R H = C^∞([0, 1], k)^⊕4 into a Banach space with an infinite-dimensional hyperkähler manifold structure. This space carries the following hyperkähler structure-preserving action of G := C^∞([0, 1], K), the gauge group of smooth maps [0, 1] → K with pointwise multiplication as the group operation:

(10) γ · (T_0, T_1, T_2, T_3) = (Ad_γ(T_0) − γ̇γ⁻¹, Ad_γ(T_1), Ad_γ(T_2), Ad_γ(T_3)),

where γ̇γ⁻¹ = γ*(θ_R)(d/dt) and θ_R ∈ Ω¹(K; k) is the right-invariant Maurer-Cartan form on K. The subgroup G_0 := {γ ∈ G : γ(0) = γ(1) = e} then acts freely on M with a hyperkähler moment map that can be written in the form Φ : M → C^∞([0, 1], k)^⊕3. It turns out that Φ⁻¹(0) consists of the solutions to Nahm's equations (as defined in [10, Proposition 1], for example), and that Kronheimer constructed an explicit diffeomorphism

(11) M///G_0 = Φ⁻¹(0)/G_0 → G × g

(see [22, Proposition 1]). The smooth manifold G × g thereby inherits a hyperkähler structure (I_ℓ, ω_ℓ, b), ℓ = 1, 2, 3. We note that ω_2 + iω_3 equals the form Ω_L from (8), while I_1 is the usual complex structure on G × g (see [22, Section 2]).

Kronheimer's diffeomorphism (11) has some important equivariance properties that we now discuss. Note that G_0 is the kernel of the evaluation homomorphism G → K × K, γ ↦ (γ(0), γ(1)), so that we may identify G/G_0 and K × K as Lie groups. The G-action on M induces a residual action of G/G_0 = K × K on M///G_0, and this residual action is known to be tri-Hamiltonian (see [10, Lemma 2]). Under (11), the action of K = {e} × K ⊆ K × K on M///G_0 corresponds to the K-action (7a) on G × g. The diffeomorphism also intertwines the action of K = K × {e} with the K-action (7b).

The group SO_3(R) also has a natural manifestation in our setup. Given A = (a_pq) ∈ SO_3(R) and a point (T_0, T_1, T_2, T_3) ∈ M, set (A · T)_p := Σ_q a_pq T_q for p = 1, 2, 3, and A · (T_0, T_1, T_2, T_3) := (T_0, (A · T)_1, (A · T)_2, (A · T)_3). This action of SO_3(R) on M descends to an isometric action on the hyperkähler quotient M///G_0. One can use (11) to interpret this as an isometric action of SO_3(R) on the hyperkähler manifold G × g, and it is not difficult to check that this action commutes with the K-actions (7a) and (7b). It is important to note that SO_3(R) does not preserve all of the hyperkähler structure on G × g, in contrast to the K-actions. However, one can find a circle subgroup of SO_3(R) that preserves the Kähler triple (I_3, ω_3, b) on G × g. A more explicit statement is that one can find an element θ ∈ so_3(R) whose fundamental vector field θ̃ on G × g satisfies the following properties: L_θ̃ω_1 = ω_2, L_θ̃ω_2 = −ω_1, and θ̃ generates a circle action on G × g that preserves (I_3, ω_3, b). This circle subgroup acts by rotations on span_R{ω_1, ω_2}, and the following function is (the θ-component of) a moment map for its Hamiltonian action on (G × g, ω_3):

(12) ρ : G × g → R, corresponding under (11) to (T_0, T_1, T_2, T_3) ↦ ½ ∫₀¹ (|T_1(t)|²_k + |T_2(t)|²_k) dt

(see [10, Section 4]). This leads to the following lemma.
Lemma 2. The function ρ is invariant under each of the K-actions (7a) and (7b) on G × g.
Proof. Since ⟨·, ·⟩_k is a K-invariant inner product, the function (T_0, T_1, T_2, T_3) ↦ ½ ∫₀¹ (|T_1(t)|²_k + |T_2(t)|²_k) dt is invariant under the action (10) of G. This function therefore descends to a G/G_0-invariant function on the hyperkähler quotient M///G_0. The descended function is exactly ρ once we identify M///G_0 with G × g via (11). Now recall that the G/G_0-action on M///G_0 corresponds to a (K × K)-action on G × g, meaning that ρ is a (K × K)-invariant function on G × g. It just remains to recall that the K-action (7a) (resp. (7b)) is the action of {e} × K (resp. K × {e}) ⊆ K × K.

3.3. The hyperkähler structure on T*(G/H). Let G act on G/H via left multiplication, and consider the canonical lift to a Hamiltonian action of G on T*(G/H). Note also that (g/h)* is a representation of H, and let G ×_H (g/h)* denote the quotient of G × (g/h)* by the following action of H: h · (g, ξ) = (gh⁻¹, h · ξ). We then have a canonical G-equivariant isomorphism T*(G/H) ≅ G ×_H (g/h)*, where G acts on the latter variety via left multiplication on the first factor. At the same time, the H-representation (g/h)* is canonically isomorphic to the annihilator h⊥ ⊆ g of h under the Killing form. We thus have a G-equivariant isomorphism

(13) T*(G/H) ≅ G ×_H h⊥.

Now consider the restriction of (7b) to an action of H ⊆ G on G × g, noting that this restricted action is Hamiltonian with respect to Ω_L. The moment map for this H-action is obtained by composing the g*-valued version of φ_R : G × g → g with the projection g* → h*. It follows that the preimage of 0 under the new moment map is G × h⊥ ⊆ G × g. The symplectic quotient of G × g by H is therefore given by (G × g)//H = (G × h⊥)/H = G ×_H h⊥. It is straightforward to verify that the induced symplectic structure on G ×_H h⊥ renders (13) a G-equivariant isomorphism of symplectic varieties. It is also straightforward to check that

(14) ν_H : G ×_H h⊥ → g, [g : x] ↦ Ad_g(x),

is a moment map for the G-action on G ×_H h⊥.

The above-defined holomorphic symplectic structure and Hamiltonian G-action on G ×_H h⊥ turn out to come from a (G, K)-hyperkähler variety structure (see Definition 1), which we now discuss. Accordingly, recall that (7a) and (7b) define commuting, tri-Hamiltonian actions of K on G × g. Let us restrict the latter action to the subgroup L ⊆ K fixed in the introduction to Section 3, and then consider the associated hyperkähler quotient (G × g)///L. Note that (7a) then descends to a tri-Hamiltonian action of K on (G × g)///L. At the same time, (4) takes the form of a K-equivariant map

(15) (G × g)///L → (G × g)//H = G ×_H h⊥.

One can then invoke [10, Section 2] and/or [27, Theorem 3.1] to deduce the following fact.

Theorem 3. The map (15) is a K-equivariant isomorphism of holomorphic symplectic manifolds.

In fact, (15) is an isomorphism of hyperkähler manifolds, which by the preceding discussion makes G ×_H h⊥ into a (G, K)-hyperkähler variety; we write (I_ℓ^H, ω_ℓ^H, b^H), ℓ = 1, 2, 3, for the resulting Kähler triples. To help investigate this (G, K)-hyperkähler structure, we use Lemma 2 to see that ρ descends to a K-invariant function ρ_H : G ×_H h⊥ → R.

Proposition 4. The function ρ_H is a Kähler potential for (I_1^H, ω_1^H), i.e. ω_1^H = 2i∂∂̄ρ_H for the Dolbeault operators ∂ and ∂̄ associated with I_1^H.
Proof. Let (µ_1, µ_2, µ_3) : G × g → (k*)^⊕3 denote the hyperkähler moment map for the tri-Hamiltonian K-action (7b), and let (µ_1^H, µ_2^H, µ_3^H) : G × g → (l*)^⊕3 be the induced hyperkähler moment map for the action of L ⊆ K. Consider the action of SO_3(R) on G × g, recalling our description of a specific subgroup S¹ ⊆ SO_3(R) and its action on G × g (see 3.2). This description implies that S¹ preserves µ_3 and acts by rotations on span_R{µ_1, µ_2}. We conclude that S¹ preserves µ_3^H and acts by rotations on span_R{µ_1^H, µ_2^H}; in particular, S¹ preserves the submanifold (µ_1^H, µ_2^H, µ_3^H)⁻¹(0) ⊆ G × g. Observe that the actions of S¹ and L on this submanifold commute, owing to the fact that the action of SO_3(R) on G × g commutes with the K-action (7b). The quotient (G × g)///L therefore carries a residual S¹-action, so that we may use the hyperkähler isomorphism (15) to equip G ×_H h⊥ with a corresponding S¹-action. The relations (3) then imply that S¹ preserves ω_3^H. Now consider the element θ ∈ so_3(R) discussed in 3.2, recalling that ρ is the θ-component of a moment map for the S¹-action on G × g. It is then straightforward to check that ρ_H is the θ-component of a moment map for the S¹-action on (G ×_H h⊥, ω_3^H). Note also that the identities L_θ̃ω_1 = ω_2 and L_θ̃ω_2 = −ω_1 descend to L_θ̃ω_1^H = ω_2^H and L_θ̃ω_2^H = −ω_1^H, where θ̃ is the fundamental vector field on G ×_H h⊥ associated to θ. These last two sentences give exactly the ingredients needed to reproduce a calculation from [15, Section 3(E)], to the effect that ρ_H is a Kähler potential for (I_1^H, ω_1^H).
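To see (13) and (14) in the smallest case, here is an illustration of our own with G = SL_2(C) and H = T the diagonal maximal torus; the Killing-orthogonal complement of t in sl_2 is the space of off-diagonal matrices.

    % h = t = traceless diagonal matrices in sl_2, and
    \[ \mathfrak{h}^{\perp} = \left\{ \begin{pmatrix} 0 & b \\ c & 0 \end{pmatrix} : b, c \in \mathbb{C} \right\}, \]
    % so (13) reads T^*(SL_2/T) \cong SL_2 \times_T \mathfrak{h}^\perp,
    % while (14) becomes \nu_H[g : x] = g x g^{-1}.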
4. The hyperkähler slice construction
4.1. The slice as a symplectic variety. Recall the notation established in the introduction to Section 3, and let ad : g → gl(g), x ↦ ad_x, denote the adjoint representation of g.
Recall that an sl_2-triple in g is a triple τ = (ξ, h, η) of elements of g satisfying [ξ, η] = h, [h, ξ] = 2ξ, and [h, η] = −2η, in which case there is an associated Slodowy slice

S_τ := ξ + ker(ad_η) ⊆ g.

We will make extensive use of the affine variety G × S_τ, some geometric features of which we now develop.
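For example (a standard computation included for concreteness), the usual basis of sl_2 furnishes the simplest sl_2-triple, and the resulting Slodowy slice is one-dimensional.

    % Standard sl_2-triple: [\xi,\eta] = h, [h,\xi] = 2\xi, [h,\eta] = -2\eta.
    \[ \xi = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad
       h = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad
       \eta = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \]
    % Since \ker(\mathrm{ad}_\eta) = \mathbb{C}\eta inside sl_2, the slice is
    \[ S_\tau = \xi + \mathbb{C}\eta
              = \left\{ \begin{pmatrix} 0 & 1 \\ s & 0 \end{pmatrix} : s \in \mathbb{C} \right\}. \]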
Consider the isomorphisms T*G ≅ G × g* ≅ G × g induced by the right trivialization of T*G and the Killing form. The symplectic form on T*G thereby corresponds to such a form Ω_R on G × g, described as follows on the tangent space T_{(g,x)}(G × g) = T_gG ⊕ g (see [26, Section 5, Equation (14R)]):

(16) (Ω_R)_{(g,x)}((d_eR_g(y_1), z_1), (d_eR_g(y_2), z_2)) = ⟨z_2, y_1⟩ − ⟨z_1, y_2⟩ − ⟨x, [y_1, y_2]⟩

for all y_1, y_2, z_1, z_2 ∈ g, where R_g : G → G is right multiplication by g and d_eR_g is its differential at e. It turns out that G × S_τ is a symplectic subvariety of (G × g, Ω_R), and that the G-action is then Hamiltonian.

Remark 5. Bielawski's paper [4] uses Ω_R to realize G × S_τ as a symplectic subvariety of G × g, as opposed to using the other symplectic form Ω_L (see (8)). It is for the sake of consistency with Bielawski's work that we are using the same convention. However, this is the only case in which we use Ω_R preferentially to Ω_L.
Now let X be a symplectic variety endowed with a Hamiltonian G-action and moment map µ : X → g. The diagonal action of G on X × (G × S_τ) is then Hamiltonian and admits a moment map µ̃ : X × (G × S_τ) → g. Noting that this diagonal action is free, one has the symplectic quotient (X × (G × S_τ))//G.

Proposition 6. Let (X, ω) be a symplectic variety on which G acts in a Hamiltonian fashion with moment map µ : X → g, and let τ be an sl_2-triple. The following statements then hold.
(i) There is a variety isomorphism ϕ : µ⁻¹(S_τ) → (X × (G × S_τ))//G.
(ii) The isomorphism ϕ satisfies ϕ*(ω̄) = j*(ω), where ω̄ is the symplectic form on (X × (G × S_τ))//G and j : µ⁻¹(S_τ) → X is the inclusion.
In preparation for (ii), let ω̄ denote the symplectic form on (X × (G × S_τ))//G, and consider the inclusions j : µ⁻¹(S_τ) → X and µ̃⁻¹(0) → X × (G × S_τ). Our objective is to prove that ϕ*(ω̄) = j*(ω).
Theorem 7. If τ is an sl 2 -triple, then G × S τ is canonically a (G, K)-hyperkähler variety. The Hamiltonian G-action and underlying holomorphic symplectic structure on G × S τ associated with this (G, K)-hyperkähler structure are precisely those described in 4.1.
Now let X be any (G, K)-hyperkähler variety. Given an sl_2-triple τ, note that the product manifold X × (G × S_τ) is naturally hyperkähler and carries a free, diagonal G-action. It is then not difficult to check that X × (G × S_τ) is a (G, K)-hyperkähler variety, with underlying holomorphic symplectic structure equal to the natural product holomorphic symplectic structure on X × (G × S_τ). With this in mind, we can define hyperkähler slices as follows.

Definition 8. Let X be a (G, K)-hyperkähler variety and let τ be an sl_2-triple in g. The hyperkähler slice associated to X and τ is the hyperkähler quotient (X × (G × S_τ))///K.
This construction can be used to produce a number of well-studied hyperkähler manifolds, some of which are mentioned in the introduction of [5]. For several of these examples, there is a particularly concrete description of the underlying holomorphic symplectic manifold. Indeed, let X and τ be as described in the definition above. Note that (4) manifests as a map

(20) (X × (G × S_τ))///K → (X × (G × S_τ))//G,

which features in the following rephrased version of [4, Theorem 1].

Theorem 9 (Bielawski). Let τ be an sl_2-triple, and let (X, (I_ℓ, ω_ℓ, b)_{ℓ=1}^3) be a (G, K)-hyperkähler variety with complex moment map µ : X → g. Consider the map

(21) (X × (G × S_τ))///K → µ⁻¹(S_τ)

obtained by composing (20) with the inverse of the isomorphism from Proposition 6(i). If the Kähler manifold (X, I_1, ω_1, b) has a K-invariant potential that is bounded from below on each G-orbit, then (21) is an isomorphism of holomorphic symplectic manifolds.
Remark 10. Bielawski speaks of hyperkähler slices only when the hypotheses of Theorem 9 are satisfied (see [5,Section 1]). He then defines a hyperkähler slice to be a hyperkähler manifold of the form µ −1 (S τ ), where µ −1 (S τ ) is equipped with the hyperkähler structure induced through the isomorphism (21). In particular, Definition 8 mildly generalizes Bielawski's original notion.
Let us briefly consider the hyperkähler slice construction for (G, K)-hyperkähler varieties of the form G ×_H h⊥, as introduced in 3.3. Accordingly, recall the notation adopted in 3.3. The function ρ_H is bounded from below on all of G ×_H h⊥ (see (12)), while we recall that ρ_H is a Kähler potential for (I_1^H, ω_1^H) (Proposition 4). It then follows from Theorem 9 that

(22) ((G ×_H h⊥) × (G × S_τ))///K ≅ ν_H⁻¹(S_τ)

as holomorphic symplectic manifolds for all sl_2-triples τ in g. We exploit this fact in what follows.
4.3. The regular Slodowy slice. Recall that dim(ker(ad_x)) ≥ r for all x ∈ g, where r denotes the rank of g, and that x is called regular if equality holds. Let g_reg ⊆ g denote the set of all regular elements, which is known to be a G-invariant, open, dense subvariety of g. This leads to the notion of a regular sl_2-triple, i.e. an sl_2-triple τ = (ξ, h, η) in g for which ξ ∈ g_reg. Fix one such triple τ for the duration of this paper, and let S_reg := S_τ denote the associated Slodowy slice. The slice S_reg is known to be contained in g_reg, and to be a fundamental domain for the action of G on g_reg (see [18, Theorem 8]). Note that this last sentence may be rephrased as follows: x ∈ g belongs to g_reg if and only if x is G-conjugate to a point in S_reg, in which case x is G-conjugate to a unique point in S_reg.
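Continuing the sl_2 illustration from 4.1 (again our own example): here r = 1, every non-zero element of sl_2 is regular, and S_reg is the slice computed there. Each regular orbit meets S_reg exactly once, which one can verify by comparing characteristic polynomials.

    % A regular x in sl_2 (i.e. x != 0) is G-conjugate to the unique slice point with
    \[ s = -\det(x), \]
    % since both x and \begin{pmatrix} 0 & 1 \\ s & 0 \end{pmatrix} have characteristic
    % polynomial t^2 - s; the value s = 0 recovers the regular nilpotent orbit.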
As discussed in 1.2, we wish to study the emptiness problem for hyperkähler slices of the form ((G ×_H h⊥) × (G × S_reg))///K. The following result is a crucial first step.

Proposition 11. The hyperkähler slice ((G ×_H h⊥) × (G × S_reg))///K is non-empty if and only if h⊥ ∩ g_reg ≠ ∅.

Proof. The underlying manifold of our hyperkähler slice is ν_H⁻¹(S_reg) (see (22) and (14)), reducing our task to one of proving that G · h⊥ ∩ S_reg ≠ ∅ if and only if h⊥ ∩ g_reg ≠ ∅. To prove this, we simply appeal to the discussion of S_reg above and note that x ∈ h⊥ belongs to g_reg if and only if x is G-conjugate to a point in S_reg.
5. The spherical geometry of G/H

5.1. The image of the moment map. Let us continue with the notation set in the introduction of Section 3. Choose opposite Borel subgroups B, B⁻ ⊆ G, declaring the former to be the positive Borel and the latter to be the negative Borel. It follows that T := B ∩ B⁻ is a maximal torus of G, and we shall let b, b⁻, and t denote the Lie algebras of B, B⁻, and T, respectively. We thus have a weight lattice Λ ⊆ t* and canonical group isomorphisms Λ ≅ Hom(T, C^×) ≅ Hom(B, C^×), where Hom is taken in the category of algebraic groups. We also have sets of roots ∆ ⊆ Λ, positive roots ∆+ ⊆ ∆, negative roots ∆− ⊆ ∆, and simple roots Π ⊆ ∆+. Note that by definition

b = t ⊕ (⊕_{α∈∆+} g_α) and b⁻ = t ⊕ (⊕_{α∈∆−} g_α),

where g_α is the root space associated to α ∈ ∆.
We now establish two important conventions. To this end, recall the isomorphism (5) between the adjoint and coadjoint representations of G. Our first convention is to use (·)^∨ for both (5) and its inverse, so that the inverse will be presented as (·)^∨ : g* → g. As for our second convention, note that the map g* → t* restricts to an isomorphism from the image of t under (5) to t*. We will use this isomorphism to regard t* as belonging to g*.

Now let Y be a smooth, irreducible G-variety having field of rational functions C(Y), noting that C(Y) is then a G-module. The weight lattice Λ_Y ⊆ Λ of Y is the set of weights of B-semiinvariant rational functions on Y, and we let a*_Y ⊆ t* denote the C-linear span of Λ_Y. The weight lattice of Y can also be viewed as the character lattice of a quotient of T, once we appeal to Knop's local structure theorem [34, Theorem 4.7]. This theorem gives a parabolic subgroup P ⊆ G that contains B, has a Levi decomposition P = P_u L with T ⊆ L, and satisfies the following property: there exists a locally closed affine P-stable subvariety Z ⊆ Y such that P_u × Z → Y maps surjectively onto an open affine subset Y_0 of Y. One also has [L, L] ⊆ L_0 ⊆ L, where L_0 is the kernel of the L-action on Z. The quotient A_Y := L/L_0 is a torus that acts freely on Z, and there exists an affine variety C with a trivial L-action such that Z ≅ A_Y × C as L-varieties. It follows that Λ_Y is the character lattice of A_Y. We let Λ^∨_Y ⊆ t and a_Y ⊆ t denote the respective preimages of Λ_Y and a*_Y under (5), noting that

(23) Ã_Y := Λ^∨_Y ⊗_Z C^×

is a subtorus of T with Lie algebra a_Y. We shall also refer to a_Y as the Cartan space of Y.
Example 12. In what follows, we compute the Cartan space of G/T. Let Λ+ ⊆ t* denote the set of dominant weights of G, and let V_λ be the irreducible G-module of highest weight λ ∈ Λ+. Recall the following classical fact about C[G/T], the coordinate ring of G/T:

C[G/T] ≅ ⊕_{λ∈Λ+} V_λ ⊗ (V_λ*)^T as G-modules,

where (V_λ*)^T is the zero weight space of V_λ*. The dominant weights λ with (V_λ*)^T ≠ {0} are precisely those lying in the root lattice of g, and these span t*. It follows that Λ_{G/T} spans t*, so that a*_{G/T} = t*. We also conclude that a_{G/T} = t.

We now recall a key geometric feature of the Cartan space construction. Let Y be any smooth, irreducible G-variety and consider the canonical lift of the G-action on Y to a G-action on T*Y. The latter action is Hamiltonian with respect to the standard symplectic form on T*Y, and there is a distinguished moment map µ_Y : T*Y → g. Lemma 3.1 and Corollary 3.3 from [17] then combine to give the following equality of closures in g.
Theorem 13 (Knop). If Y is a smooth, irreducible, quasi-affine G-variety, then \overline{µ_Y(T*Y)} = \overline{G · a_Y}.

5.2. a-regularity. Recall the notation set in the introduction to Section 3, which we now use together with the notation of 5.1. It is then not difficult to prove that a_{G/H} depends only on the pair (g, h). For this reason, we set a(g, h)* := a*_{G/H} and a(g, h) := a_{G/H}. We will sometimes denote a(g, h) (resp. a(g, h)*) by a (resp. a*) when the underlying pair (g, h) is clear from context.
Definition 14. We say that the pair (G, H) or the corresponding pair (g, h) of Lie algebras is a-regular if a(g, h) contains a regular element of g.
We now give a few characterizations of a-regularity. In what follows, Ã_{G/H} is the subtorus of T defined by setting Y = G/H in (23), and Z_G(Ã_{G/H}) consists of all g ∈ G that commute with every element of Ã_{G/H}. We also let Z_G(a) be the subgroup of all g ∈ G that fix a pointwise, and we let z_g(a) be the subspace of all x ∈ g that commute with every element of a.
Proposition 15. With all notation as described above, the following conditions are equivalent:
(i) (G, H) is a-regular;
(ii) h⊥ contains a regular element of g, i.e. h⊥ ∩ g_reg ≠ ∅;
(iii) Z_G(a) = T.

Proof. We begin by proving that h⊥ ∩ g_reg ≠ ∅ if and only if (G, H) is a-regular. To show the forward implication, assume that h⊥ ∩ g_reg ≠ ∅. Identifying T*(G/H) with G ×_H h⊥ and recalling the moment map ν_H (see (14)), Theorem 13 implies that

\overline{ν_H(G ×_H h⊥)} = \overline{G · a}.

This amounts to the statement that \overline{G · h⊥} = \overline{G · a}.
Since h⊥ ∩ g_reg ≠ ∅ by hypothesis, we must have \overline{G · a} ∩ g_reg ≠ ∅. Note also that G · a is a constructible subset of g, so that G · a intersects every non-empty open subset of \overline{G · a}. These last two sentences imply that G · a ∩ g_reg ≠ ∅, which is equivalent to a ∩ g_reg ≠ ∅. We conclude that (G, H) is a-regular.
In an analogous way, one argues that (G, H) being a-regular implies h⊥ ∩ g_reg ≠ ∅.
We are reduced to establishing that (G, H) is a-regular if and only if Z_G(a) = T. Accordingly, recall that an element of t is regular if and only if it does not lie on any root hyperplane. It follows that (G, H) is not a-regular if and only if a belongs to the union of all root hyperplanes. Since a is irreducible, this is equivalent to a being contained in a particular root hyperplane, i.e. a ⊆ ker(α) for some α ∈ ∆. This holds if and only if g_α ⊆ z_g(a) for some α ∈ ∆. Now note that z_g(a) is a T-invariant subspace of g containing t, meaning that z_g(a) = t ⊕ (⊕_{α∈S} g_α) for some subset S ⊆ ∆. It follows that g_α ⊆ z_g(a) for some α ∈ ∆ if and only if z_g(a) ≠ t. On the other hand, the condition z_g(a) = t is equivalent to having Z_G(a) = T, provided one knows Z_G(a) to be connected with Lie algebra z_g(a). Connectedness follows from the observation that Z_G(a) = Z_G(Ã_{G/H}), as the centralizer of a subtorus of G is connected.

Let H act on a complex algebraic variety X. A subgroup H′ ⊆ H is called a generic stabilizer for this action if there exists a non-empty open dense subset U ⊆ X with the following property: the H-stabilizer of every x ∈ U is conjugate to H′. A generic stabilizer is known to exist if X is a linear representation of H [32]. We therefore have a generic stabilizer for the H-action on h⊥, and we denote it by H*. This group is known to be reductive (see [34, Theorem 9.1]).
Remark 16. A generic stabilizer is unique up to conjugation, meaning that H* more appropriately denotes a conjugacy class of subgroups in H. However, we shall always take H* to be a fixed subgroup in this conjugacy class.

Now recall our discussion of the local structure theorem for a smooth, irreducible G-variety Y, as well as the notation introduced in that context (see 5.1). If Y = G/H, then the group L_0 turns out to be precisely H* (see [16, Section 8] and [34, Proposition 8.14]).

Corollary 17. The pair (G, H) is a-regular if and only if the identity component of H* is abelian.

Proof. Since G/H is an affine variety, these two statements imply that L = Z_G(a), while Proposition 15 shows that (G, H) is a-regular if and only if Z_G(a) = T. Our task is therefore to prove that L = T if and only if the identity component of L_0 is abelian. The forward implication follows immediately from the inclusion L_0 ⊆ L, so that we only need to verify the opposite implication.
Note that L is a Levi factor of a parabolic subgroup of G, as discussed in 5.1. This means that L is connected and reductive, forcing the derived subgroup [L, L] to be connected as well. The inclusion [L, L] ⊆ L 0 thus shows [L, L] to be contained in the identity component in L 0 . If we now assume that this component is abelian, then [L, L] must also be abelian. It follows that L is itself abelian. Together with the inclusion T ⊆ L (see 5.1) and the fact that L is a connected, reductive subgroup of G, this last sentence implies that L = T . The proof is complete.
Corollary 17 can be used to easily assess a-regularity in several examples. To see this, we note that [20] fully describes the H-representation h⊥ in many cases. Each of these descriptions can be combined with the tables of Èlašvili [12, 13] to compute H*, after which Corollary 17 can be applied. We illustrate this in the following example.
Example 18. Consider the pair (G, H) = (SL_{p+q}, S(GL_p × GL_q)) with 1 ≤ p ≤ q. The vector space h⊥ is isomorphic to (C^p ⊗ (C^q)*) ⊕ ((C^p)* ⊗ C^q) as an H-representation. The Lie algebra of the generic stabilizer for this action is isomorphic to C^p ⊕ sl(q − p) if p < q and to C^{p−1} if p = q. Hence (G, H) is a-regular if and only if q − p ≤ 1.
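In the smallest instance p = q = 1 (a sanity check of our own), H = T is the diagonal maximal torus of SL_2 and h⊥ is the sum of the two root spaces, on which a generic point has finite T-stabilizer.

    % Generic stabilizer for T acting on h^perp = C_{2} \oplus C_{-2}:
    \[ \dim \mathfrak{h}_* = p - 1 = 0, \]
    % so (SL_2, T) is a-regular, consistent with q - p = 0 <= 1 and with
    % a_{G/T} = t from Example 12.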
We now formulate a numerical criterion for a-regularity in terms of spherical-geometric invariants. Recall that the rank rk_G(Y) of a G-variety Y is the dimension of a_Y. The complexity c_G(Y) of Y is the codimension of a generic B-orbit in Y. We then have the following equalities, which are due to Knop [16]:

(24) 2c_G(G/H) + rk_G(G/H) = dim G/H − dim H + dim H*,
(25) rk_G(G/H) = rk(g) − dim T*,

where T* is a maximal torus of H*.

Corollary 19. The pair (G, H) is a-regular if and only if 2(c_G(G/H) + rk_G(G/H)) = dim G/H − dim H + rk(g).

Proof. Corollary 17 shows that (G, H) is a-regular if and only if the identity component in H* is abelian. This is in turn equivalent to dim H* = dim T*, and the result then follows from (24) and (25).
The criteria established in Corollaries 17 and 19 become effective once we are able to either determine the Cartan space a(g, h) or the generic stabilizer H * . The latter is difficult to accomplish in full generality, but Losev's work [25] makes the former achievable in a systematic way. Losev's method features prominently in the next subsection.
5.3. The Cartan space of a homogeneous affine variety. Continuing with the notation used in 5.2, we recall Losev's algorithm [25] for determining the Cartan space of (G, H). We begin with the following definition (cf. [34, Section 10]).

Definition 20. The pair (G, H) or the corresponding pair (g, h) is called:
(i) decomposable if there exist non-zero proper ideals g_1, g_2 in g and ideals h_1, h_2 in h such that g = g_1 ⊕ g_2, h = h_1 ⊕ h_2, and h_i ⊆ g_i for i = 1, 2;
(ii) indecomposable if it is not decomposable.

We note that the Cartan space of a decomposable pair (g_1 ⊕ g_2, h_1 ⊕ h_2) is a(g_1, h_1) ⊕ a(g_2, h_2). At the same time, observe that (x_1, x_2) ∈ g_1 ⊕ g_2 is a regular element if and only if x_1 and x_2 are regular elements of g_1 and g_2, respectively. These last two sentences imply that (g_1 ⊕ g_2, h_1 ⊕ h_2) is a-regular if and only if (g_1, h_1) and (g_2, h_2) are a-regular. Recognizing its relevance to later arguments, we record this conclusion as follows.
Lemma 21. Consider a collection of indecomposable pairs (g_i, h_i), i = 1, ..., n, and suppose that our pair (g, h) is given by

(26) (g, h) = (g_1 ⊕ ⋯ ⊕ g_n, h_1 ⊕ ⋯ ⊕ h_n).

Then (g, h) is a-regular if and only if (g_i, h_i) is a-regular for all i = 1, ..., n.
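To see Lemma 21 in action, here is a small decomposable example of our own making.

    % Take each summand to be an ideal:
    \[ (\mathfrak{g}, \mathfrak{h}) = (\mathfrak{sl}_2 \oplus \mathfrak{sl}_2, \; \mathfrak{t} \oplus \mathfrak{sl}_2). \]
    % Example 12 gives a(sl_2, t) = t, which contains regular elements, while
    % a(sl_2, sl_2) = {0} does not. By Lemma 21, (g, h) is therefore not
    % a-regular, even though its first indecomposable summand is.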
Remark 22. Note that our pair (g, h) is necessarily expressible in the form (26), i.e. there exist indecomposable pairs (g i , h i ), i ∈ {1, . . . , n}, such that g i (resp. h i ) is an ideal in g (resp. h) for all i and (26) holds. This observation follows from Definition 20 via a straightforward induction argument, and it will be used implicitly in some of our arguments.
We now resume the main discussion. Note that for a subalgebra j ⊆ h, we have an inclusion a(g, h) ⊆ a(g, j) of Cartan spaces. It follows that a(g, h) ⊆ a(g, j) for all ideals j ≤ h, which leads to the following definition (cf. [25, Definition 1.1]).
Definition 23. A reductive subalgebra j ⊆ g is called essential if for every proper ideal i ≤ j, the inclusion a(g, j) ⊆ a(g, i) is strict.

Now consider the Lie algebra h* of H*, where H* is the generic stabilizer for the H-action on h⊥ (see 5.2). Losev shows that h* generates an ideal h_ess ≤ h that is an essential subalgebra of g. This essential subalgebra is reductive and has the following properties:
• h_ess ≤ h is the unique ideal of h for which a(g, h) = a(g, h_ess);
• h_ess is maximal (for inclusion) among the ideals of h that are essential subalgebras of g.
In principle, this reduces the computation of a(g, h) to the task of determining h_ess and a(g, h_ess).
The preceding discussion allows us to sketch the main results of [25]. Losev classifies the essential subalgebras j ⊆ g that are semisimple, and in each such case he presents a(g, j) as the span of certain linear combinations of fundamental weights. This information may also be used to determine the Cartan space when j is non-semisimple, provided that one knows the center of j. To this end, Losev gives an algorithm for calculating the centers of non-semisimple essential subalgebras.
5.4. Preliminaries for the classifications.
We now discuss four items that are crucial to the classifications in 5.5. Our first item is the following elementary observation.
Observation 24. Let r be a complex reductive Lie algebra with a reductive ideal i ≤ r. If j is a reductive ideal in i, then j is also an ideal in r. This follows immediately from the decomposition of a reductive Lie algebra into a direct sum of its center and simple ideals, and it will be used implicitly in some of what follows.
We also need the following definition, which serves to formalize a standard idea. Definition 25. Let r 1 and r 2 be complex Lie algebras with respective subalgebras s 1 and s 2 . We refer to (r 1 , s 1 ) and (r 2 , s 2 ) as being conjugate if r 1 = r 2 and s 1 = φ(s 2 ) for some Lie algebra automorphism φ : r 1 → r 1 .
With this in mind, we have the following lemma.
Lemma 26. Assume that g is simple and let h ⊆ g be a reductive subalgebra.
(i) If (g, [h_ess, h_ess]) is not conjugate to a pair in [25, Table 1] or [25, Table 2], then (g, h) is a-regular.
(ii) If (g, [h_ess, h_ess]) is conjugate to a pair in [25, Table 1] or [25, Table 2], then (g, h) is a-regular if and only if (g, h_ess) is conjugate to a pair in Table I below.

Proof. For (i), note that [h_ess, h_ess] is a semisimple essential subalgebra of g whenever it is non-trivial, in which case Losev's classification would place (g, [h_ess, h_ess]) in [25, Table 1] or [25, Table 2]. The hypothesis of (i) therefore forces [h_ess, h_ess] = {0}, i.e. h_ess is abelian. Since h_ess is also reductive, one can find a Lie algebra automorphism of g that sends h_ess into t. This implies that (g, h) is conjugate to a pair whose essential ideal lies in t, so we may therefore assume that h_ess ⊆ t. The inclusion h_ess ⊆ t yields t = a(g, t) ⊆ a(g, h_ess) ⊆ t, so that a(g, h_ess) = t. Recalling the properties of h_ess discussed in 5.3, the first equality implies that t = a(g, h), and essentiality then gives h_ess = {0}. The a-regularity of (g, h) now follows from the fact that t ∩ g_reg ≠ ∅, completing our proof of (i).

To prove (ii), we first assume that (g, h_ess) is conjugate to a pair in Table I. An inspection of Losev's tables shows that a(g, h_ess) then contains a regular element of g, while 5.3 gives a(g, h) = a(g, h_ess). The previous two sentences together show that (g, h) is a-regular.
For the converse, we suppose that h_ess ≤ h is not the trivial ideal. The discussion above implies that h_ess cannot be abelian, so that [h_ess, h_ess] ≤ h is a semisimple and non-trivial ideal. It then follows from Losev's setup in [25] that (g, [h_ess, h_ess]) is conjugate to a pair in [25, Table 1] or [25, Table 2]. Hence there are three mutually exclusive possibilities: (g, [h_ess, h_ess]) is conjugate to a pair in:
(a) [25, Table 1], but not to one in [25, Table 2];
(b) [25, Table 1] and [25, Table 2];
(c) [25, Table 2], but not to one in [25, Table 1].
In each instance, we simply use Losev's tables to inspect all possible Cartan spaces a(g, h) and determine whether each has a regular element. We first suppose that (a) holds. Then (g, h) is a-regular precisely when (g, h ess ) is conjugate to one of the items 2 (with k = n/2, (n + 1)/2), 6 (with n = 4), 7 or 21 from [25, Table 1]. These pairs constitute the first five lines of Table I. Now suppose that (b) holds. Then (g, [h ess , h ess ]) is conjugate to one of the items 1, 2 (with n/2 < k ≤ n − 2), 10 or 19 from [25, Table 1]. A case-by-case examination reveals that (g, [h ess , h ess ]) is not a-regular, i.e. a(g, [h ess , h ess ]) ∩ g reg = ∅. It then follows from the inclusion a(g, h ess ) ⊆ a(g, [h ess , h ess ]) that a(g, h ess ) ∩ g reg = ∅. Since a(g, h ess ) = a(g, h), this means that (g, h) is not a-regular.
We last suppose that (c) holds, in which case (g, [h ess , h ess ]) is conjugate to item 6 or 7 in Table I. As argued above, the pair (g, h) is necessarily a-regular.
For the last preliminary topic, let H be any reductive subgroup of our connected semisimple group G. The coordinate ring C[G/H] then decomposes into certain irreducible highest-weight G-modules, and the highest weights appearing in this decomposition are the so-called spherical weights. These weights form a finitely generated semigroup Λ+(G, H). With this in mind, we record the following immediate consequence of [34, Proposition 5.14].
Lemma 27. If H is any closed, reductive subgroup of G, then a(g, h)* is spanned by Λ+(G, H).

5.5. The classifications. We maintain the notation used in 5.3, and now address the classification of a-regular pairs (G, H) (equivalently, a-regular pairs (g, h)) in each of the following three cases: H is a Levi subgroup (5.5.1), H is symmetric (5.5.2), and H is simultaneously reductive, spherical, and non-symmetric (5.5.3). In each case, we reduce to the classification of strictly indecomposable, a-regular pairs. We list all conjugacy classes of such pairs in each of the cases 5.5.2 and 5.5.3, where the notion of conjugacy class comes from Definition 25.
Remark 28. We emphasize that the classification of strictly indecomposable pairs works differently in each of the above-mentioned cases. In the case of Levi subgroups H ⊆ G, the classification is almost entirely based on Losev's work [25]. This is in contrast to the case of symmetric subgroups, in which we appeal to representation-theoretic results about symmetric spaces. Several of these results are not applicable to the case of a reductive spherical H ⊆ G, for which we instead harness the works of Brion [7], Krämer [21], and Mikityuk [28].
Remark 29. Note that every symmetric subgroup of G is reductive and spherical (see [34, Theorem 26.14]). The techniques and arguments in 5.5.3 thereby imply the classification results in 5.5.2. Despite this, we believe that the representation-theoretic approach taken in 5.5.2 is independently interesting and worthwhile. Further distinctions between 5.5.2 and 5.5.3 are discussed in Remark 32 and Example 36.
5.5.1. Levi subgroups.
Assume that H is a Levi subgroup of G, by which we mean that H is a Levi factor of a parabolic subgroup P ⊆ G. It follows that h is a Levi factor of a parabolic subalgebra p ⊆ g. Now let g = g_1 ⊕ · · · ⊕ g_n be the decomposition of g into its simple ideals g_1, . . . , g_n. The parabolic subalgebra p is then a sum of parabolic subalgebras p_i ⊆ g_i for i = 1, . . . , n, implying that h is a sum of Levi factors h_i ⊆ p_i, i = 1, . . . , n. An application of Lemma 21 then shows that (g, h) is a-regular if and only if (g_i, h_i) is a-regular for all i = 1, . . . , n. It therefore suffices to assume that g is simple. Our classification then takes the following form.
Proposition 30. Assume that g is simple and that h is a Levi subalgebra of g with h_ess ≠ {0}. The pair (g, h) is then a-regular if and only if it is conjugate to a pair in Table II. In this table, l_2 is any reductive subalgebra of sl_{2n+1} that satisfies the following conditions: sl_{n+1} ∩ l_2 = {0}, l_2 commutes with sl_{n+1}, and sl_{n+1} ⊕ l_2 is a Levi subalgebra of sl_{2n+1}.

    g            l
1   sl_{2k}      s(gl_k ⊕ gl_k)
2   sl_{2k−1}    s(gl_k ⊕ gl_{k−1})
3   e_6          sl_6 ⊕ C
4   sl_{2n+1}    sl_{n+1} ⊕ l_2

TABLE II. Line 3 is to be understood as follows. Up to Lie algebra automorphism, e_6 contains precisely one subalgebra isomorphic to sl_6 ⊕ sl_2 (see [11, Theorem 5.5, Table 12, and Theorem 11.1]). By choosing a Cartan subalgebra of sl_2 and identifying it with C, one obtains a unique automorphism class of subalgebras in e_6 that are isomorphic to sl_6 ⊕ C. This turns out to be a class of Levi subalgebras in e_6, and the reader may take any of these to be the subalgebra l in line 3.
Proof. We first assume that (g, h) is conjugate to a pair in Table II. A case-by-case analysis reveals that each pair in Table II is a-regular, implying that (g, h) is a-regular. Conversely, assume that (g, h) is a-regular. Lemma 26(ii) then implies the existence of an ideal i in h for which (g, i) is conjugate to a pair in Table I. We will therefore begin by finding the pairs in Table I for which this is possible. For each such pair (r, j), we will subsequently find the Levi subalgebras l ⊆ r that contain j as an ideal. Note that (g, h) will then be conjugate to one of the pairs (r, l) arising in this way. It will then suffice to observe that the aforementioned pairs (r, l) appear in Table II.
Let (r, j) be any of the pairs appearing in lines 3, 4, and 7 of Table I. Observe that the Dynkin diagram of j is not a subdiagram of the Dynkin diagram of r. At the same time, the Dynkin diagram of any ideal in a Levi subalgebra of g must be a subdiagram of the Dynkin diagram of g. It follows that (g, i) cannot be conjugate to (r, j) for any ideal i ≤ h.
In light of the previous paragraph, we may restrict our attention to the pairs in lines 1, 2, 5, and 6 of Table I. Let (r, j) be any such pair, recalling that the embedding of j into r is described in [25, Section 6] (cf. the caption of Table I). This description is easily seen to imply that j is an ideal in a Levi subalgebra of r. If (r, j) is in one of lines 1, 2, and 5 from Table I, then the Dynkin diagram of j uniquely determines a Levi subalgebra l ⊆ r that contains j as an ideal. The pair (r, l) is recorded in Table II. If (r, j) is in line 6 from Table I, i.e. r = sl_{2n+1} and j = sl_{n+1}, then there are several Levi subalgebras l ⊆ r that contain j as an ideal. The Dynkin diagram of any such l is a subdiagram of the Dynkin diagram of sl_{2n+1}, and it contains the Dynkin diagram of sl_{n+1} as a connected component. It follows that l = sl_{n+1} ⊕ l_2 for some reductive subalgebra l_2 ⊆ sl_{2n+1} that satisfies the desired hypotheses.

5.5.2. Symmetric subgroups. Using the notation established in 5.1 and the introduction of Section 3, we assume that the subgroup H ⊆ G is symmetric. This means that H is an open subgroup of G^θ, the subgroup of fixed points of an involutive algebraic group automorphism θ : G → G. It follows that (g, h) is a symmetric pair, i.e. h coincides with the set of θ-fixed vectors g^θ ⊆ g for the corresponding involutive Lie algebra automorphism θ : g → g.
Lemma 31. If h is any reductive subalgebra of g, then (g, h) is a symmetric pair if and only if there exist strictly indecomposable symmetric pairs (g_i, h_i), i ∈ {1, . . . , n}, such that g_i (resp. h_i) is an ideal in g (resp. h) for all i and g = g_1 ⊕ · · · ⊕ g_n and h = h_1 ⊕ · · · ⊕ h_n.

Proof. The backward implication follows from the following simple observation: if (g_1, h_1) and (g_2, h_2) are symmetric pairs, then (g_1 ⊕ g_2, h_1 ⊕ h_2) is also a symmetric pair.
To prove the forward implication, assume that (g, h) is a symmetric pair and let θ : g → g be an involutive automorphism for which h = g^θ. Note that each simple ideal of g is either θ-stable or interchanged by θ with a different simple ideal. We may therefore identify g with g_1^{⊕2} ⊕ · · · ⊕ g_s^{⊕2} ⊕ g_{s+1} ⊕ · · · ⊕ g_{s+t} for simple Lie algebras g_1, . . . , g_{s+t}, such that θ becomes the following map: (x, y) → (y, x) on each summand g_i^{⊕2} and x → θ_j(x) on each summand g_j, where θ_j : g_j → g_j is an involutive automorphism. It follows that h = diag(g_1) ⊕ · · · ⊕ diag(g_s) ⊕ g_{s+1}^{θ_{s+1}} ⊕ · · · ⊕ g_{s+t}^{θ_{s+t}}, where diag(g_i) := {(x, y) ∈ g_i^{⊕2} : x = y} for all i ∈ {1, . . . , s}. In light of the above, it suffices to prove that the symmetric pairs (g_i^{⊕2}, diag(g_i)) and (g_j, g_j^{θ_j}) are strictly indecomposable for all i ∈ {1, . . . , s} and j ∈ {s + 1, . . . , s + t}. The strict indecomposability of the latter pair follows from the fact that g_j is simple. Now observe that the simplicity of g_i implies that diag(g_i) is a simple Lie algebra. It follows that (g_i^{⊕2}, diag(g_i)) is strictly indecomposable if and only if it is indecomposable. However, since diag(g_i) is simple, the decomposability of (g_i^{⊕2}, diag(g_i)) would entail diag(g_i) being contained in a proper ideal of g_i^{⊕2}. This is not possible, meaning that (g_i^{⊕2}, diag(g_i)) is indeed strictly indecomposable. The proof is complete.
Remark 32. One immediate consequence is that every indecomposable symmetric pair (g, h) is strictly indecomposable. This is not true of an arbitrary reductive spherical pair (g, h) (see Example 36).
Together with Lemma 21, Lemma 31 reduces the classification of a-regular symmetric pairs to the classification of a-regular, strictly indecomposable symmetric pairs. Let (g, h) be a pair of the latter sort, and let (G, H) denote an associated pair of groups. Let us also consider an involutive automorphism θ : g → g satisfying h = g^θ. This forms part of the eigenspace decomposition g = h ⊕ q, where q ⊆ g is the (−1)-eigenspace of θ. One can then find a maximal abelian subspace c ⊆ q, meaning that c is a vector subspace of q that is maximal with respect to the following condition: c is abelian and consists of semisimple elements in g (cf. [33, Corollary 37.5.4]). Now recall our discussion of the generic stabilizer H_* ⊆ H and its Lie algebra h_* ⊆ h (see 5.2 and 5.3). At the same time, let z_h(Y) denote the subalgebra of all x ∈ h that commute with every vector in a subset Y ⊆ g.
Lemma 33. We have h_* = z_h(c); in other words, z_h(c) is a generic stabilizer subalgebra for the H-action on q.

Proof. The H-module isomorphisms h^⊥ ≅ g/h ≅ q imply that H_* is a generic stabilizer for the H-action on q. Note also that H · c ⊆ q is dense (see [33, Lemma 38]), so that the H-stabilizer of a generic element x ∈ c is a generic stabilizer; for such x one has z_h(x) = z_h(c), from which the lemma follows.

We now explain the classification of a-regular, strictly indecomposable symmetric pairs (g, h). Up to conjugation (see Definition 25), such pairs are parametrized by Satake diagrams (see [34, Section 26.5]). The Satake diagram for a symmetric pair (g, h) is the Dynkin diagram of g, together with extra decorations that encode the associated involution θ : g → g. Part of this decoration consists of painting some of the nodes black; these are precisely the simple roots of z_h(c). At the same time, recall that Lemma 33 identifies z_h(c) with h_*. Appealing to Corollary 17, we see that the a-regularity of (g, h) is equivalent to the Satake diagram of (g, h) having none of its nodes painted black. This leads to the following result.
Proposition 35. A strictly indecomposable symmetric pair (g, h) is a-regular if and only if it is conjugate to one of the pairs in Table III. In this table, s denotes any simple Lie algebra.
    g            h
1   sl_n         so_n
2   sl_{2n+1}    sl_{n+1} ⊕ sl_n ⊕ C
    sl_{2n}      sl_n ⊕ sl_n ⊕ C
3   so_{2n+1}    so_{n+1} ⊕ so_n
    so_{2n}      so_n ⊕ so_n
    so_{2n}      so_{n−1} ⊕ so_{n+1}
4   sp_{2n}      gl_n
5   e_6          sp_8
6   e_6          sl_6 ⊕ sl_2
7   e_7          sl_8
8   s ⊕ s        diag(s)

TABLE III.

Proof. Following the discussion above, we only need to list the symmetric pairs whose Satake diagrams have no black nodes. These diagrams can be found in [34].

5.5.3. Reductive spherical subgroups. We shall sometimes also require (G, H) and (g, h) to be non-symmetric, noting that the classification in 5.5.2 renders this a harmless assumption.
Example 36. In contrast to the situation considered in Remark 32, an indecomposable reductive spherical pair need not be strictly indecomposable. Set g = sl_{n+1} ⊕ sl_2 and let h ⊆ g be the image of an embedding whose derived subalgebra decomposes as a sum of ideals in the two simple factors, while the center of h projects non-trivially to both factors. This is an indecomposable spherical pair, but it is not strictly indecomposable.
We begin by assuming that our reductive spherical pair (G, H) is strictly indecomposable. Now note that Lemma 27 allows us to investigate a-regularity via Λ+(G, H), and the case-by-case analyses of [21] thereby become important. The aforementioned reference gives explicit semigroup generators of Λ+(G, H) if G is simple. If G is only semisimple, then a description of Λ+(G, H) can be obtained from [3, Table 1] as follows. If h has a trivial center, then generators of Λ+(G, H) are given in [3, Table 1]. If h has a non-trivial center, then [3, Table 1] provides a finite set {(λ_1, χ_1), . . . , (λ_s, χ_s)} of generators for the so-called extended weight semigroup of (G, H). The λ_i are dominant weights for G and the χ_i are characters of H. The weight semigroup Λ+(G, H) identifies with the collection of all points in the extended weight semigroup that have the form (λ, 0). This amounts to the following statement: a dominant weight λ belongs to Λ+(G, H) if and only if λ = n_1 λ_1 + · · · + n_s λ_s for some non-negative integers n_i satisfying n_1 χ_1 + · · · + n_s χ_s = 0. Together with an inspection of [21, Table 1] and [3, Table 1], this discussion yields the following fact.
Lemma 38. If (G, H) is a strictly indecomposable reductive spherical pair that is not symmetric, then (G, H) is a-regular if and only if (g, h) is conjugate to a pair in Table IV.
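The description of Λ+(G, H) via the extended weight semigroup is effectively algorithmic. The following is a minimal sketch, assuming that weights and characters are encoded as integer vectors; the search bound max_coeff and the toy generators are hypothetical stand-ins, and a genuine verification would require the actual generator data from [21] and [3].

    from itertools import product

    def in_weight_semigroup(lam, gens, max_coeff=10):
        """Decide, by a bounded search, whether the dominant weight `lam`
        lies in Lambda+(G, H).  `gens` lists generators (lambda_i, chi_i)
        of the extended weight semigroup, each encoded as a pair of integer
        tuples.  Membership requires lam = sum n_i * lambda_i with the n_i
        non-negative integers satisfying sum n_i * chi_i = 0."""
        s = len(gens)
        dim_w, dim_c = len(lam), len(gens[0][1])
        for coeffs in product(range(max_coeff + 1), repeat=s):
            weight = tuple(sum(n * g[0][k] for n, g in zip(coeffs, gens))
                           for k in range(dim_w))
            char = tuple(sum(n * g[1][k] for n, g in zip(coeffs, gens))
                         for k in range(dim_c))
            if weight == lam and all(c == 0 for c in char):
                return True
        return False

    # Toy generators (made up for illustration): the two characters cancel.
    gens = [((1, 0), (1,)), ((0, 1), (-1,))]
    print(in_weight_semigroup((1, 1), gens))   # True: n = (1, 1)
    print(in_weight_semigroup((1, 0), gens))   # False: chi does not vanish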
Lemma 39. Let (g, h) be a strictly indecomposable reductive spherical pair. If (g, [h, h]) is a-regular, then (g, h) is a-regular.
Proof. The statement is obviously true if h is semisimple, so we assume h to be non-semisimple. Let us prove the statement by contraposition, assuming that (g, h) is a strictly indecomposable reductive spherical pair that is not a-regular. Since h is non-semisimple, an inspection of the classifications of Krämer [21], Mikityuk [28], and Brion [7] shows that (g, h) is conjugate to a pair in Table V below. It therefore suffices to prove the following claim: if (g, h) is conjugate to a pair in Table V, then (g, [h, h]) is not a-regular.
    so_{10}                            so_7 ⊕ so_2
7   sl_n ⊕ sp_{2m} (n > 6 or m > 2)    gl_{n−2} ⊕ sl_2 ⊕ sp_{2m−2}

TABLE V.

Suppose that (g, h) is conjugate to a pair in lines 1, 2, 3, or 7 of Table V. It then follows that (g, [h, h]) is a strictly indecomposable reductive spherical pair, as it appears in at least one of the classifications of Krämer [21], Mikityuk [28], and Brion [7]. At the same time, one can verify that (g, [h, h]) is not conjugate to a pair in Table III or Table IV. Proposition 35 and Lemma 38 then imply that (g, [h, h]) is not a-regular.

Now assume that (g, h) is conjugate to one of the remaining pairs in Table V. Let (G, H) be a corresponding reductive spherical pair of groups, and let us take G to be simply-connected.

We now study the a-regular, indecomposable reductive spherical pairs. Let (g, h) be an indecomposable reductive spherical pair and note that (g, [h, h]) has the following form (cf. Remark 22): g = g_1 ⊕ · · · ⊕ g_n and [h, h] = h_1 ⊕ · · · ⊕ h_n, where for all i ∈ {1, . . . , n}, h_i is a semisimple ideal in [h, h], g_i is a reductive ideal in g containing h_i, and (g_i, h_i) is indecomposable. Note that each pair (g_i, h_i) is actually strictly indecomposable, owing to the fact that h_i is semisimple.
Let π_i : g → g_i denote the projection onto the i-th factor and set z_i := π_i(z(h)), where z(h) is the center of h. It is clear that z_i is reductive and that it commutes with h_i, from which we conclude that h̃_i := h_i ⊕ z_i ⊆ g_i is a reductive subalgebra. Now set h̃ := h̃_1 ⊕ · · · ⊕ h̃_n. It follows by construction that [h, h] ⊆ h̃ and z(h) ⊆ z_1 ⊕ · · · ⊕ z_n ⊆ h̃, implying that h ⊆ h̃ and b + h ⊆ b + h̃ for any Borel subalgebra b ⊆ g. Since (g, h) is a reductive spherical pair, the previous sentence shows (g, h̃) to be a reductive spherical pair. Our next result establishes that (g_i, h̃_i) is a reductive spherical pair for all i ∈ {1, . . . , n}.
Lemma 40. Let (g, h) be an indecomposable reductive spherical pair and use the notation from above. Then (g_i, h̃_i) is a strictly indecomposable reductive spherical pair for all i ∈ {1, . . . , n}.
Proof. Since (g, h̃) is spherical, there exists a Borel subalgebra b ⊆ g satisfying b + h̃ = g. The decomposition g = g_1 ⊕ · · · ⊕ g_n gives rise to a decomposition of the form b = b_1 ⊕ · · · ⊕ b_n, where b_i is a Borel subalgebra of g_i for all i ∈ {1, . . . , n}. Now note that b_i + h̃_i = g_i for all i ∈ {1, . . . , n} if and only if b + h̃ = g. Recalling that (g, h̃) is a reductive spherical pair, the previous sentence implies that (g_i, h̃_i) is a reductive spherical pair for all i ∈ {1, . . . , n}.
To complete the proof, we observe that [h̃_i, h̃_i] = h_i for all i ∈ {1, . . . , n}. The strict indecomposability of (g_i, h̃_i) thus follows from the indecomposability of (g_i, h_i).
We may now relate the a-regularity of (g, h) to that of (g, h̃).
Proposition 41. Let (g, h) be an indecomposable reductive spherical pair and use the notation from above. Then (g, h) is a-regular if and only if (g, h̃) is a-regular.
Proof. The inclusion of subalgebras [h, h] ⊆ h ⊆ h̃ implies the inclusion of Cartan spaces a(g, h̃) ⊆ a(g, h) ⊆ a(g, [h, h]), from which we deduce the backward implication.
For the forward implication, suppose that (g, h) is a-regular. The inclusion a(g, h) ⊆ a(g, [h, h]) then shows (g, [h, h]) to be a-regular, which is equivalent to all of the strictly indecomposable pairs (g_i, h_i) being a-regular (see Lemma 21). Since (g_i, h̃_i) is a strictly indecomposable reductive spherical pair (see Lemma 40) with [h̃_i, h̃_i] = h_i, Lemma 39 implies that (g_i, h̃_i) must be a-regular. It then follows from Lemma 21 that (g, h̃) is a-regular.
We now connect this discussion of a-regularity for indecomposable reductive spherical pairs to the overarching objective: a classification of a-regular reductive spherical pairs. The following lemma is a crucial step in this direction.
Lemma 42. If h is any reductive subalgebra of g, then (g, h) is a reductive spherical pair if and only if there exist indecomposable reductive spherical pairs (g_i, h_i), i ∈ {1, . . . , n}, such that g_i (resp. h_i) is an ideal in g (resp. h) for all i and g = g_1 ⊕ · · · ⊕ g_n and h = h_1 ⊕ · · · ⊕ h_n.

Proof. By virtue of Remark 22, one can find indecomposable pairs (g_i, h_i) satisfying the above-advertised properties. The proof then becomes entirely analogous to that of Lemma 40.
The classification of a-regular reductive spherical pairs is now described as follows. By virtue of Lemmas 21 and 42, it suffices to classify the indecomposable reductive spherical pairs that are a-regular. We thus suppose that (g, h) is any indecomposable reductive spherical pair. If (g, h) is strictly indecomposable, then it is a-regular if and only if it is conjugate to a pair in Table III or Table IV. If (g, h) is not strictly indecomposable, then we consider the associated pair (g, h̃). The a-regularity of (g, h) is then equivalent to that of (g, h̃) (see Proposition 41). This is in turn equivalent to every strictly indecomposable pair (g_i, h̃_i) being a-regular (see Lemma 21), which can be assessed via Tables III and IV.

Remark 43. One might ask about the feasibility of classifying the a-regular reductive spherical pairs (G, H) satisfying c_G(G/H) > 0. The complexity-one case might be tractable, largely because the papers [2] and [31] classify all strictly indecomposable reductive spherical pairs (G, H) with c_G(G/H) = 1. One can thereby determine which of the strictly indecomposable, complexity-one pairs are a-regular. In analogy with 5.5.2 and 5.5.3, this might imply a classification of all reductive spherical (G, H) with c_G(G/H) = 1. The case of c_G(G/H) > 1 remains unclear to us.
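The classification scheme just summarized is mechanical once the tables are in hand. The sketch below is illustrative only: the string labels and table entries are hypothetical stand-ins for the pairs of Tables III and IV, and the three callbacks stand in for the decomposition of Lemma 42, the strict-indecomposability test, and the passage to the associated pair of Proposition 41.

    # Hypothetical lookup of a-regular strictly indecomposable pairs; the
    # real data are the pairs listed in Tables III and IV.
    A_REGULAR_TABLE = {("sl_n", "so_n"), ("sp_2n", "gl_n"), ("e6", "sp_8")}

    def is_a_regular(pair, decompose, strictly_indecomposable, associated_pieces):
        """Decide a-regularity of a reductive spherical pair.

        decompose(pair): indecomposable constituents (Lemma 42).
        strictly_indecomposable(pair): strict-indecomposability test.
        associated_pieces(pair): the strictly indecomposable pieces of the
        associated pair (Lemma 40).  By Lemma 21, the whole pair is
        a-regular iff every constituent is."""
        for p in decompose(pair):
            if strictly_indecomposable(p):
                if p not in A_REGULAR_TABLE:      # check Tables III and IV
                    return False
            else:
                # Proposition 41 reduces p to its associated pair, and
                # Lemma 21 reduces that to its strictly indecomposable pieces.
                if any(q not in A_REGULAR_TABLE for q in associated_pieces(p)):
                    return False
        return True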
Effect of meniscus modelling assumptions in a static tibiofemoral finite element model: importance of geometry over material
Finite element studies of the tibiofemoral joint have seen increasing use in research, with attention often placed on the material models. Few studies assess the effect of meniscus modelling assumptions in image-based models on contact mechanics outcomes. This work aimed to assess the effect of modelling assumptions of the meniscus on knee contact mechanics and meniscus kinematics. A sensitivity analysis was performed using three specimen-specific tibiofemoral models and one generic knee model. The assumptions in representing the meniscus attachment on the tibia (shape of the roots and position of the attachment), the material properties of the meniscus, the shape of the meniscus and the alignment of the joint were evaluated, creating 40 model instances. The values of material parameters for the meniscus and the position of the root attachment had a small influence on the total contact area but not on the meniscus displacement or the force balance between condyles. Using 3D shapes to represent the roots instead of springs had a large influence in meniscus displacement but not in knee contact area. Changes in meniscus shape and in knee alignment had a significantly larger influence on all outcomes of interest, with differences two to six times larger than those due to material properties. The sensitivity study demonstrated the importance of meniscus shape and knee alignment on meniscus kinematics and knee contact mechanics, both being more important than the material properties or the position of the roots. It also showed that differences between knees were large, suggesting that clinical interpretations of modelling studies using single geometries should be avoided.
Introduction
The use of computational models in healthcare is increasing, for a range of purposes including research and development, preclinical testing and clinical use. In any of these applications, there is a need to understand the context in which specific computational models have been developed, their applicability and limitations (Mengoni 2021; Viceconti et al. 2021). For example, in the case of interventions for knee osteoarthritis, finite element models of the knee may be developed to assess specifically the effect of the intervention on knee kinematics (e.g. Mootanah et al. 2014; Steineman et al. 2020; Shriram et al. 2021), or on cartilage pressure (e.g. D'Lima et al. 2009; Khoshgoftar et al. 2015; Xu et al. 2022) but may not be accurate for both types of outputs. Further, while validation studies often replicate an equivalent experimental (or computational) protocol, they often do not report on how sensitive the model outputs are to its inputs or which assumptions can be modified and still produce a "valid" outcome. Understanding the sensitivity of results to input parameters can contribute to defining the contexts for which a modelling methodology remains valid, and those for which it should not be used.
For example, for tibiofemoral models developed to analyse the stress or contact distribution in the knee cartilage, there is often little information provided on the effect of the assumptions made to model the menisci. In this type of model, the meniscus is often modelled as a transversely isotropic linearly elastic material (as reviewed in Imeni et al. 2020), with the menisci circumferential modulus three to ten times larger than the axial modulus, and little variation in the latter (e.g. Meakin et al. 2013; Zielinska and Haut Donahue 2006; Carey et al. 2014; Klets et al. 2016; Meng et al. 2017; Steineman et al. 2020 all use a 20 MPa axial modulus, with a circumferential modulus varying between 120 and 200 MPa). While there has been analysis of the effect of this variation on the outcomes for the meniscus (Steineman et al. 2020), little has been reported on the assessment of the effect of meniscus modelling choices on the joint contact behaviour. Without this analysis, it is difficult to compare different studies to identify sources of variation in contact mechanics.
The aim of this work was to assess the sensitivity to geometrical and material assumptions used to model the meniscus for finite element models of the tibiofemoral joint developed for contact mechanics.
Methods
A generic model of a tibiofemoral condyle was used to assess the effect of the meniscus attachment shape on the contact mechanics and meniscus kinematics. In parallel, specimen-specific finite element models of human tibiofemoral joints, valid for cartilage contact mechanics, were used in a one-at-a-time sensitivity analysis to understand the effect of joint alignment and meniscus shape, material model parameters and attachment properties. All finite element models were nonlinear quasi-static and run with Abaqus 2019 (Simulia, Dassault Systèmes).
Generic models
Using the first generation Open-Knee model (Erdemir 2013), methods were developed to alter the type of meniscus root attachment of a single medial condyle model containing cartilage layers and meniscus only (Meng et al. 2017). The cartilage was modelled as a neo-Hookean solid (Cooper et al. 2020 and Cooper et al. 2023). The meniscus was modelled as a transversely isotropic linear elastic material (Table 1, baseline values) with an axial modulus equal to the equivalent elastic modulus of the cartilage and using a ratio between the circumferential and axial moduli of 3.5 (Cooper et al. 2023). A load of 250 N was applied to the proximal surface of the femoral cartilage alongside the femoral axis with other degrees of freedom fixed. The distal surface of the tibial cartilage was completely fixed. The cartilage and meniscus were meshed with linear hexahedral elements of reduced integration, with an element size of approximately 0.7 mm (Meng et al. 2017).
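To illustrate the material parametrisation, the short sketch below assembles engineering constants for a transversely isotropic meniscus from an axial modulus and the circumferential-to-axial ratio of 3.5 used here. The Poisson ratios and the in-plane shear relation are illustrative assumptions, not values taken from this study.

    def meniscus_engineering_constants(E_axial, ratio=3.5, nu_p=0.3, nu_tp=0.3):
        """Engineering constants for a transversely isotropic meniscus, with
        the circumferential (fibre) modulus set to ratio * E_axial.  The
        Poisson ratios nu_p (in-plane) and nu_tp (transverse) and the
        in-plane shear relation are assumed values, for illustration only."""
        E_circ = ratio * E_axial
        G_p = E_axial / (2.0 * (1.0 + nu_p))   # in-plane shear from isotropy
        return {
            "E_axial_MPa": E_axial,
            "E_circumferential_MPa": E_circ,
            "nu_in_plane": nu_p,
            "nu_transverse": nu_tp,
            "G_in_plane_MPa": G_p,
        }

    # A 20 MPa axial modulus (a common literature value, see the introduction)
    # gives a 70 MPa circumferential modulus at the baseline ratio of 3.5.
    print(meniscus_engineering_constants(20.0))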
Four root configurations were developed: (G1) using linear springs connecting each node of the truncated end of the meniscus to a single point on the tibial plateau, with the total spring stiffness equivalent to that of a material whose elasticity matches that of the meniscus in the circumferential direction (baseline model); (G2) replacing the linear springs by rigid connectors (kinematic coupling between the cartilage and the truncated end of the meniscus); and (G3 and G4) modelling explicit 3D shapes to anchor the truncated meniscus end surfaces to the tibial plateau. The 3D shapes were obtained by lofting from the truncated meniscus extremity to a circular area, along a path following the curvature of the medial cartilage. The root circular attachment surface areas were defined depending on root location, representing the range of core root area measurements in the literature (Ellman et al. 2014), creating (G3) small and (G4) large attachment areas (Table 2). This did not necessarily create a realistic root shape, but the method was used to assess the importance of the shape of the root with respect to the simplified models.
Specimen-specific models
In parallel, specimen-specific models of three human tibiofemoral joints were derived from previous work (Cooper et al. 2023), where they had been developed and the contact mechanics compared to in vitro data of the same knees in axial compression and full extension. Briefly, the models were based on MRI and CT data of the corresponding experimental specimens. The baseline meniscus model had a geometry based on MR and CT images, truncated to represent the meniscus roots, each modelled with 15 linear springs attached to one point on the tibia (Fig. 1i). The meniscus and cartilage materials for the specimen-specific baseline FE models were modelled in the same way as for the generic baseline model (Table 1). The bones were modelled as rigid solids. The tibia was completely fixed, and a load of 500 N was applied alongside the femoral axis with all other degrees of freedom completely fixed.
In this previous study (Cooper et al. 2023), the contact force through each condyle and the total contact area on the tibial plateau for each condyle (contact between meniscus and tibial cartilage and between femoral cartilage and tibial cartilage) were evaluated for knees tested without menisci and with the menisci retained, and compared to specimen-specific experimental data. Differences were within experimental error on most outputs for the double meniscectomy cases (Fig. 1ii). For the intact cases, the predicted contact pressure distributions were qualitatively well matched to the experimentally measured values (Fig. 1iii); quantitative comparison was not possible due to the contact extending beyond the edges of the pressure sensor experimentally.
In this study, a one-at-a-time sensitivity analysis was performed on each of the three knees (Table 3) for (S1) the values of the axial or circumferential moduli of the meniscus (five variations for each knee), for (S2) the root attachment location and shape (four variations), for (S3) the shape of the meniscus (1 variation) and for (S4) the relative alignment of the femur to the tibia (1 variation).
Fig. 1 Baseline data for this sensitivity study (Cooper et al. 2023). i Example axial view of an intact baseline model with the menisci (green) and their spring attachments (purple) to the tibial plateau (beige); ii comparison of experimental and computational values for two outputs of interest (contact area and contact force) on each condyle for double meniscectomy cases; iii qualitative comparison of computational and experimental contact areas for each knee

(S1) Effect of material properties

The range of material parameters tested (Table 1) was based on the variation of values in the literature (Cooper et al. 2020; Meakin et al. 2013; Zielinska and Haut Donahue 2006; Carey et al. 2014; Klets et al. 2016; Meng et al. 2017; Steineman et al. 2020), creating models where either the transverse modulus of the meniscus was changed with respect to the equivalent modulus of the cartilage (models labelled A and B), or the circumferential modulus of the meniscus with respect to its transverse modulus (models labelled C to E). Shear modulus values that depend on the elastic moduli were adjusted accordingly. The spring stiffness values for the root attachments were modified when the circumferential modulus of the meniscus was changed, so that the total stiffness of the root represented a material with a modulus equivalent to the meniscus circumferential modulus.
(S2) Effect of root attachments
The root attachment position in the baseline model was defined based on a prescribed length of the springs attaching each meniscus root to one point on the tibial plateau. Four other configurations were considered in this sensitivity study (Table 4): (models labelled F) a single point attachment on the tibial plateau at an anatomical position defined at a generic distance from the centre of the tibial tuberosity and the lateral or medial tibial eminence (LaPrade et al. 2014 and Johannsen et al. 2012); (models labelled G) a single point attachment slightly away from the anatomical location, representative of what could happen in a meniscus graft surgery; (models labelled H and J) a surface attachment where the springs were attached alongside a circumference encompassing an average core or total root attachment area (LaPrade et al. 2014 and Cruz et al. 2017).
(S3) Effect of meniscus shape
In the baseline model (Cooper et al. 2023), the meniscus shape had been simplified to ensure convergence of FE model solutions in free rotation conditions (not used here). In this study, it was possible to use a more realistic shape of the meniscus (Fig. 2), creating a model referred to as "segmented" (models labelled K). The meniscus was segmented from MRI DESS images (3T Siemens Magnetom Prisma, Erlangen, Germany, with a 3D Double Echo Steady State sequence, at a resolution of 0.36 × 0.36 × 0.7 mm³), with adjustments in the axial direction to maintain a surface conforming to the cartilage surface. The springs representing the roots were attached to the anatomical position defined in S2. Finally, the full extension of the knee, which had been used to replicate the in vitro testing conditions, was relaxed to a mid-stance 20° angle between the tibial axis and the femoral axis (models labelled L). This change of alignment was made with respect to the "segmented" models, with additional adjustments in the segmentation of the meniscus to maintain conformity with cartilage surfaces after rotation.
Outputs of interest
Finite element outputs analysed were those used in the baseline study (Cooper et al. 2023): total contact area on each condyle (two values per model) and contact force ratio through each condyle (two values per model), with the addition of the maximum relative displacement of the meniscus with respect to the cartilage on each tibial plateau (one value per model) and, where relevant, the maximum stretch values of the linear springs across the 60 springs (one value per model).
To compare the effect of the four different aspects considered in the sensitivity study (material parameters, root attachment representation, segmentation, and orientation), a Kruskal-Wallis test was performed for the contact area (the only output of interest with N > 3 in all four groups), with post hoc Mann-Whitney tests and Bonferroni corrections at a significance level of 0.01, using R version 4.2.2.
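The statistical comparison was run in R; an equivalent chain of tests can be sketched in Python as below, assuming the per-group contact-area values are available as lists. The sample values are hypothetical, and whether the correction is applied to the threshold or to the p-values is an implementation choice.

    from itertools import combinations
    from scipy.stats import kruskal, mannwhitneyu

    def compare_groups(groups, alpha=0.01):
        """Kruskal-Wallis test across the sensitivity groups, followed by
        pairwise Mann-Whitney tests with a Bonferroni-corrected threshold.
        `groups` maps a group name to a list of contact-area values."""
        names = list(groups)
        h_stat, p_kw = kruskal(*(groups[n] for n in names))
        print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4f}")
        pairs = list(combinations(names, 2))
        alpha_corr = alpha / len(pairs)           # Bonferroni correction
        for a, b in pairs:
            _, p = mannwhitneyu(groups[a], groups[b])
            flag = "significant" if p < alpha_corr else "n.s."
            print(f"{a} vs {b}: p = {p:.4f} ({flag})")

    # Hypothetical contact-area samples (mm^2) for the four groups:
    compare_groups({
        "material":     [490, 480, 500, 470, 485],
        "root":         [495, 488, 492, 484],
        "segmentation": [530, 545, 560],
        "alignment":    [540, 570, 555],
    })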
Results

Generic models
The baseline model (with roots assumed to behave as springs) led to a contact area of 491 mm² and a maximum relative displacement of the meniscus of 2.15 mm.
Modelling the root attachments with a rigid connector reduced the contact area by over 8% with respect to that of the baseline model, whereas modelling the roots as a 3D structure yielded differences in contact area smaller than 3% (Fig. 3). The 3D structures had a significant effect on the relative motion of the meniscus (Fig. 3), with an increase of over 15% from the baseline.
Specimen-specific models
The total contact area for the three specimen-specific models showed little sensitivity to the material properties of the menisci (Fig. 4i) or to the location of their root attachment (Fig. 4ii), with a smaller effect due to these changes than due to the differences between knees. Of these, the sensitivity to the axial modulus values showed the most systematic effect, with a decrease of the contact area when the axial modulus of the meniscus increased (data points D and E on each condyle in Fig. 4i). The total contact area for each condyle was more affected by adjusting the menisci segmentation (models K), with an added effect of adjusting the joint alignment from a full extension to a mid-stance position (models L) (Fig. 4iii). These differences were prominent for the first knee, which had a small baseline contact area on the lateral condyle. The effect of adjusting the segmentation or alignment was significantly larger than the effects caused by other changes (Fig. 4iv), with no significant difference between a change of segmentation only and the additional change of alignment.

The sensitivity of the force distribution between condyles (Table 5) followed a similar pattern to that of the total contact area on each condyle. The material parameter changes and the root location changes led to differences in contact forces much smaller than those produced by the changes due to the segmentation or the alignment.
The springs' stretch and the maximum displacement of the menisci on the tibia were more sensitive to the material properties (Figs. 5i and 6i) or to the location of the root attachment (Figs. 5ii and 6ii) than the contact outputs were, with the segmentation (Figs. 5iii and 6iii) and alignment (Figs. 5iv and 6iv) still dominating the effects.
Discussion
In this work, a systematic sensitivity analysis was performed to assess the effect of meniscus modelling assumptions on the contact behaviour of tibiofemoral finite element models, as well as on the displacement of the meniscus in such models. It demonstrated the importance of the meniscus shape over that of material parameters. The contact mechanics were quantified here by the contact area on each condyle and the force ratio through each condyle. With both these measures, variations in the meniscus segmentation and in the joint alignment had effects two to six times larger than variations in the material parameters used for the meniscus or in the assumptions made for root attachments. When assessed on specimen-specific models, the differences due to variations in the meniscus segmentation and in the joint alignment were also larger than the differences in outcomes seen between knees.
The local meniscus movements were quantified here as the maximum relative displacement between menisci and the tibia and by the maximum stretch in the menisci root attachments. With these measures, the different variations tested in this work did not have significant differences in outcomes. However, the variance between knees increased with a more precise segmentation of the meniscus, suggesting it could have a large effect on meniscus displacement. Moreover, when assessed on a generic single-condyle model, the shape of the root appears to be important for the observed meniscal displacement.
While modelling assumptions for the meniscus roots have been shown to affect the contact distribution (Haut Donahue et al. 2003), they have less influence on the peak contact pressure (Rooks et al. 2022), which is driven by the knee geometry and FE mesh. The current work shows that these assumptions also have little effect on the total contact area when the joint is constrained. A previous sensitivity study on a single knee also showed that the material properties of the meniscus were less important than their attachment in modelling meniscus displacement (Yao et al. 2006). One sensitivity study analysed the effect of using linear versus non-linear isotropic material models for the meniscus on the tibial cartilage contact pressure (Elmukashfi et al. 2022). It showed, using one knee model, that the contact pressure difference was small, with similar distribution but higher peak contact pressure and smaller contact area for the linear models, and that the majority of these changes occurred at the meniscus attachment sites. Studies that have shown that some material parameters were more important than others for the contact mechanics (e.g. Haut Donahue et al. 2003 and Yao et al. 2006) did not directly compare this to the importance of the shape or orientation.
While this sensitivity study was conducted on a limited number of anatomical variations (one generic model and three specimen-specific models), clear trends in the effect of meniscus modelling assumptions were observed. Within the tested variations of material parameters and root configurations, the changes for all outputs of interest were below or within the observed variation seen between knees using the baseline models. Conversely, the tested variations for the meniscus shape and the knee alignment resulted in differences that were larger than the differences between individual knees.

Fig. 5 Specimen-specific models - effect of modelling assumptions on the root spring stretch. Data visualisation as per Fig. 4
With all models being highly constrained (the only degree of freedom being in the direction of the applied force), large differences in behaviour were seen between knees in the model outcomes. This was true both for the baseline model and the variations in the sensitivity studies. In particular, knee 1 was loaded in such a way that the baseline model (and corresponding experiments in the previous work (Cooper et al. 2023)) had most of the load going through only the medial condyle, while the other two knees had a more balanced distribution of loads. This led to a much larger variation in behaviour for knee 1 when adjustments were made to the meniscus segmentation or the joint orientation, compared to the changes observed in the other two knees.
Few studies provide direct comparison of contact behaviour between knees for which samples have been tested experimentally with the same methodology or have been modelled computationally with a consistent methodology. The variation observed between knees in this study mirrors the qualitative differences in experimental contact distribution noted across three knees tested at varying flexion angles (Beidokhti et al. 2017), for which some condyles did not experience contact at all. The variation in computational contact area in this work is smaller than that measured experimentally across seven knees tested at their most natural alignment (Fukubayashi and Kurosawa 1980). The variation observed between two knees with similar constraints to the present study (Gu and Pandy 2020) was less pronounced than that seen in this work.
The variation in behaviour between knees, and dependence on specific joint shape and orientation, highlights that care should be taken when comparing modelling outcomes with previously published studies as a validation or verification process. As such, a "valid" comparison of modelling outcomes with single values from the literature is likely to be related to coincidence in reproducing with one shape what was obtained with another shape. However, a comparison with a range of values from the literature is likely to create an artificially large target for model validation. The confidence in validation processes for image-based models would be increased by comparing modelling outcomes directly with experimental outcomes of the same joints. Similarly, interpreting results of studies with a single image-based model should be done cautiously, as the mechanical role of the meniscus may be quite different from one knee to another. In particular, clinical interpretation of modelling studies using single geometries should be avoided.

Fig. 6 Specimen-specific models - effect of modelling assumptions on the relative displacement of the menisci on the tibia. Data visualisation as per Fig. 4
The findings of this sensitivity study have significant consequences when developing models through an inverse modelling analysis used for calibration of material parameters (e.g. De Rosa et al. 2022; Elmukashfi et al. 2022; Long et al. 2022). Unless specimen-specific knee models can be developed and compared to their corresponding experimental data, the calibration of material parameters to replicate average experimental data is likely to reflect the variance in knee geometry as much as it is to reflect the variance in material behaviour. This is especially true if measuring only contact behaviour or when there are uncertainties in the meniscus geometry or the knee alignment. The calibration of material parameters would benefit from local displacement information, where possible obtained close to the transition between the meniscus tissue and the ligamentous tissue of its horns, as well as in the area where the meniscus is expected to move the most, which depends on the type of loads. Displacements close to the horns can be used to verify or calibrate computational assumptions related to the meniscus anchorage to the tibia, whereas maximum displacement values can be related to the material parameters.
The findings also provide an insight into the sources of variability in contact mechanics of the tibiofemoral joint. They confirm that meniscus shape is a major contributing factor in contact mechanics even when the kinematics are highly constrained (Meakin et al. 2013 and Haut Donahue et al. 2004), and therefore that alterations to meniscus shape following trauma or intervention are likely to cause changes to the contact mechanics of the knee and cartilage damage (Makris et al. 2011; Zhang et al. 2014; Ghouri et al. 2022). They also suggest that differences in meniscus health which would cause only a change of meniscus elasticity may lead to measurable changes in the force distribution between condyles and in the local displacement of the meniscus, but generally cause little change in contact area. Finally, they suggest that disruptions to the meniscal attachments to the tibia are unlikely to be translated into contact mechanics changes when the joint is constrained.
Conclusion
Through a systematic sensitivity analysis, this work demonstrated the importance of the meniscus shape and joint alignment for modelling knee contact mechanics and meniscus kinematics, over that of the material parameters used for the meniscus or the assumptions made for root attachments, the latter having a slight effect on meniscus kinematics.
As such, when developing specimen-specific models for predicting global knee behaviour, the knee geometry needs to be specific to the specimen while generic material properties may be sufficient. It also means that, when calibrating computational model material parameters on experimental data, not only is it better to use data obtained for the specific knee, but local displacement information is also necessary.
Fig. 2 Comparison of meniscus shapes and corresponding attachments: i baseline model and ii "segmented" model of the same knee (models K)

Fig. 4 Specimen-specific models - effect of modelling assumptions on the total contact area in each condyle. For each of the condyles, the first data point (in blue) is for the baseline model and data is labelled A to L as per methods. i Outcomes of the five additional material configurations. Note that the first knee did not provide contact on both condyles in model B; ii outcomes for the four additional root configurations

Table 3 Summary of variables of interest in the 4 sensitivity studies (S1 to S4) for the three image-specific knees (leading to three times 11 models besides the three baseline models)

Table 4 Core and total root attachment area centred around anatomical positions (TT, tibial tuberosity; LTE, lateral tibial eminence; MTE, medial tibial eminence) (LaPrade et al. 2014 and Cruz et al. 2017)

Table 5 Ratio of force through the medial condyle across three knees and 12 configurations
Development and validation of a sunlight exposure questionnaire for urban adult Filipinos
OBJECTIVES To develop and validate a self-reported sunlight exposure questionnaire (SEQ) for urban adult Filipinos. METHODS The study included adults (19-76 years old) in Metro Manila, Philippines, well-versed in the Filipino (Tagalog) language and had resided in Metro Manila for at least 1 year. Exclusion criteria included pregnancy, active skin disorders, and immunocompromised states. An expert panel created a questionnaire in Likert-scale format based on a conceptual framework and 4 existing instruments. The study proceeded in 4 phases: questionnaire item development, translation and back-translation, pretesting, and construct validity and reliability testing using factor analysis, the Cronbach alpha coefficient, and the paired t-test. RESULTS A 25-item, self-administered, Filipino (Tagalog) SEQ answerable using a 4-point Likert scale was created. The questionnaire was administered to 260 adult participants twice at a 2-week interval, with all participants completing both the first and second rounds of testing. All questionnaire items possessed adequate content validity indices of at least 0.86. After factor analysis, 3 questionnaire domains were identified: intensity of sunlight exposure, factors affecting sunlight exposure, and sun protection practices. Internal consistency was satisfactory for both the overall questionnaire (Cronbach alpha, 0.80) and for each of the domains (Cronbach alpha, 0.74, 0.71, and 0.72, respectively). No statistically significant differences were observed in the responses between the first and second rounds of testing, indicating good test-retest reliability. CONCLUSIONS We developed a culturally-appropriate SEQ with sufficient content validity, construct validity, and reliability to assess sunlight exposure among urban adult Filipinos in Metro Manila, Philippines.
INTRODUCTION

Exposure to ultraviolet rays (UVB) is the main source of vitamin D in humans. This is because the enteral route is not a good source of vitamin D unless foods are fortified with vitamin D [4]. A contributing factor to the increasing prevalence of vitamin D deficiency (VDD) in the Philippines is rapid urbanization, which has resulted in more young adults having indoor jobs and thus less sun exposure, raising concerns about bone health during the period when they are achieving peak bone mass. Furthermore, air pollution in major Philippine cities decreases the amount of UVB that reaches the earth's surface [5].

A major limitation in the area of VDD research is the lack of an appropriate, inexpensive, and easily-administered tool for measuring sunlight exposure [6]. Compared with other methods, questionnaires are considered to be the most cost-effective way of measuring sunlight exposure in population-based studies [7]. Of the available sunlight exposure questionnaires (SEQs), only 2 were validated in Asia (Hong Kong and Pakistan) [6,7] and only 3 were created in the context of VDD by correlating the questionnaire results with serum 25-OHD levels, showing moderate correlations [6,8,9]. At present, there is no existing SEQ that has been validated for Southeast Asian or tropical populations. This study aimed to develop and validate a culturally-appropriate, self-reported SEQ for urban adult Filipinos.

Study participants

The study included individuals > 19 years old who were fluent in the Filipino (Tagalog) language, lived in Metro Manila at least 5 days a week for at least 1 year, and provided written informed consent. We set 1 year as the minimum duration of urban living to account for all possible weather changes and seasonal variations. Those who were pregnant or who had known active skin disorders or immunocompromised states potentially affecting sunlight exposure were excluded. Study participants were selected by purposive sampling from any of the 17 component cities of Metro Manila using a sampling frame (Table 1). Based on an existing study on the prevalence of VDD in the Philippines (58% among 369 participants), with confidence limits of 5% and a design effect of 1.0, an initial sample size of 187 was calculated [3]. However, after development of the final questionnaire, the sample size was increased to 250 since we took into account the recommended 10:1 subject-to-item ratio for factor analysis [10]. Ten additional subjects were recruited to ensure that participants were more equally distributed across the various brackets: 15 each for the working-age groups and 10 each for the elderly groups, yielding a final sample size of 260. Epi Info version 7.0 (https://epi-info.software.informer.com/7.0/) was used for sample size calculation.
Study procedures
The study proceeded in 4 phases.
Phase I: questionnaire item development
An extensive literature review of concepts on sunlight exposure assessment was performed and relevant questionnaires were identified, using the main keywords "sunlight," "questionnaire," "urban," "vitamin D," and "osteoporosis." There were no restrictions on language, country, or year of publication. Based on their relevance (mainly the inclusion of different sun exposure variables and/or correlation with serum 25-OHD), the SEQs developed by Cargill et al. [9] in Australia, Hanwell et al. [8] in Italy, Humayun et al. [6] in Pakistan, and Wu et al. [7] in Hong Kong were used as references for this study. In addition, an existing conceptual framework on the attitudes, behaviors, and beliefs of urban adult Filipinos on sunlight exposure was also used (Figure 1) [11].
A panel of 3 endocrinologists, 2 dermatologists, a health social scientist, an internist, and a community medicine physician created preliminary questionnaire items using the following guidelines: (1) Is the item unbiased? (2) Is there a strong likelihood that most respondents will answer the item truthfully? (3) Do most respondents possess sufficient knowledge needed to answer the item? (4) Will most respondents be willing to answer the item? (5) Does the item avoid leading respondents to a specific answer? and (6) Is the language used clear and simple enough so that respondents are able to understand all questions? [12].
The questionnaire items were constructed in the form of a 4-point Likert scale and arranged using the following guidelines: (1) Nonsensitive questions were placed at the beginning, since they were assumed to be non-threatening and tended to put the respondent at ease; (2) Items of major interest to the study were also prioritized, since there was greater probability of the respondent completing the first part of the questionnaire; (3) Sensitive items were placed last so that any potential emotions provoked would not influence the responses to other questions; and (4) As much as possible, items on similar topics were placed close to each other [12].
All questionnaire items then underwent content validity assessment by each panel member using a 4-point ordinal scale: 4, very relevant; 3, somewhat relevant; 2, hardly relevant; and 1, not at all relevant. The content validity index (CVI) of each item was computed as the number of evaluators giving a 3 or 4 divided by the total number of evaluators. Only items with a CVI of at least 0.86 were retained in the questionnaire [13].
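The CVI computation described above is simple enough to state as a short routine. A minimal sketch, with hypothetical ratings from the eight panel members:

    def content_validity_index(ratings):
        """CVI of one item: the fraction of panel members rating it 3
        ('somewhat relevant') or 4 ('very relevant') on the 4-point scale."""
        return sum(r >= 3 for r in ratings) / len(ratings)

    def retained_items(items, threshold=0.86):
        """Keep only items whose CVI meets the retention threshold."""
        return {name: content_validity_index(r)
                for name, r in items.items()
                if content_validity_index(r) >= threshold}

    # Hypothetical ratings from the eight panel members for two items:
    items = {
        "skin_type":   [4, 4, 4, 3, 4, 4, 3, 4],   # CVI = 1.00, retained
        "rainy_hours": [4, 3, 2, 2, 4, 2, 3, 2],   # CVI = 0.50, dropped
    }
    print(retained_items(items))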
Phase II: forward-translation and back-translation
The items of the draft questionnaire were then translated from English to Filipino (Tagalog) by 2 independent bilingual translators, one of whom was a physician with knowledge of the study and its concepts, and the other a non-medical professional with no knowledge of the study or its concepts. The linguistic and cultural quality of the 2 translations was reviewed individually by the panel experts and was consolidated into a single version by consensus. The newly-synthesized Filipino SEQ then underwent backward translation by a different bilingual translator who was uninvolved in the study. Finally, both translated and back-translated versions were reviewed by the panel experts to reach a consensus for the prefinal Filipino (Tagalog) questionnaire version [14].
Phase III: pretest
Pretesting of the prefinal questionnaire to test flow and comprehensibility was performed by administering it to a sample equivalent to thrice the number of questionnaire items. Questionnaires were self-administered and the time needed to complete the test was noted for each person. After answering the questionnaire, the participants were asked for feedback regarding comprehensibility and format using a cognitive debriefing form. The form utilized both close- and open-ended questions. Revision of the questionnaire was then carried out by the panel members based on pretest results and feedback to create the final Filipino version of the SEQ.
Phase IV: construct validity test and reliability test
The final questionnaire was administered to the sample population twice at a 2-week interval, with a similar procedure as that of the pretest. A research assistant was accordingly trained in the administration of the questionnaire.

Figure 1 (detail). Internal influences: Fitzpatrick skin type, exposed body parts, occupation, travel to/from work, hobbies

Reliability of the questionnaire was assessed by internal consistency and by test-retest reliability. Internal consistency was analyzed using the Cronbach alpha coefficient, with 0.7 as an acceptable cutoff value [15], while test-retest reliability was analyzed using the paired t-test. Statistical analyses were performed using Stata SE version 13 (StataCorp., College Station, TX, USA). Descriptive statistics included the mean and standard deviation for normally distributed quantitative variables, median and interquartile range for non-normally distributed quantitative variables, and frequency and percentage for qualitative variables. The threshold for statistical significance was set at 5%.
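For reference, the Cronbach alpha coefficient can be computed directly from an item-response matrix. A minimal sketch with numpy and hypothetical data (the study itself used Stata):

    import numpy as np

    def cronbach_alpha(scores):
        """Cronbach's alpha for an (n_respondents, n_items) array of
        Likert-scale scores:
        alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1.0 - item_vars / total_var)

    # Hypothetical responses of five participants to four items (1-4 scale):
    demo = [[3, 4, 3, 4],
            [2, 2, 3, 2],
            [4, 4, 4, 3],
            [1, 2, 1, 2],
            [3, 3, 4, 3]]
    print(f"alpha = {cronbach_alpha(demo):.2f}")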
Ethics statements
Both the study protocol and informed consent forms were approved by the University of the Philippines institutional review board prior to commencement (UPMREB code: MED-2016-003-01). Study participants also received a token honorarium for their participation.

RESULTS

Table 2 summarizes the socio-demographic characteristics of the entire study population.
Phase I: questionnaire item development
The expert panel devised an initial list of 32 questions. After content validity assessment, only 25 questions were retained, since they had CVIs of at least 0.86 (Table 3). Question #7 was removed because the panel experts agreed that sunlight exposure during rainy weather is minimal; questions #21 and #22 were considered redundant since question #2 already inquired regarding clothing and body part exposure; question #24 was removed since wearing sunglasses was considered more for protection from the sun's glare; question #27 was also considered redundant since question #10 already inquired about the respondent's usual mode of transport; and questions #31 and #32 were removed since the use of sunlamps and sunbeds is not common among Filipinos.

Table 3. CVI values for items in the Filipino SEQ

No. | Question | CVI
1 | How do you describe your skin when it is exposed to the sun? | 1.00
2 | What part of your body is usually exposed to the sun? | 1.00
3 | How long do you usually spend under the sun on a weekday? | 1.00
4 | How long do you usually spend under the sun on a weekend? | 1.00
5 | How long do you usually spend under the sun during sunny weather? | 1.00
6 | How long do you usually spend under the sun during cloudy weather? | 1.00
7 | How long do you usually spend under the sun during rainy weather? | 0.67 ¹
8 | What time of the day are you usually exposed to the sun? | 1.00
9 | How often do you go out in the sun due to work or daily routine? | 1.00
10 | How often do you walk or use public transport to do the above activities? | 1.00
11 | How often do you engage in outdoor activities such as jogging, cycling, and swimming? | 1.00
12 | How often do you take calcium with vitamin D or multivitamins? | 1.00
13 | How likely are you to be exposed to the sun to get stronger bones and better health? | 1.00
14 | How likely are you to be exposed to the sun to get happier and livelier? | 1.00
15 | How likely are you to be exposed to the sun to get more beautiful skin? | 1.00
16 | How likely are you to avoid sun exposure due to the influence of family, friends, and coworkers? | 1.00
17 | How likely are you to avoid sun exposure due to the influence of TV, radio, and internet? | 1.00
18 | How likely are you to avoid sun exposure due to sunburn, skin cancer, skin allergy, and rashes? | 1.00
19 | How likely are you to avoid sun exposure due to heat stroke, hypertension, and dizziness? | 1.00
20 | How likely are you to avoid sun exposure due to sweating and fear of darker skin? | 1.00
21 | When going out in the sun, how often do you wear long sleeves? | 0.33 ¹
22 | When going out in the sun, how often do you wear long pants? | 0.33 ¹
23 | When going out in the sun, how often do you wear a hat? |

¹ Eventually omitted from the questionnaire due to CVI < 0.86.
Phase II: forward-translation and back-translation
During the translation process, words like "jogging," "calcium," "vitamin D," "multivitamins," "sunburn," "allergy," "heat stroke," and "sunscreen" were retained in English, as these were deemed familiar terms for ordinary Filipinos. No significant disparities between the English and Filipino (Tagalog) versions were detected by the independent bilingual translators during translation and back-translation.
Phase III: results of pretest
Pretesting of the prefinal Filipino (Tagalog) SEQ was conducted with 75 participants. The average time needed to complete the 25-item questionnaire was 15 minutes (range, 5-30 minutes). In general, the respondents found the questionnaire to be comprehensible, with acceptable length and arrangement. No questions were considered sensitive, biased, or threatening during cognitive debriefing. For questions #17-#19, the phrase "posibilidad ng" ("possibility of") was added to the beginning of each question to emphasize the risk of sunlight exposure. For questions #23-#25, many respondents preferred the term "sunblock" to "sunscreen." However, "sunscreen" was retained, as it affords protection from the entire ultraviolet range, as opposed to "sunblock," which only affords protection against UVB. Furthermore, several guidelines, including those of the US Food and Drug Administration, do not recommend the word "sunblock," as it may falsely overemphasize a product's efficacy [16].
Phase IV: results of construct validity test and reliability test
Table 2 summarizes the socio-demographic characteristics of the study population. The final questionnaire was administered to the entire sample of 260 participants. There were no dropouts; all participants completed both the first and second rounds of testing. The mean age was 41.54 years, with a mean duration of urban living of 28.77 months. The majority of the respondents were day shift (79.6%) and indoor (62.7%) workers. There was no significant difference in the mean time needed to complete the first (6.78 minutes) and the second (6.61 minutes) tests (Table 2). Factor analysis yielded 3 principal component factors corresponding to the different questionnaire domains. Items that possessed similar factor loading values were grouped under a particular domain. The 3 domains were labeled: (1) intensity of sunlight exposure (questions #1-#7); (2) factors affecting sunlight exposure (questions #8-#19); and (3) sun protection practices (questions #20-#25). Table 4 shows the 3 domains of the SEQ and the factor loading values of the items in each domain.
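As a rough sketch of the domain-extraction step, the snippet below pulls 3 principal component factors from simulated responses and assigns each item to the factor on which it loads most strongly. The rotation and software settings the authors actually used are not reproduced here, and the data are invented, so the grouping is purely illustrative.

```python
# Illustrative sketch of grouping items into domains by dominant factor
# loading. Simulated data; not the study's actual factor solution.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
responses = rng.normal(size=(260, 25))           # hypothetical item scores

pca = PCA(n_components=3)
pca.fit(responses)
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)  # 25 x 3

domains = {0: [], 1: [], 2: []}
for item, row in enumerate(loadings, start=1):
    domains[int(np.argmax(np.abs(row)))].append(item)
print(domains)   # each item listed under its dominant factor
```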
The internal consistency assessment yielded an overall Cronbach alpha coefficient of 0.80, indicating that the questionnaire generally showed internal consistency. The 3 domains were internally consistent on their own as well, with coefficient values of 0.74, 0.71, and 0.72, respectively. Similarly, the paired t-test yielded no statistically significant differences between the responses obtained in the first and second rounds of testing for either the entire questionnaire or each of its domains, indicating satisfactory test-retest reliability (Table 5).
DISCUSSION
This is the first SEQ developed and validated for use in an urban adult Filipino population. The questionnaire was designed to assess the intensity of sunlight exposure, the various factors affecting sunlight exposure, and the different sunlight protection practices utilized by urban adult Filipinos.
To ensure adequate representativeness of the sample, our sampling frame took into account age, sex, educational attainment, work shift and location, and economic status. Although it could be argued that elderly respondents should comprise a greater proportion of the sample (given that the consequences of VDD are especially strongly felt in this population), our aim was to create a more even distribution of respondents across the entire adult lifespan to maximize the questionnaire's applicability [17]. The respondents' locations within Metro Manila were not part of the sampling frame, since each of the Philippine capital's 17 component cities is topographically similar, and hence no significant differences in sunlight exposure were expected. The Köppen climate classification lists Metro Manila as having a uniformly tropical wet and dry climate [18].
The questionnaire development process drew on the existing instruments of Cargill et al. [9] in Australia, Hanwell et al. [8] in Italy, Humayun et al. [6] in Pakistan, and Wu et al. [7] in Hong Kong. Although the questionnaires served as important references, no questions were directly taken from any of these instruments, as they were all developed in countries of a different ethnicity, geography, and climate compared to the Philippines. Hence, we utilized a separate conceptual framework that explored additional aspects of sunlight exposure in Filipinos that may not have been covered in the existing questionnaires [11]. A unique feature of our questionnaire is the inclusion of questions pertaining to the perceived risks and benefits of sunlight exposure, which are significant determinants of an individual's sunlight exposure practices. We also added questions pertaining to the influences of other people and mass media on sunlight exposure, given the strong kinship and social ties among Filipinos and the widespread use of technology by urban residents [19]. A current disadvantage of the questionnaire is the lack of a validated scoring system and the lack of correlation with established gold standard measurements, the latter of which will be addressed in the next phase of the study.
In the construction of the questionnaire items, we utilized the Likert scale, the most widely used approach to scaling responses in questionnaire research. Unlike simple closed-ended questions, the Likert scale can specify levels of agreement or disagreement in a symmetric fashion, capturing the range of intensity of feelings for a given item, which is then simplified as the sum of the questionnaire items [20]. Questionnaires in Likert scale format are also easy to use and allow more variables in a study, because the format enables respondents to answer more questions in the same time required to answer fewer open-ended questions [21,22]. While we retained the same choices for many questions ("never," "rarely," "often," and "always"), the choices for other questions were crafted to reflect a similarly symmetric degree of sun exposure. This was especially true for questions involving the Fitzpatrick skin classification, body part exposure, and temporal exposure.
During the translation and back-translation process, the independent bilingual translators decided to retain several words in English, because these particular words are considered familiar terms for ordinary Filipinos. In the 2010 Test of English as a Foreign Language, the Philippines ranked 35th out of 163 countries worldwide and second-best in Asia after Singapore [23].
The questionnaire was assessed using content validity, construct validity, and reliability. Content validity, which refers to the representativeness or relevance of the questionnaire content, was assessed individually by the members of an expert panel [7]. These members were chosen from diverse disciplines to ensure a holistic clinical and psychosocial evaluation of the questionnaire items. While the majority of the items possessed sufficient content validity, those that were removed were mostly either redundant or found to be non-contributory to sunlight exposure evaluation. Others (such as the use of sunlamps or sunbeds) were deemed not applicable to Filipino culture. Our questionnaire also possessed satisfactory construct validity. The 3 domains extracted after factor analysis corresponded well with the themes identified during the creation of the conceptual framework. Specifically, the first 2 domains corresponded to the influences and perceived benefits and risks of sunlight exposure. The third domain also corresponded to perceived risks, as an increased awareness of these risks leads to increased usage of sun protection practices (Figure 1). Furthermore, the factor analysis results also fulfilled the rule of having a minimum of 5 questions per domain to enable psychometric testing, with 7 questions in the first domain, 12 questions in the second domain, and 6 questions in the third domain [24]. Reliability, meanwhile, was likewise sufficient in terms of both internal consistency and test-retest reliability. For the latter, the questionnaire was administered 2 weeks apart because that time frame was long enough for the participants not to remember their responses from the first test, while being short enough to prevent significant physiological changes. There was also no significant difference in the time needed to complete the first and second rounds of testing, attesting to the questionnaire's consistent ease of administration.
This study serves as part of a larger project that will eventually involve concurrent and criterion validity assessment of the questionnaire results with established objective parameters, such as dosimetry and serum 25-OHD levels. We also recommend future studies investigating the applicability of the questionnaire to a wider population, particularly rural and other urban areas in the Philippines, in addition to other Southeast Asian and tropical countries of similar ethnicity and geographical latitude.
In conclusion, this study showed that a linguistically and culturally appropriate SEQ possessed sufficient content validity, construct validity, and reliability to assess sunlight exposure among urban adult Filipinos in Metro Manila. The questionnaire results can be eventually applied to evaluate associations with serum 25-OHD levels.
Physical and Economic Valuation for Nontimber Forest Products (NTFPs) of Surra Government Plantation in the Upper Hare-Baso Rivers Catchment, Southwestern Ethiopia
Introduction
Forests, together with other land uses, are considered important for poverty alleviation and food security, mainly in developing countries [1]. Specifically, nontimber forest products (NTFPs) contribute to livelihood diversification, job opportunities, and sources of income and are believed to be safety nets during periods of crisis [2,3]. NTFPs contribute significantly not only to the livelihood of rural residents but also to the livelihood of migrants, residents of urban areas, national treasuries, and the global economy [4]. The term NTFP is defined as all biological materials of forests other than timber that are extracted for human benefit [5,6]. For example, fuelwood, litter, medicinal plants, fodder/grass, wild edible fruits, and house-building materials such as lianas are some of the major NTFPs [7-9]. The flow of a given NTFP to final consumption and data on value creation help to clarify the dynamics in the valuation of NTFPs [10]. This concept, in many contexts, is equated to conservation through utilization: increasing the cash income of local communities while simultaneously creating incentives for the conservation of trees and forested ecosystems [11].
Forests play an important role in rural livelihoods and the national economy of Ethiopia [12-14]. For instance, plantation and natural forests provide 15% of the total livestock feed requirements for approximately 35 million TLU (tropical livestock units) (70-80 million head) [15], whereas 15,000 women in Addis Ababa relied on the raking of litter from the Addis Ababa peri-urban eucalyptus energy plantation on a daily basis [16-18]. Many studies have demonstrated that a large number of NTFPs are important for national and local economies in Ethiopia [19-22]. Accounting for the economic value of NTFPs is advantageous for several reasons: it helps to ascertain the true value of the standing forest, leading to more rational decisions about alternative uses of the forest and lessening consumption pressures [23,24]; it helps to reduce extraction that contributes to forest degradation and associated emissions; it enables sustainable exploitation of NTFPs, which can reduce degradation and deforestation while increasing the value of forests; and it provides alternative sources of income to those who rely heavily on the forest and have caused severe depletion. However, only a few NTFPs, such as forest coffee, honey, beeswax, spices, gums, and resins, are accounted for in detail in Ethiopia [25,26], and many other nontimber forest products require further investigation.
The degraded farmland, infertile acid soils, rain-fed subsistence agriculture, and dense population [27,28] of the upper Hare-Baso rivers catchment have led the community to rely on nonagricultural economic activities such as petty trade, weaving, and the raking of litter [27,29,30]. In particular, grazing under the plantation and collecting BLTs (branches, litter, and twigs) for household consumption and the market were considerable economic activities in the catchment, but they have been largely unaccounted for [31]. None of the research conducted in the catchment has studied the physical and economic values of NTFPs from any forest, particularly grasses and litter/BLTs from plantations; instead, studies have addressed woody vegetation, plant species diversity and composition, comparative analysis between sacred and nonsacred forests [32], and land-use dynamics [33]. Therefore, this study focused on the valuation of two NTFPs, namely fodder/grass and litter/BLTs, because these two NTFPs were integral economic goods for the community fringing the Surra government plantation.
The Surra government plantation, established in the mid-1980s, was one of the largest government plantations in the upper Hare-Baso rivers catchment [34]. It was also one of the most heavily exploited and least managed government plantations in the catchment and beyond [35]. Accounting for the NTFP benefits obtained from plantations enhances livelihood options for users, depending on a number of factors: the products concerned, the market in which they are sold, the demand of users, and the economic background of users [7,36]. Moreover, the potential for increasing the sustainability of NTFP benefits depends on accounting for the extraction rate and the characteristics of the tree species [6,11,24]. For instance, harvesting dead wood, litter, grasses, and fruits has been shown to have high potential for sustainability because of users' positive attitudes [37]. Thus, this study aimed to account for the physical and economic values of specific NTFPs from the Surra government plantation in the upper Hare-Baso rivers catchment, southwest highlands of Ethiopia. It is limited to accounting for the physical and monetary values of grass/fodder and the BLTs/litter of the eucalyptus tree species of the Surra government plantation.
The topography of the study area is part of the rugged terrain of the Gamo highlands, which extends north to south with elevations rising up to 4,200 masl (Mt. Gughe), the highest peak in the southwestern highlands of Ethiopia [38]. Altitudinally, the upper Hare-Baso rivers catchment is confined between 2,329 masl and 3,442 masl (survey data).
The study area is part of the tropical highland climate (mountain climate type), represented by the capital letter "H" [39] and locally named dega to wurch [38]. The area receives bimodal rainfall, and the mean annual rainfall varies from 1,100 to 1,300 mm. The first rainfall season is from March to April, while the second season is from June to August [29]. The average minimum and maximum temperatures are 18°C and 23°C, respectively [30,33].
The natural forests of the upper Hare-Baso rivers catchment were depleted because of old and historic human settlements in the area [40]. However, small patches of remaining natural forest, such as graveyards, meeting places (Dubusha), and other sacred sites, are found here and there in pocket areas [32]. Mountain hilltops are covered by Afromontane grasses and are permanently grazed [29]. In contrast to natural forests, the coverage of plantation forests, namely woodlots of Eucalyptus globulus, Pinus radiata, and Cupressus lusitanica, community plantations, and government plantations, has increased. The Surra government plantation was established during the military government regime as part of the "Ethiopian highland plantation expansion projects" [34,41]. It was a mixture of Eucalyptus globulus (locally Nech-bahirzaf), Cupressus lusitanica (locally Yeferenj tid), and Pinus radiata (locally Radiata) tree species planted side by side [35].
The people of the upper Hare-Baso rivers catchment were food insecure [42]. Farming was intensively practiced using hoes; oxen ploughing was insignificant due to the scarcity of grazing land [29]. Mixed highland subsistence and rain-fed farming on fragmented small farms was therefore the dominant economic activity, although it was not sufficient to feed the dense population [42]. Raising livestock is an integral part of the economy, practiced by tethering at homesteads and open grazing on communal grazing lands. The common livestock reared were sheep, horses, and cattle, despite being insignificant in number [43]. Petty trade, weaving, and the collecting (raking) of BLTs are nonagricultural economic activities that diversify livelihoods [29,44,45]. However, the adoption of apple trees has given hope of enhancing people's incomes [46,47].
Plot Sampling Techniques (Litter/BLTs).
The ground-based sampling method was implemented to measure litter/BLTs from the Surra government plantation, because the ground survey method is more precise and effective than GPS (geographic positioning system)-based accounting in small areas and tree-dominated vegetation covers [48].
Initially, the total area of the forest was delineated using a Garmin GPS 72H (GPS: global positioning system) with an accuracy of ±3 m in open space, dense canopy, and cloudy sky [49]. As depicted in Figure 2, the sample plots of the subforest patch (eucalyptus) were delineated following the determination of size, shape, and area [50]. The shapes of the three plot types (major, minor, and small) were square, since the square is versatile and robust as well as the most commonly used shape in ground-based surveys of biomass in most vegetation types [48,51].
The areas of subforest patches (e.g., E. globulus) were re-delineated and converted into a grid map using ArcGIS version 10.5 (Figure 2) [49,52]. The sizes of the major, minor, and small plots were determined purposively by considering the recommendations of different studies [50,51]. Hence, the areas of the major, minor, and small plots were 100 m × 100 m, 10 m × 10 m, and 1 m × 1 m, respectively (Figure 2). Major sample plots were drawn using a computer-based simple random sampling procedure via ArcGIS version 10.5 [49,52].
The grid map of the major plots of the eucalyptus tree species (Figure 2, right) was transferred onto the ground using GPS coordinate points and threads [35]. While transferring the grid map onto the ground, the vertices and center of each major plot were purposively identified based on the northing and easting of the grid map (Figure 2, left) and coded with white metallic paint [50]. The coded vertices of each major plot were encircled with threads and a squadra (an angle-measuring instrument), which was used to stabilize the square shape while encircling the sample plots [51]. Hence, five minor plots were obtained from the four vertices and the center of each major plot (Figure 2, middle). Finally, two small plots (1 m × 1 m) were sampled purposively from opposite corners of each minor plot, based on the recommendations of [48] (Figure 2, bottom left). Litter data were collected in two seasons per year, one in the dry season and the other in the wet season. Before collecting litter data, four pieces of wood were erected at an angle of 90° at the four vertices of the minor plots (10 m × 10 m) and encircled with thread (Figure 2). Each 1 m² small plot was re-encircled at the opposite corners of the minor plots (Figure 2, bottom right). The BLT data of the different seasons were then collected from each sampled small plot using a plastic bag, weighed and summed (equation (1)), extrapolated to litter per hectare (equation (2)), and extrapolated to the entire eucalyptus forest area of 172.5 ha (equation (3)).
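As a quick sanity check on the nested plot design just described, the sketch below computes the litter-sampling fraction implied by one major plot; the counts follow the text, and the output is purely illustrative.

```python
# Back-of-the-envelope check of the nested plot design: each 100 m x 100 m
# major plot holds 5 minor plots (10 m x 10 m), and each minor plot holds
# 2 small litter plots (1 m x 1 m).
MAJOR_AREA = 100 * 100                       # m^2
MINOR_PER_MAJOR, SMALL_PER_MINOR = 5, 2
small_area = MINOR_PER_MAJOR * SMALL_PER_MINOR * 1 * 1   # 10 m^2 per major plot
print(small_area / MAJOR_AREA)               # litter sampling fraction = 0.001
```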
Data Acquisition
(i) The average production potential of litter/BLTs over the two seasons was expressed as follows:

ALPS = (L_w + L_d) / 2,                                 (1)

where ALPS = average litter production of the two seasons from the small sample plots per annum; L_w = litter of the wet season; L_d = litter of the dry season; and 2 represents the two seasons (wet and dry).

(ii) The litter (BLT) data collected from the small sample plots were converted into hectares per month as follows:

ALPS/ha = ALPS / 0.004,                                 (2)

where ALPS/ha = average litter production of the two seasons per hectare over the two months (seasons), and 0.004 = conversion unit of the sample plots into hectares.

(iii) The conversion of litter (BLT) production from the small sample plots (1 m × 1 m) to the entire forest per year was expressed as follows:

ALP_t = (ALPS/ha) × 12 × 172.5,                         (3)

where ALP_t = total annual litter production; 12 represents the months of a year; and 172.5 represents the area of the eucalyptus forest in hectares.
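A minimal numerical sketch of equations (1)-(3) follows. The two seasonal litter weights are hypothetical, and reading the 0.004 factor as a simple divisor is an assumption based only on the variable definitions above.

```python
# Sketch of equations (1)-(3); seasonal weights are hypothetical.
L_WET, L_DRY = 0.35, 0.45          # kg per 1 m^2 small plot (assumed values)
alps = (L_WET + L_DRY) / 2         # eq. (1): seasonal average per small plot
alps_ha = alps / 0.004             # eq. (2): plot-to-hectare conversion (assumed form)
alp_total = alps_ha * 12 * 172.5   # eq. (3): whole 172.5 ha stand, per year
print(f"{alps_ha:.0f} kg/ha, {alp_total:,.0f} kg/yr")
```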
Grass/Fodder/Grazing. The common animal kinds were arranged into animal classes based on standard animal unit equivalent (AUE) guides/conversion factors [53]. The animal unit equivalent (AUE) is the coefficient or conversion factor of each animal kind into an animal class [54]. The TLU is likewise a conversion factor for tropical livestock, converting livestock numbers to a common unit [53,55] (Table 1). The daily, monthly, and annual DM (dry matter) intake data for grazing/fodder by animal class were acquired by quantifying the average weight of the livestock classes in proportion to the tropical livestock unit (TLU) and the quality of the pasture [56], and/or by multiplying the AUE by 2% [58]; 2% is a single animal's daily DM intake relative to its body weight on poor pastures [54,55]. The Surra government plantation was therefore classified as poor pasture.
The TLU of the study area was quantified using the "standard forage-consuming domesticated live animal for the tropical region" [55]. For example, the camel has the largest average live weight in the tropical region, 250 kg, and is represented by 1 TLU (1 AUE) (Table 1). The average live weight of cattle in the tropical region was 175 kg [56], and the corresponding TLU (AUE) was 0.7. Thus, the TLU and corresponding weight of livestock in the study area were calculated accordingly (Table 2).
The carrying capacity (CC) is the capability of grazing land to feed a class of livestock for a given time [53,59]. The CC was computed using the "estimated relative production values method" for rangelands of all grass types [60]. In other words, the CC of the rangeland is the AUM divided by the total area of the rangeland and then divided by the AUE [61], which gives AUM/ha or AUEM/ha (equation (4)) [58].

(i) The carrying capacity (CC) of the Surra government plantation was computed as follows:

CC = AUM / (TA × AUE),                                  (4)

where CC = carrying capacity; AUM = animal unit month (DM intake per month); TA = total area (ha); and AUE = animal unit equivalent (the conversion factor of the mass of a single animal class to the TLU standard).
The animal unit day (AUD), animal unit month (AUM), and annual DM intake per animal class were needed to account for the fodder/grass production potential of the Surra government plantation. The DM intake per day, per month, and per annum is expressed as follows:

(i) The animal unit day (AUD):

AUD = WAC × 2%,                                         (5)

where AUD = animal unit day (DM intake per day per weight), and 2% indicates that the DM intake on a dry pasture is two percent of body weight.

(ii) The animal unit month (AUM) of a single animal class:

AUM = WAC × 2% × 30,                                    (6)

where AUM = animal unit month (DM intake of a single animal class per month); WAC = weight of an animal class; 2% = DM intake of animal classes per day per weight on poor pasture; and 30 = days of a month.

(iii) The animal unit annual (AUA) of a single animal class:

AUA = AUM × 12,                                         (7)

where AUA = animal unit annual (DM intake of a single animal class per year); WAC = weight of an animal class; 2% = DM intake of animal classes per day per weight on poor pasture; and AUM = animal unit month (DM intake of a single animal class per month).

(iv) The total forage/fodder production potential of the Surra government plantation, i.e., the total annual forage intake of the different animal classes of various sizes (AUE/TLU), was expressed as follows:

TADFI = X + Y + Z,                                      (8)

where TADFI = total annual dry forage intake of all animal types (species); X = the annual intake of one animal species with its different animal classes (cattle); Y = the annual intake of a second species (horses); and Z = the annual intake of a third species (sheep).
Note that not all plants in the rangeland are eaten by livestock [62]: some are not accessible to animals [54], others are unpalatable [63], and further losses occur through animal trampling [61]. In this study, therefore, a proper use factor (an excluding factor) was used to deduct the grasses assumed to go uneaten [64,65]. The proper use factor varies from region to region in Ethiopia, based on grass types, agroclimatic variation, and topography [64,66]. For instance, the proper use factor for southern Ethiopia was 30%, while that of the Somali region differed [65]. Therefore, to account for the grass/fodder production of the Surra government plantation, 30% was used, since the study area is part of southern Ethiopia [65]. Consequently, the grass production potential data were collected through an indirect approach combining a stocking rate [53], carrying capacity [56], conversion factors [57], and the proper use factor (30%) to deduct waste (equation (9)).
(i) The total DM intake after applying the proper use factor for the different animal classes was expressed as follows:

TADFI_proper = TADFI × 30%.                             (9)
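Equations (5)-(9) chain together in a few multiplications, as the sketch below illustrates. The herd composition is hypothetical; the sheep and horse AUE values are commonly cited TLU factors rather than figures from this article; equation (7) is read as AUM × 12; and equation (9) is applied as multiplication by 30%, matching the computations reported in the results.

```python
# Sketch of equations (5)-(9) for a hypothetical herd. AUE values: cattle 0.7
# (as in the text); sheep 0.1 and horse 0.8 are commonly cited TLU factors.
TLU_WEIGHT = 250.0      # kg, reference weight of 1 TLU (camel)
INTAKE = 0.02           # daily DM intake: 2 % of body weight on poor pasture
PROPER_USE = 0.30       # proper use factor for southern Ethiopia

herd = {"cattle": (0.7, 100), "sheep": (0.1, 150), "horse": (0.8, 40)}  # (AUE, head)

tadfi = 0.0
for species, (aue, head) in herd.items():
    wac = aue * TLU_WEIGHT       # average weight of the animal class (kg)
    aud = wac * INTAKE           # eq. (5): DM intake per day
    aum = aud * 30               # eq. (6): DM intake per month
    aua = aum * 12               # eq. (7): DM intake per year (read as AUM x 12)
    tadfi += aua * head          # eq. (8): summed over all animals

usable = tadfi * PROPER_USE      # eq. (9): applying the 30 % proper use factor
print(f"TADFI = {tadfi:,.0f} kg/yr; proper-use intake = {usable:,.0f} kg/yr")
```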
Valuing Litter (BLTs).
For the monetary valuation of litter production from the Surra government plantation, the market price value method was applied [4,8,67]. Before valuing the bales (bundles) of litter, they were weighed in kilograms (kg) in both seasons (Table 3). The physical and monetary value data were collected concurrently (in Chencha town), and the average price per kg was quantified. The price of litter (ETB/kg) is influenced by seasonal variation [16,17]. For example, the market value of 966 kg of BLTs (branches, litter, and twigs) in Chencha town in January was ETB 1,159, an average price of ETB 1.20/kg; however, 847 kg of litter in August was worth ETB 1,196, an average of ETB 1.41/kg (Table 3). This demonstrates that the average price of litter during the dry season (January) was 14.9% cheaper than the corresponding average price in August (wet season). Because of this price discrepancy, most litter (BLT)-dependent women do not supply litter to the markets during the dry season and instead store it at home to sell during the wet (summer) season, when the price rises. Women who depended on raking BLTs from the Addis Ababa peri-urban eucalyptus plantation had similar experiences [17].
(i) The total annual monetary value of litter from the entire forest was expressed as follows:

TMVL(BLTs) = TAPL(BLTs)/kg × MVL(BLTs)/kg,              (10)

where TMVL(BLTs) = total monetary value of the litter (BLTs) obtained from the forest; TAPL(BLTs)/kg = total annual production of litter in kg; and MVL(BLTs)/kg = monetary value of a single kg of litter (BLTs) in the local market.
Valuing Grazing (Grass/Fodder).
The monetary valuation of grass (fodder) production was computed using the market price value approach [4,8]. The transect-walk data showed that women carrying bundles of grass were found here and there on the roads of Chencha town, particularly during the autumn season, when weeds are removed manually from cereal crops and sold as fodder to urban livestock owners. Therefore, the autumn season (partially wet) was chosen for collecting the monetary value data of grass/fodder. The weight of each bale and its corresponding monetary value were acquired concurrently using a checklist [68]. The average monetary value per kg of grass was obtained by dividing the sum of the monetary prices by the total weight of the wet grass (equation (11)), and the total weight of fodder (grass) was summed and the averages calculated (equations (11) and (12)).

(i) The average monetary value of grass per kg:

AMV(grass)/kg = (V1 + V2 + ... + Vn) / W_g,             (11)

where AMV(grass)/kg = average monetary value of grass per kg; V1 + V2 + ... + Vn = sum of the monetary prices per bale of grass; and W_g = weight of the grass (sum of the bales in kg).

(ii) The total monetary value of grass per kg:

TMV/kg = AMV(grass)/kg × 235.5,                         (12)

where TMV/kg = total monetary value per kg per ha; AMV(grass)/kg is given by equation (11); and 235.5 = area of the plantation (ha).

The dry mass (DM) and its corresponding monetary value were also considered; 1/2 kg of wet mass is taken as dry mass (air-dried grass) [69]. However, to compute the monetary value of grasses (grazing) from the Surra plantation, wet masses were used; to convert to DM, the wet mass is multiplied by half [60], and vice versa (equations (13) and (14)).

(iii) Conversion of wet biomass (WM) into dry biomass (DM):

DM (kg) = WM × 0.5,                                     (13)

where DM (kg) = dry matter per kg; WM = wet matter; and 0.5 represents "half of the wet matter is dry."

(iv) Conversion of dry biomass (DM) into wet biomass (WM):

WM (kg) = DM (kg) × 2,                                  (14)

where WM (kg) = wet forage per kg; DM (kg) = dry forage per kg; and 2 represents "twice the dry matter."

The annual total grass production and its corresponding monetary value were the central themes of this study. Thus, based on the physical data and the corresponding market prices, the total annual monetary value of the Surra government plantation was investigated (equation (15)).

(v) The total annual monetary value of grazing:

TAMV(fodder) = TAI(fodder)/kg × MV(grass)/kg,           (15)

where TAMV(fodder) = total annual monetary value of grazing (fodder) (ETB); TAI(fodder)/kg = total annual DM intake (total animal unit year); and MV(grass)/kg = monetary value of a kilogram of grass in the local market.
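The valuation equations (10)-(15) likewise reduce to a handful of multiplications, sketched below. The bale weights and prices are hypothetical; the physical quantities (85,224 kg of proper-use fodder; 158,608 kg of litter) and the ETB 0.90/kg grass price are this article's figures, and ETB 1.31/kg is the implied two-season average litter price, so the last two outputs reproduce the reported totals up to rounding.

```python
# Sketch of the valuation equations (10)-(15). Bale weights/prices are
# hypothetical; physical quantities are the figures reported in this article.
bale_prices  = [27.0, 18.0, 22.5]     # ETB per bale (hypothetical)
bale_weights = [30.0, 20.0, 25.0]     # kg per bale (hypothetical)

amv_grass = sum(bale_prices) / sum(bale_weights)  # eq. (11): 0.90 ETB/kg
tmv_per_kg = amv_grass * 235.5                    # eq. (12) as defined (235.5 ha)

def dm(wet_kg):  # eq. (13): wet -> dry matter
    return 0.5 * wet_kg

def wm(dry_kg):  # eq. (14): dry -> wet matter
    return 2.0 * dry_kg

tamv_fodder = 85_224 * amv_grass      # eq. (15): ~ETB 76,701
tmvl_blt = 158_608 * 1.31             # eq. (10): ~ETB 207,776
print(f"{amv_grass:.2f} ETB/kg; dm(10 kg wet) = {dm(10)} kg")
print(f"fodder ETB {tamv_fodder:,.0f}; litter ETB {tmvl_blt:,.0f}")
```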
(i) Te average monetary value of grass per kg was mathematically theorized as follows: where (AMV (grass) /kg) � average monetary value grass/kg; V1 + V2 + · · · Vn � summation of monetary price per bale of grasses; and W g � weight of grasses (summation of bales/kg). (ii) Te total monetary value of grass/kg was theorized as follows: where TMV/kg � total monetary value/kg/ha; (AMV (grass) )/kg � equation (11) (above); and 235.5 � area of plantation (ha). Te dry mass (DM) and its corresponding monetary value were considered, and thus, 1/2 kg of wet mass is a dry mass (air-dried grasses) [69]. However, to compute the monetary value of grasses (grazing) from the Surra plantation, wet masses were implemented, and if it is interesting to convert into DM, the possibility is multiplying the wet mass by half [60] and/or vice versa (equations (13) and (14)). (iii) Te conversion of wet biomass (WM) into dry biomass (DM) was theorized as follows: where DM (kg) � dry matter per kg; WM � wet matter; and 0.5 � represents "half of wet matter is dry." (iv) Te conversion of dry biomass (DM) into wet biomass (WM) was theorized as follows: where WM (kg) � wet forage/kg; DM (kg) � dry forage/ kg; and 2 � represents "twice the dry matter." Te annual total grass production and its corresponding monetary price (value) accounting were the central themes of this study. Tus, based on physical and corresponding market price data, the total monetary value/annual of the Surra government plantation was investigated (equation (15)). (v) Te total annual monetary value of grazing was theorized as follows: where TAMV (fodder) � total annual monetary value of grazing (fodder) (ETB), TAI (fodder) /kg � total annual DM intake (total animal unit year), and MV (grass) /kg � monetary value of a kilogram of grass in the local market. (Table 3). Te results demonstrate that the litter production potential during winter is greater than that during summer (Table 3). Te seasonal variation in litter production potential between dry and wet seasons might have emanated from the physiological reaction of trees to weather conditions [70,71]. Te total (kg/year) and #/ha/month of BLTS from the same plantation were 158,608 kg and 920 kg, respectively. Similar studies conducted on the Addis Ababa peri-urban eucalyptus plantation demonstrated that BLT production potential/kg/ha/month and kg/ha/year were 35,708 ton/ha and 428,500, respectively [17]. When comparing the litter/ BLT production potential of the Surra government plantation with that of the Addis Ababa peri-urban eucalyptus plantation, the peri Addis litter/BLT production potential [17] was 99.2% greater than that of Surra (Table 3), and the diference was insignifcant.
Results and Discussion
Te transect-walk data indicate that the Surra government plantation was permanently grazed (Figure 3), illegally logged (Figure 4), and encroached upon by plantation fringe dwellers, and thus, it was highly disturbed. For example, the wood stand density of Surra government plantations (e.g., eucalyptus) was twofold less than that of other counterpart government plantations in Ethiopia [35]. Moreover, the data acquisition approach of the Surra government was ground survey (stock change method) [48,50], while the peri Addis's was APR (participatory rural appraisal) [17]. Te APR information collection techniques encompass semistructured interviews with individuals or groups, transect walking or feld observation and experts' opinion data [72], and are subjective.
Monetary Values
BLTs/Litter. Valuing some NTFPs is not as straightforward as valuing commercial goods, because market prices are often absent [4,5,8]. However, the availability of market prices for some NTFPs, such as BLTs and fodder, in the local market of the study area enabled us to use active market prices. The market price data in Table 4 show that the monetary value of litter/BLTs (ETB/kg) is influenced by seasonal variation. For example, the market price of 966 kg of litter (BLTs) in Chencha town in January (dry season) was ETB 1,159, an average of ETB 1.20/kg, whereas 847 kg of litter (BLTs) in August was worth ETB 1,196, an average of ETB 1.41/kg (Table 4).
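A quick arithmetic check of the quoted seasonal prices, using the Table 4 totals:

```python
# Check of the seasonal litter prices quoted above (Table 4 totals).
dry = 1159 / 966    # January: ~1.20 ETB/kg
wet = 1196 / 847    # August:  ~1.41 ETB/kg
gap = (wet - dry) / wet
print(f"dry={dry:.2f}, wet={wet:.2f}, gap={gap:.1%}")  # ~15 % (14.9 % with rounded prices)
```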
Comparing the prices of the two seasons, the monetary value per kg during the dry season was 14.9% cheaper than the corresponding average price during the wet season (August). Because of this price difference, most litter (BLT) harvesters in the upper Hare-Baso rivers catchment do not supply their products to markets during the dry season, and instead store and sell them during the wet season (summer). Women who depended on raking BLTs from the peri-urban Addis Ababa eucalyptus plantation did the same [17].
The price difference between the dry and wet seasons is assumed to arise from the excess production potential of BLTs during the dry, windy, and sunny season (Figure 5); the ease of walking (Figure 5); and the availability of fuelwood biomass from other sources, such as cow dung [15,18]. In contrast, during the wet season, leaf shedding decreases [70], and barefoot walking is difficult due to dirty and muddy roads and cloudy, rainy weather (Figure 6) [45]. Reduced access to other sources of fuelwood (which are easily available in the dry season) increases the demand for BLTs during the wet season among those who depend on fuelwood for their energy. The total BLT/litter production potential of the Surra government plantation and its corresponding monetary value were 158,608 kg and ETB 207,776.50, respectively; the BLTs per kg/ha and the equivalent monetary price were 920 kg and ETB 1,205.2, respectively (Table 5). The physical value per ha per year of BLTs from the Surra government plantation was lower than that of counterpart government plantations around Addis Ababa [16,17].
Expert opinion and unstructured interview information triangulate that approximately 40 to 60 women depended on the raking of BLTs on a daily basis for income alone, whereas 1,500 to 2,100 women visited the plantation annually for income and household consumption. Similarly, Olsson [17] reported that 2,000 women depended on the raking of BLTs from the Addis Ababa peri-urban eucalyptus energy plantation as their sole source of income.
The Surra government plantation, particularly its eucalyptus stand, is the most exploited and disturbed government plantation in the upper Hare-Baso rivers catchment and beyond; this is due to the economic reliance of the nearby community on the plantation (Figures 3, 4, and 7).
Physical Values
Grass/Grazing. The DM intake proportional to animal weight (2% of body mass) per single animal class per day, per month, and per annum was adopted from different sources [53,56] and adapted to the fodder production of the Surra government plantation (Table 6). The DM intake for a single class of the different animal types, namely the animal unit day (AUD), animal unit month (AUM), and animal unit annual (AUA), was 12.5, 375, and 4,347 kg, respectively (Table 6). The carrying capacity per TLU per year of the Surra government plantation for the animal classes cow (dry), cow with calf, sheep (dry), sheep with lamb, and horse was 63, 70, 9, 11, and 72, respectively. The carrying capacity of the total area (235.5 ha) of the Surra government plantation was 225 TLU, while with the proper use factor it was 158 TLU (Table 6).
The TAUD, TAUM, TAUA, and TAUA × 30% of grazing from the Surra government plantation were 789.1 kg, 23,673 kg, 1,042,380 kg, and 312,714 kg, respectively (Figure 8/Table 7). The difference between the gross and proper-use-factor physical values is due to the deduction of uneaten grasses (30% of total production) [61,66]. In other words, approximately 30% of the total grass production of the Surra government plantation was not accessible to livestock.
However, the physical and monetary values of grass/fodder production from Ethiopian plantation forests have not been fully accounted for [73]. The Ethiopian government estimated that grass/fodder production from plantation forests was 860,993,000 kg (860,993 tons)/year, or 947 kg/ha/yr [15]. The corresponding fodder production potential of the Surra government plantation was 385 kg/ha/yr (Figure 8). The physical value of the fodder/grass production potential of the Surra government plantation was thus 38% of the Ethiopian figure (Figure 8); in other words, the Ethiopian figure is 62% greater than Surra's per kg/ha [14,15]. The variation in grass/fodder production potential per kg/ha may be due to dissimilar accounting approaches and methodologies [18,61], the absence of standardized and harmonized NTFP accounting methodologies [61], the lack of active market price data [74], the limited availability of related (default) data [75], and agroecological differences [39] in growing grasses. Moreover, the proper use factor (30%) applied in the grass/fodder valuation is another factor [61,65]. Canopy differences also affect the grass distribution in forests [76], and some types of grasses are unpalatable [64].

Table 6. Animal class, animal unit equivalent (TLU), average weight of an animal class, and DM intake per day (AUD), per month (AUM), and per annum (AUA) of an animal class. Source: adapted from different studies [53,57].
Monetary Values
Grass/Fodder. One of the greatest challenges in the monetary valuation of NTFPs is obtaining active market prices for each NTFP [9,77]. For this study, however, every effort was made to obtain the maximum and minimum surplus values for producers and consumers [4]. Therefore, the monetary equivalent of the annual production potential of NTFPs, particularly grasses/fodder, of the Surra government plantation was based on active market prices.
The average monetary value of a kilogram of wet grass/fodder during autumn (the season of better availability) and winter (the season of scarce availability) in Chencha town was ETB 0.90. Table 8 indicates that the gross total annual, gross per-hectare, proper-use total annual, and proper-use per-hectare monetary values of grasses/fodder were ETB 255,669, 1,085, 76,701, and 327, respectively. The gross monetary value of grasses per ha per annum of the Surra government plantation and the proper-use-factor-based value were ETB 1,206 and ETB 142, respectively, while the Ethiopian value was ETB 0.53 [15]. The monetary value of grass per ha per annum of the Surra government plantation was thus 99% greater than the Ethiopian government's figure (Table 8) [15].
The monetary price differences between the Surra and Ethiopian government plantations are presumed to stem from the lack of updated NTFP monetary data in the forest databases of the Ethiopian government [75], variations in the techniques (methodologies) used to acquire grass/fodder and monetary value data [74], the devaluation of the currency (rate of exchange) at the time [78], and other factors.

Table 8. Annual grass/fodder production (TAUA) of the Surra government plantation and its monetary value, gross and with the 30% proper use factor applied. Source: adapted to the study area from different studies, 2021.
Animal class       TAUA (kg)  Value (ETB)  TAUA/ha (kg)  Value/ha (ETB)  TAUA x 30% (kg)  Value (ETB)  Per ha (kg)  Per ha (ETB)
Cow, dry              79,380       71,442           337             303           23,814       21,433          101           91
Cow, with calf        87,552       78,797           372             335           26,267       23,640          112          101
Sheep, dry            10,800        9,720            46              41            3,240        2,916           14           13
Sheep, with lamb      15,624       14,062            66              59            4,687        4,218           20           18
Horse                 90,720       81,648           385             347           27,216       24,494          115          104
Total                284,076      255,669         1,206           1,085           85,224       76,701          362          327
Notes: (1) TAUA (total animal unit annual) represents the total annual dry matter intake (grass production) of the entire Surra government plantation, 284,076 kg, with a corresponding monetary value of ETB 255,669 (USD 1 = ETB 53.24). (2) TAUA/ha represents the total animal dry matter intake (grass production) per hectare per annum, 1,206 kg, with a corresponding monetary value of ETB 1,085. (3) 85,224 kg represents the total grass production potential of the plantation after deducting uneaten grasses (30%), with a corresponding monetary value of ETB 76,701. (4) 362 kg represents the grass production (total animal unit per annum) per hectare after accounting for unpalatable grasses, with a corresponding monetary price of ETB 327. The animal unit (AU) represents the dry matter (DM) intake, i.e., the grass/fodder grazed by the different animal classes; an animal class on a poor pasture such as Surra takes in 2% (0.02) of its body weight per day, and 30% represents the assumed uneaten (unpalatable) share of the grass produced. Hence, 789.1 kg is the total daily DM intake of the single animal classes (cow dry + cow with calf + ...); 23,673 kg is the total monthly DM intake (789.1 × 30 days); 312 is the total number of animal classes (e.g., 63 dry cows, 64 cows with calf, and so forth); and 312,714 kg is the total annual DM intake after deducting the 30% of uneaten or unpalatable grasses.
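As a consistency check on the reconstructed Table 8, the sketch below rederives the totals from the per-class TAUA figures using the ETB 0.90/kg price, the 235.5 ha area, and the 30% proper use factor; the one-unit discrepancies against the table are rounding.

```python
# Consistency check on Table 8 (values in kg unless noted).
AREA, PRICE, PROPER_USE = 235.5, 0.90, 0.30
taua = {"Cow, dry": 79_380, "Cow, with calf": 87_552, "Sheep, dry": 10_800,
        "Sheep, with lamb": 15_624, "Horse": 90_720}

total = sum(taua.values())
print(total)                          # 284076 kg, matching the table total
print(round(total * PRICE))           # 255668 ETB (table: 255,669; rounding)
print(round(total / AREA))            # 1206 kg/ha
print(round(total * PROPER_USE))      # 85223 kg (table: 85,224; rounding)
```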
Conclusion and Recommendations
The total BLT/litter production potential of the Surra government plantation and its corresponding monetary value were 158,608 kg and ETB 207,776.50, respectively; the production per ha per year and the total per year were 920 kg and 158,608 kg, with equivalent monetary prices of ETB 1,205.2 and ETB 207,776.50, respectively. The average monetary values of litter/BLTs per kg in the wet and dry seasons were ETB 1.41 and ETB 1.20, respectively. Litter production during the dry season was greater, while the price per kg was greater during the wet season. The TAUD, TAUM, TAUA, and TAUA × 30% of grazing from the Surra government plantation were 789.1 kg, 23,673 kg, 1,042,380 kg, and 312,714 kg, respectively. Likewise, the TAUA, TAUA/ha, TAUA × 30%, and TAUA/ha × 30% per kg were 284,076, 1,206, 85,224, and 362, with corresponding monetary prices of ETB 255,669, 1,085, 76,701, and 327, respectively. The litter/BLT production potential of the Surra government plantation was lower than that of other government plantations, which indicates a weak management system for government plantations. The physical and monetary values of fodder computed with the proper use factor are specific and represent the facts on the ground. The integrated grass data collection approach is applicable to permanently grazed and communal grazing lands; in other words, this integrated fodder data collection approach, which is novel in this study, is suitable for collecting grass/fodder data from the poor, permanently grazed pasture of plantation forests. The market-price-based monetary values of grasses and litter are more specific and representative than results based on default data. However, multiple other NTFPs have not been valued physically or monetarily and require further accounting and investigation. Since the management intensity was weak, policy revisions and decisions are needed.
Data Availability
The findings of this article are publicly available wherever necessary. The data are openly accessible and reusable; in other words, all data created during this research are openly available to anybody, without restriction, and can be publicly used and shared. All figures and images are the researchers' own work and are properly cited in the text of the article. This publication is supported by multiple datasets that are openly available in public repositories and cited in the reference section of the article.
Factors Influencing Vaccination in Korea: Findings From Focus Group Interviews
Objectives Immunization is considered one of the most successful and cost-effective public health interventions protecting communities from preventable infectious diseases. The Korean government set up a dedicated workforce for national immunization in 2003, and since then has made strides in improving vaccination coverage across the nation. However, some groups remain relatively vulnerable and require intervention, and it is necessary to address unmet needs to prevent outbreaks of communicable diseases. This study was conducted to characterize persistent challenges to vaccination. Methods The study adopted a qualitative method in accordance with the Consolidated Criteria for Reporting Qualitative Research checklist. Three focus group interviews were conducted with 15 professionals in charge of vaccination-related duties. The interviews were conducted according to a semi-structured guideline, and thematic analysis was carried out. Data saturation was confirmed when the researchers agreed that no more new codes could be found. Results A total of 4 main topics and 11 subtopics were introduced regarding barriers to vaccination. The main topics were vaccine hesitancy, personal circumstances, lack of information, and misclassification. Among them, vaccine hesitancy was confirmed to be the most significant factor impeding vaccination. It was also found that the factors hindering vaccination had changed over time and disproportionately affected certain groups. Conclusions The study identified ongoing unmet needs and barriers to vaccination despite the accomplishments of the National Immunization Program. The results have implications for establishing tailored interventions that target context- and group-specific barriers to improve timely and complete vaccination coverage.
INTRODUCTION
Immunization is considered one of the most successful and cost-effective public health interventions protecting communities from preventable infectious diseases [1,2]. The Korean government implemented the National Immunization Program in 2009, and has gradually expanded the program to provide free vaccinations for all children under age 12 for designated diseases at community health centers and contracted clinics. Moreover, individual immunization records have been managed digitally since 2002, enabling services such as providing an individual's immunization history for review and sending reminders of upcoming scheduled vaccinations. Such efforts resulted in the achievement of high rates (>92%) of completing each recommended vaccination series among Korean 3-year-olds born in 2012 [3].
However, the proportion of infants who have completed the recommended number of vaccinations for their age declines with age, with timely and complete vaccination coverage of 95.9% at 12 months, 92.7% at 24 months, and 89.2% at 36 months. As can be seen from the 2014 measles outbreak in the US [4], nearly-eradicated epidemic diseases may reemerge at any time, especially where there are clusters of children with incomplete vaccinations. That outbreak demonstrated the importance of the timeliness and completeness of vaccination.
The purpose of this study was to understand the remaining barriers to immunization, as well as unmet needs, despite the remarkable improvements in the economic and geographical accessibility of vaccination through the expansion of the National Immunization Program. Focus group interviews were conducted to gain a deeper understanding of barriers to vaccination than quantitative questionnaires would provide.
METHODS
Focus group interviews were conducted with experts working in the immunization field to collect information based on their expertise and experience, with the purpose of identifying the barriers that lead to non-vaccination and unmet needs regarding vaccination. The focus group interview is a form of qualitative research in which a select group of people able to discuss an issue at a certain level are asked about their opinions, values, and beliefs during unstructured and natural discussions [5]. The method was described in accordance with the Consolidated Criteria for Reporting Qualitative Research checklist [6].
Research Team and Reflexivity
The research team consisted of 8 researchers, 4 of them (BP, SJC, HJC, HP) physicians and the other 4 (EJC, BP, HH, SL) students and professors in the preventive medicine field. HP, a preventive medicine professional who has been conducting vaccination-related research, led the interviews.
Participants
Immunization administrators at Korea Centers for Disease Control and Prevention (KCDC)-referred community health centers, pediatricians, and experts on multicultural families, one of the most vulnerable groups, received an explanation of the purpose of the study and a letter soliciting participation. A total of 15 participants, including 10 community health center staff members, 2 pediatricians, and 3 multicultural family experts, were selected, and no participants dropped out during the research. The research team and the subjects met for the first time at the focus group interview site and had no personal relationships or interests with each other, allowing the experts to exchange ideas as candidly and freely as possible. No compensation was provided to the participants.
Study Design
The interview guidelines were developed based on a review of previous studies, and the semi-structured interview guidelines were completed after pediatrics and infectious disease specialists reviewed the inclusiveness of the items and the adequacy of the content. The moderator explained the background and purpose of the research and led the interview following the semi-structured guidelines shown in Table 1.
The interview was carried out in a conference room near Seoul Station for 2-3 hours per expert group, until it was apparent that no more new topics were emerging. HP mediated the conversation and BP took notes while making an audio recording, observing, and keeping records of the progress. The participants were divided into 3 groups according to their occupation for the interview, reaching theoretical saturation through interviews with multiple experts.
Analysis
The assistant moderator played the recorded audio immediately after the interview to make sure that every detail was available for analysis. By applying thematic analysis [7], one of the members (BP) performed initial coding and another (HP) reviewed the outcomes. The team reached agreement through debate when the 2 members disagreed, and the results of the analysis were reviewed by all members. Theoretical saturation was confirmed when the 2 researchers agreed that no additional codes could be found. The validity of the analysis was confirmed by an experienced infectious disease specialist who was neither a team member nor a participant. The results were not shared with the participants.
RESULTS

Basic Attributes of the Participants
Three expert groups including community health center immunization administrators, pediatricians, and multicultural family experts attended the interviews. A total of 10 community health center workers who had been in charge of immunization for 10 months to 4 years from Seoul (4 centers), Incheon (1 center), Daejeon (1 center), Gangwon (1 center), Gyeongbuk (2 centers), Jeonnam (1 center), Chungbuk (1 center), and Jeonbuk (1 center) attended the interviews. Two pediatricians were selected as participants, one of them chairman of the Korean Association of Pediatric Practitioners and the other a member of the Infectious Disease Committee of the Korea Pediatric Society who also is working as a university hospital pediatrician. As multicultural family experts, a family policy specialist at the Ministry of Gender Equality and Family, a Multicultural Family Support Center staff member at the Korean Institute for Healthy Family, and a consultant at the call center for multicultural families participated in the interview.
Topics Introduced During the Focus Group Interview
Vaccine hesitancy, personal circumstances that impede vaccination, lack of information, and misclassification emerged as the top 4 main reasons for non-vaccination. Each main topic could be divided into 2 to 3 subtopics (Table 2).
Vaccine hesitancy
Community health center immunization administrators and pediatricians alike pointed to vaccine hesitancy as a major barrier, adding that this sentiment tends to be very strong. The subtopics for refusing vaccination were distrust in its safety, suspicion regarding its necessity, and fear of adverse effects or abnormal reactions.
The participants reported that people who strongly distrust the safety and efficacy of vaccination prefer to develop immunity naturally instead of obtaining immunity through vaccination. Pediatricians said that skepticism against vaccination is often rooted in distrust of the overall medical system, and people with such views often rely on information obtained from online communities and websites rather than by consulting doctors when making decisions.
Such individuals also deny the need for vaccination, and it can be even more difficult to convince them of the necessity of vaccinations that have limited effects or are accused of adverse effects. For instance, hesitancy about the influenza vaccination is common because of the inconvenience of receiving repeated administrations each year, its low credibility in terms of its effects, and the relatively low fear of the morbidity caused by influenza.
Parents whose child has experienced an adverse reaction to a vaccination tend to fear further adverse effects or abnormal reactions and to hesitate over subsequent vaccinations.
Personal circumstances that impede vaccination
Representative examples of personal circumstances that impede vaccination were frequent overseas travel, double-income families, and low accessibility of medical institutions.
The community health center immunization administrators and pediatricians agreed that there have been insufficient efforts to encourage late vaccination if a scheduled vaccination is missed, leading to skipped vaccinations among those who frequently travel overseas. The multicultural family experts also confirmed that many children from multicultural families may miss vaccinations when visiting family members abroad.
Despite the improved accessibility provided by the expansion of the National Immunization Program, which enables free vaccinations at private institutions, double-income parents can have difficulty finding time to bring their children to an institution for vaccination.
It was found that low accessibility to public health centers or clinics in rural areas still serves as a roadblock to increased vaccination.
Lack of information
Multicultural family experts pointed to the language barrier and insufficient information as the biggest obstacles for such families. Immigrant women often go through pregnancy and birth without having enough time to adjust to Korean culture. They also tend to have limited social circles, which is another factor preventing them from accessing appropriate information. To make matters worse, text messages and other guidelines sent by the KCDC are only in Korean, meaning that such families miss essential advice, such as the importance of vaccination, the recommended immunization schedule, and free immunization clinics. It is vital to encourage these immigrants to visit the Multicultural Family Support Center or to contact the call center to obtain relevant information at an early stage of immigration and to make information available in both Korean and their mother tongue.
Meanwhile, community health center experts shared their difficulties in notifying patients of missed or upcoming scheduled vaccinations due to inaccurate contact information, arguing that a systemic update is necessary.
Misclassification
If a Korean citizen receives a vaccination abroad and does not submit proof of vaccination to a domestic community health center, the nation does not have an updated vaccination history of that person. This brings about challenges in confirming whether a person has actually been vaccinated, as well as difficulties in future management. Missing records were indicated as the major reason for missed vaccinations in certain regions where many of the residents have homes both in Korea and abroad or have recently returned to Korea.
In some cases, vaccinations performed by private institutions may not be reflected in the system. The pediatricians blamed the hard-to-navigate immunization history log system for such mistakes. To make matters worse, medical institutions not only are free from any legal duty to enter vaccination data, but also are not allowed to charge for the work of data entry, giving them excuses not to keep the data up to date.
DISCUSSION
The focus group interviews of 15 vaccination experts suggested 4 main obstacles to vaccination: (1) vaccine hesitancy, (2) personal circumstances that impede vaccination, (3) lack of information, and (4) misclassification. Among them, vaccine hesitancy was identified as the most significant factor that discourages vaccination, although there seemed to be a few unmet needs despite the accomplishments of the National Immunization Program. The interviews allowed real-life experiences and examples to be shared by experts who have been working towards improving the vaccination rate, and the discussions clearly produced meaningful directions regarding how to develop effective interventions to further enhance vaccination coverage.
Barriers to Vaccination
Previous domestic studies have sought to identify major barriers to vaccination in Korea. A study published in 2010 [8] surveyed 700 parents of children aged 2 to 5 who belonged to socially vulnerable groups and found that main reasons for non-vaccination were the child being sick during the vaccination period, the financial burden of vaccination, and unfamiliarity with the vaccination schedule. In another study, a phone survey was conducted of 1000 married women aged between 25 and 39 with children 12 years old and younger [9], who suggested that time spent on vaccination (31.7%), high cost (27.8%), and inconvenient community health center service hours (12.8%) were their major complaints.
In contrast, in a 2012 study of 174 parents who skipped vaccinations for their children [10], the respondents stated that their main reasons were fear of possible side effects, suspicions regarding the necessity of vaccination, the child being sick during the vaccination period, the child having atopy, and preferring to acquire natural immunity, while only a few replied with reasons of being unfamiliar with the vaccination schedule and being busy. In another study conducted in 2016 [11], 928 of 1254 newborns in 2012 who had missing vaccination records were found to have stayed abroad for an extended period of time. Other reasons included hesitancy about vaccination due to fear of possible side effects, suspicions regarding the necessity of vaccination, and personal or religious beliefs, followed by medical-related reasons such as reduced immunity, atopy, and underlying diseases that prevented vaccination. Only 2 people pointed to difficulties visiting medical institutions.
When comparing the conclusions of this study to previous studies, it seems that the major factors influencing vaccination have changed over time. Traditional barriers, including financial burdens and geographical accessibility, have been mostly replaced by new barriers, such as vaccine hesitancy and frequent overseas travel, as a result of the implementation of the National Immunization Program, the increased number of institutions providing free vaccination, and the notification service about upcoming vaccinations.
Members of the younger generation, who have not experienced firsthand the risk of infectious diseases thanks to the triumph of successful vaccination programs that eliminated or reduced many epidemics, tend to fear such diseases less [12]. However, the same population is widely exposed to myths online regarding the risks and side effects of vaccines, leading them to become suspicious and distrustful toward immunization in general [13,14]. In this context, more parents seem to refuse vaccination for their children due to fears of possible side effects, suspicions regarding the necessity and effectiveness of vaccination, distrust of vaccination, and a preference for natural immunity [12,[15][16][17][18][19][20][21][22]. The World Health Organization (WHO) has recognized the gravity of this issue by defining such attitudes towards delaying or refusing vaccination due to issues of confidence, complacency, or convenience as "vaccine hesitancy" [23].
Missing vaccinations due to frequent overseas travel is one of the newest issues, reflecting an era in which going abroad is no longer a special occasion. The participants in our interviews suggested that notifications should be sent out to encourage those who missed a scheduled vaccination to get a shot nonetheless, and to promote the submission of certifications of vaccinations performed at overseas institutions to keep the system up-to-date. Some pointed out that institutionalizing the submission of a vaccination certification at the airport before and after travel would help manage vaccination in frequent travelers.
Meanwhile, the needs of children with underlying diseases, double-income parents, residents of areas with low accessibility to medical institutions, and multicultural families facing language and information barriers have remained unmet, despite the continued efforts to enhance vaccination coverage rates.
An individual's attitudes and behaviors regarding vaccination are influenced by a combination of complex factors at multiple levels, including the cultural, social, and political context. Therefore, an integrated strategy including interactive communication through respected community organizations, legal groundwork, and structural reforms will be vital to address the unmet needs of vaccination [24][25][26][27], in addition to individual-level health communication [28].
This study also confirmed that various factors prevent individuals from receiving vaccinations, with each affecting a particular vulnerable group. Therefore, a one-size-fits-all solution would not be applicable to all cases. Rather, a tailored approach targeting each vulnerable group would be more effective for addressing the unmet needs and improving vaccination coverage rates. The WHO also recommends evidencebased interventions targeting certain vulnerable groups [29].
The study also reinforced the findings of previous studies that educated parents make up a surprisingly high percentage of the population with vaccine hesitancy [30]. They are known to request accurate information regarding side effects and efficacy directly from healthcare workers [31][32][33]. Thus, in order to improve the rate of vaccination coverage, healthcare workers should provide proper information, based on facts and evidence, addressing beliefs that too many vaccinations could harm a child's immune system [34] or cause autism or autoimmune diseases [35][36][37], as well as concerns about whether vaccination during fever or atopy is safe. Healthcare providers are the point of contact for patients, and have the ability to bring about changes in their behavior. That is why efficient communication guidelines should be provided to healthcare workers [38] to promote efforts to build reliable relationships with patients and parents [12,39], and to deliver interventions via interactive communication [27,40].
The limitations of this study include the possibility of reflecting only narrow points of view and the experiences of certain individuals. This issue is most prominent for the pediatricians, who accounted for only 2 participants. Both a private practitioner and a university hospital staff member were selected as participants to complement such limitations and to help the research team understand barriers in different practice environments. In addition, the private practitioner represented the voice of his colleagues as the chairman of the Korean Association of Pediatric Practitioners, and the university hospital pediatrician also represented the opinions of the Korea Pediatric Society.
Despite such limitations, the focus group interviews provided an in-depth understanding of the attitudes, perceptions, and beliefs of parents who do not vaccinate their children through a qualitative approach. Fundamental roadblocks to vaccination were identified through the vivid experiences shared by the participants, who were experts working at vaccination sites. Hopefully, the results of this study can be used to establish a more tailored intervention strategy that can further increase the rate of vaccination coverage in Korea.
Thoracoscopy for Spontaneous Pneumothorax
Video-assisted thoracic surgery (VATS) is the treatment of choice for recurrence prevention in patients with spontaneous pneumothorax (SP). Although the optimal surgical technique is uncertain, bullous resection using staplers in combination with mechanical pleurodesis, chemical pleurodesis and/or staple line coverage is usually undertaken. Currently, patient satisfaction, postoperative pain and other perioperative parameters have significantly improved with advancements in thoracoscopic technology, which include uniportal, needlescopic and nonintubated VATS variants. Ipsilateral recurrences after VATS occur in less than 5% of patients, in which case a redo-VATS is a feasible therapeutical option. Randomized controlled trials are urgently needed to shed light on the best definitive management of SP.
Introduction
Pneumothorax can occur spontaneously or because of trauma or procedural complication. Spontaneous pneumothoraces (SP) are divided into primary (PSP) and secondary (SSP). PSP occurs in someone without a known underlying lung disease, whereas SSP appears as a complication of an underlying lung disease, such as chronic obstructive pulmonary disease, lung cancer, interstitial lung disease, or tuberculosis. This distinction is probably artificial since most PSP patients have small subpleural emphysematous blebs and bullae (usually located in the lung apices) that may rupture, causing air to enter the pleural space, while others have an unrecognized lung disease (e.g., thoracic endometriosis, Birt-Hogg-Dubé syndrome, lymphangioleiomyomatosis, Langerhans cell histiocytosis, Ehlers-Danlos syndrome) [1]. Management of SP is guided by clinical symptoms, pneumothorax size and side (e.g., bilaterality), cause (PSP or SSP), occupation, and risk of recurrence. A recent meta-analysis of 29 studies comprising more than 13,500 adult patients with a first episode of PSP found that approximately 30% experienced recurrence within the first year, and females were at higher risk than males [2]. Of 170,929 hospital admissions in England for SP, 60.8% of patients had chronic lung disease [3]. The recurrence rate was 25.5% at 5 years, but it was higher in SSP than PSP (32% vs. 21%). Once a patient has a recurrence, subsequent recurrences are even more common. After a brief overview of the general management of SP and thoracoscopic techniques, this narrative review focuses on the role of thoracoscopy for the first and subsequent SP episodes.
Overview on the Management of Spontaneous Pneumothorax
Initial therapeutic options for PSP include observation with or without supplemental oxygen, manual aspiration with needle (14-16 G) or catheter (8-9 Fr), and chest catheters (≤14 Fr) or tubes (16-24 Fr) connected to either a water seal or an ambulatory drainage device [4-6]. Individuals with an SSP are primarily treated with catheter or tube thoracostomy (Table 1). Chest catheters (≤14 Fr) are recommended over chest tubes (>14 Fr) for both PSP and SSP, though some patients with SSP (e.g., hemopneumothorax, tension pneumothorax, barotrauma from mechanical ventilation, large air leaks) may benefit from large-bore chest tubes (24-28 Fr). After the initial management of SP, the need for a definitive procedure to prevent recurrences should be evaluated (Table 2). In short, when a definitive procedure is indicated, video-assisted thoracic surgery (VATS) with stapling of blebs/bullae and pleurodesis is the treatment of choice [4-6]. Interventions should target the apical half of the thorax. In nonsurgical candidates, chemical pleurodesis (e.g., talc, doxycycline) via chest tube represents an acceptable alternative. Ideally, the timing of these procedures should be within the same hospital admission, as the risk of recurrence is highest during the first months [2].
Should Thoracoscopic Surgery Be Offered for Every First Episode of PSP?
Some experts have suggested that thoracoscopy should be done at the first episode of PSP, irrespective of the circumstances highlighted in Table 2. The rationale is to reduce the patient's anxiety and the economic healthcare burden related to a second episode. A meta-analysis of nine studies (1121 patients), of which only two were randomized controlled trials (RCT) (222 patients), showed that patients with a first episode of PSP have a more significant reduction in the ipsilateral recurrence rate when treated with VATS (irrespective of the type of surgical technique) than when treated conservatively (odds ratio 0.13) [7]. Specifically, for every three patients that undergo VATS operations, one recurrence is avoided (number needed to treat of 3.1 patients). One RCT recommended preventive VATS, particularly in those patients whose high-resolution computed tomography demonstrated bullae ≥2 cm [8]. However, the quality of current evidence on this debatable topic is moderate at best. On the other hand, if all patients were operated on after the first PSP occurrence, about two-thirds of them would undergo an unnecessary intervention; not to mention the potential, though few, side effects of surgery (e.g., postoperative bleeding, chest pain or paresthesia), and the increased technical difficulties for future thoracic surgeries. Ultimately, the preferred approach for an initial episode of PSP that does not meet the conditions outlined in Table 2 will be a decision shared with an adequately informed patient.
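To make the arithmetic behind these figures concrete, the number needed to treat follows from the absolute risk reduction. As an illustration only (the event rates underlying the meta-analysis are not stated here), a hypothetical conservative-management recurrence rate of about 40% combined with the pooled odds ratio of 0.13 reproduces the reported value:

$$\text{odds}_{\text{VATS}} = 0.13 \times \frac{0.40}{1-0.40} \approx 0.087 \;\Rightarrow\; p_{\text{VATS}} \approx 0.08, \qquad \text{NNT} = \frac{1}{0.40 - 0.08} \approx 3.1$$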
Thoracoscopic Techniques
Thoracoscopy, a procedure which allows access to the pleural space for diagnostic and therapeutic purposes, has classically been divided into "medical" and "surgical" [9,10]. Medical thoracoscopy (MT) is also referred to as pleuroscopy or local anesthetic thoracoscopy. It is usually performed by interventional pulmonologists in a non-operating room setting (e.g., endoscopy suite), under local anesthesia and moderate sedation. MT may be delivered via rigid or semi-rigid (flexi-rigid) instruments. Conversely, surgical thoracoscopy or VATS is conducted by a surgeon in an operating room, under general anesthesia with single lung ventilation and, traditionally, using three entry ports and rigid instruments ( Figure 1). However, owing to technical advances, the boundary between medical and surgical thoracoscopy is becoming increasingly blurred. For example, nonintubated (spontaneous ventilation) uniportal VATS can be considered a "medical" variation of the original surgical procedure, which only has a few technical differences from classical MT. Table 3 provides a comparison of MT and VATS.
Table 3. Comparison of medical thoracoscopy and VATS.

Feature | Medical thoracoscopy | VATS
Indications in SP patients | Pleurodesis, electrocoagulation of blebs | Bullectomy/blebectomy, pleurodesis, staple line coverage
Anesthesia | Local, conscious sedation 1 | General, with single lung ventilation

1 Midazolam or dexmedetomidine in combination with fentanyl usually provide good sedation and analgesia. 2 The diameter of the rigid thoracoscope most commonly used is 7-10 mm. 3 Telescope of 3.3-5.5 mm. 4 Similar in handling to a flexible bronchoscope, the pleuroscope has a proximal rigid and a flexible distal part, and is 7 mm in outer diameter. 5 The thoracoscope and other instruments (e.g., stapler, grasper) are introduced through a single 2-2.5 cm skin incision. 6 Needlescopic VATS uses the existing chest drain wound as a working port and adds two 3-mm ports. SP, spontaneous pneumothorax; VATS, video-assisted thoracic surgery.
VATS for Spontaneous Pneumothorax
VATS (preferably) and thoracotomy are the two surgical approaches for the operative treatment of SP. During surgery, emphysema-like changes can be assessed in accordance with the Vanderschueren classification (stage I, normal pleura; stage II, pleural adhesions; stage III, blebs <2 cm; and stage IV, bullae >2 cm) [11]. If blebs and bullae are visible, which occurs in approximately 80% of the cases [5], they are generally resected (Figure 2) and then a pleurodesis procedure is undertaken. Even if macroscopic blebs/bullae are not apparent and no air leak is identified by a water or saline test, many surgeons proceed with lung apex excision, confident that emphysema-like changes will be discovered in the resected tissue [4,12]. However, under this specific situation (no endoscopic abnormalities and no air leak), others just prefer to apply talc poudrage [13]. Bullectomy/blebectomy (also referred to as wedge resection) is mostly accomplished using an endostapler, but other alternatives, such as bulla suturing (no-knife stapler), endoloop ligation, and electrocoagulation (diathermy), exist. Pleurodesis can be achieved through different methods, whether mechanical (dry gauze abrasion above the fifth rib, apical or partial parietal pleurectomy, pleural electrocauterization), chemical (insufflation of talc or instillation of another chemical agent), mixed (staple line coverage of the dissected visceral pleura with an absorbable mesh [e.g., cellulose, Vicryl, polyglycolic acid] and/or fibrin glue), or a combination thereof.
It should be noted that there is great variability in the surgical treatment of SP among institutions and a lack of high-quality RCT to guide evidence-based management. A meta-analysis of 51 studies (only 2 RCT) comprising 6907 patients compared outcomes of different thoracoscopic interventions for PSP [14]. It was found that recurrence rates were lowest in the wedge resection plus chemical pleurodesis group (1.7%) and highest in the wedge resection alone group (9.7%), thus emphasizing the importance of combining interventions.
VATS or Medical Thoracoscopy?
Whilst the surgical management of SP is typically reserved for VATS, it might rarely be performed by skilled proceduralists using MT. For instance, in one study, 124 patients with PSP underwent electrocoagulation of blebs/bullae and talc poudrage pleurodesis under MT [15]. The mean operative time was about 15 min and only 4 (3%) patients required reoperation by axillary thoracotomy during follow-up. However, since complex parenchymal interventions, such as bullectomy/blebectomy, are more appropriately undertaken at VATS, in clinical practice MT is reserved for cases where talc poudrage is simply selected as the method to prevent SP recurrences.
VATS or Open Thoracotomy?
VATS has gradually supplanted open thoracotomy and mini-thoracotomy and is now considered the standard definitive treatment of SP. VATS is minimally invasive and several meta-analyses have demonstrated that it results in a shorter operation time, less intraoperative blood loss, shorter hospital stays, fewer post-operative analgesic requirements, and better cosmesis than open surgery [16,17]. However, the risk of SP recurrence following VATS is higher when compared to thoracotomy, which justifies the use of supplemental procedures during surgery (i.e., pleurodesis) as previously stated. The higher postoperative recurrence rate with VATS can be attributed to a higher chance of missed leaking blebs and a less intense pleural inflammatory reaction induced by this technique than by thoracotomy. Overall, the frequency of SP recurrence following VATS is reported to range from about 4% to 11%, whereas it is approximately 1% with open thoracotomy [18,19].
A French national database comprising 7396 SP patients, of whom 977 (13%) were treated by open thoracotomy and 6419 (87%) by a three-port VATS technique, offers comparative data between both procedures [20]. Although the proportion of PSP and SSP was unreported, roughly 40% of patients had underlying respiratory conditions that could predispose to SSP. Surgical procedures consisted of bullectomy (57% in open surgery and 66% in VATS) and pleurodesis (100%), the latter being performed mainly by mechanical abrasion or apical pleurectomy (79%) or, less commonly, by using a chemical agent (21%). There was a significantly higher recurrence rate of SP after VATS (3.8% vs. 1.8%), with a median time to recurrence of 3 months. Hospital length of stay was reduced by an average of one day in patients subjected to VATS, while the frequency of pulmonary complications also favored this technique (8% vs. 12%) [20].
A recent national-level epidemiologic study in the United States included 21,838 SSP admissions during 2016 and 2017 [21]. Despite guideline recommendations, only 7366 (33.7%) received prophylaxis of SSP recurrence during the same hospitalization, largely by VATS (80.8%). The 90-day post-discharge recurrence rates were similar for VATS and open surgery (4.10% and 4.03%, respectively). However, the chance of developing a recurrent SSP was four to five times higher in patients who received medical pleurodesis alone.
Uniportal or Multiportal VATS?
VATS is classically performed using two or three ports. Experience with single-incision VATS is increasing, though still limited. A meta-analysis of 17 retrospective case-control studies examined 502 SP patients who underwent uniportal VATS and 486 treated with a three-port VATS procedure [22]. The uniportal variant, as compared with the three-port VATS, did not increase mortality, recurrence rates (4.34% vs. 4.79%), operative time (61 vs. 59 min) or postoperative hospital stay (5.71 vs. 5.84 days), but significantly reduced patient postoperative pain and paresthesia, and improved patient satisfaction. The largest series from a single center on the use of uniportal VATS for SP included 351 patients [23]. The authors proved the feasibility and safety of this technique, which had a recurrence rate of 3.6% and resulted in 85% patient satisfaction due to the single small scar. To our knowledge, there is only one RCT in which 135 PSP patients were recruited and treated by either a single, double, or three-port approach (45 in each branch) [24]. The study indicated that uniportal VATS was less painful and had better cosmetic results, while it yielded similar efficiency as the two or three-port variants (overall recurrence rate 5%). Despite these advantages, single-access VATS is not yet widely used for SP. One reason may be that it requires greater technical skill to manage surgical instruments within small confines. In fact, a study suggested at least 100 procedural experiences for proficiency [25].
VATS or Needlescopic VATS?
For surgeons with insufficient experience to perform a uniportal VATS, but who would like to offer patients the benefits of this surgical modality, needlescopic VATS emerges as a reasonable alternative. For example, in a retrospective comparison of 106 PSP patients who underwent needlescopic VATS and 89 who were managed with conventional VATS, the former procedure was significantly associated (like uniportal VATS) with less postoperative pain and minimal skin scarring (3 mm wounds) [26]. Needlescopic VATS has never been compared with classical VATS in an RCT.
Intubated or Nonintubated VATS?
VATS generally involves endotracheal intubation under general anesthesia, which is inevitably associated with a risk of complications related to major airway injury (e.g., sore throat, hoarseness, tracheal damage) and the residual effects of muscle relaxants. It is feasible, however, to perform SP surgery using intravenous and/or locoregional anesthesia in a spontaneously breathing patient; the so-called nonintubated or awake VATS. Nonintubated VATS is a suitable alternative for patients who cannot receive general anesthesia because of increased risks [27]. Only three RCT that respectively enrolled 43, 41, and 335 patients have evaluated the safety and feasibility of this procedural adaptation [28][29][30]. The largest one assigned half the patients to nonintubated VATS and the other half to mechanical ventilation and found that awake VATS hastened the recovery from surgery, decreasing the operative consumption of intravenous opioid analgesia and the overall cost of anesthesia [30]. The other two small RCT highlighted the shorter operative and perioperative time in the awake group [28,29].
VATS Pleurodesis
Several techniques can be used to induce pleural symphysis in SP patients subjected to VATS, with significant variations among surgeons, hospitals, and countries. Notably, the normal parietal pleural surface in SP patients tends to be excruciatingly painful during a pleurodesis procedure, thus making adequate pain control necessary in the days or weeks after the intervention.
Mechanical or Chemical Pleurodesis?
A meta-analysis of one RCT and 6 observational cohort studies tried to elucidate the best pleurodesis method, whether mechanical or chemical, following bullectomy for PSP [31]. Of 1933 PSP patients (mean age of around 27.5 years), 1032 were treated with mechanical pleurodesis and 901 with chemical pleurodesis. Mechanical pleurodesis consisted of pleural abrasion (n = 799), pleurectomy (n = 202) or both (n = 31), whereas chemical pleurodesis was performed predominantly with talc (n = 643) or, less commonly, minocycline (n = 69) or others. Chemical pleurodesis was superior in reducing recurrence rates (1.2% vs. 4%) and hospital stay (by 0.42 days). The reason for this superiority may be a matter of a more extensive distribution of the chemical agent throughout the pleural surface. Another meta-analysis of 5 studies (3 RCT and 2 retrospective) aimed to determine which VATS pleurodesis approach, single intervention (mechanical) or a combined intervention (mechanical and chemical), is more effective in preventing SP recurrence [32]. The combined group included 561 patients and the mechanical group 286. Adding a chemical agent to mechanical pleurodesis provided a 63% lower risk of developing a recurrent SP compared to single intervention, though at the expense of an increased rate of postoperative pain.
Finally, although a previous pleurodesis for SP neither makes patients unsuitable for lung transplantation nor significantly affects surgical outcomes [33,34], some experts prefer mechanical over chemical pleurodesis in transplant candidates, in the belief that talc may be associated with difficulties during surgical reintervention.
Mechanical Pleurodesis: Abrasion or Pleurectomy?
Partial pleurectomy entails a parietal pleura stripping, whereas abrasion involves the rubbing of the parietal pleura, using a gauze or brush, until petechial bleeding occurs. A meta-analysis of 6 RCT examined pleural abrasion and other procedures in PSP patients [35]. As compared with apical pleurectomy, pleural abrasion had advantages in terms of operative time, postoperative bleeding, and residual chest pain and discomfort, although both equally reduced the recurrence of PSP. In addition, abrasion is simpler and less technically demanding than pleurectomy.
Chemical Pleurodesis: Talc
Talc is the most commonly used sclerosing agent worldwide. Tetracycline derivatives (e.g., minocycline, doxycycline) are alternative options in countries where talc is unavailable. In a large prospective series of 1415 patients with PSP undergoing VATS, with intervention to bullae in half the cases and talc poudrage (3-4 g) in all, the incidence of recurrent SP was only 1.9% after a median follow-up period of 8.5 years [13]. Of note, bullae suturing or ligation (instead of resection) were associated with a significantly higher frequency of recurrence (3.8% and 15%, respectively) compared with subjects receiving talc poudrage alone (0.3%). A subsequent systematic review of 8 studies (n = 2324), of which only one was randomized, confirmed that PSP recurrence following VATS with talc poudrage was very low (0% to 3.2%) [36]. The same article reviewed 4 additional studies (n = 249) in which talc poudrage was insufflated through MT without intervention on the lung, which provided higher recurrence rates of between 2.5% and 10.2% [36].
For patients unable or unwilling to undergo VATS or MT, pleurodesis via chest catheter using talc slurry (or other sclerosing agent) is a possibility. However, talc slurry is less effective than talc poudrage for SP because of the ability of the latter to target the lung apex where most SP originate.
Staple Line Coverage
Covering the staple line area after bullectomy/bleblectomy reinforces the visceral pleura and also has a symphyseal effect. One prospective study randomized 1414 PSP patients who underwent bullectomy with staplers to a coverage group (n = 757) in which the staple line was covered with an absorbable cellulose mesh and fibrin glue, or to a mechanical abrasion group (n = 657) [37]. Both groups showed comparable recurrence rates at 1 year (9.5% and 10.7%, respectively), but patients in the mechanical group had significantly more residual pain. In another RCT of 204 PSP patients who required VATS bullectomy and apical pleural abrasion, half were assigned to receive Vicryl mesh to cover the staple line and the other half were not (control group) [38]. There was a reduction in postoperative SP recurrences at 1 year in the mesh group (2.9% vs. 15.7%). Finally, according to a meta-analysis of 8 studies (3 RCT and 5 retrospective), totalling 1095 SP patients who were subjected to bullectomy, staple line coverage with a bioabsorbable polyglycolic acid patch also resulted in a lower postoperative recurrence rate (3.7% vs. 15.3%) [39].
Redo-VATS
Experience is scarce on the optimal approach to recurrent SP following VATS or thoracotomy [40][41][42][43]. In a Korean series of 188 patients in whom PSP recurred after VATS, 76 (40%) underwent redo VATS surgery, 60 (32%) were treated by observation, and 52 (28%) by tube thoracostomy [42]. A subsequent recurrence was seen in 3%, 20%, and 33% of the treatment groups, respectively, but these figures were 2.9%, 68%, and 57% in a smaller series of 34 patients [41]. This emphasizes that redo-VATS is probably the best option in this particular population, unless pneumothorax size is minimal. The typical operative findings at redo-VATS are pleural adhesions (70-80%) and the presence of blebs/bullae (90%) which predominate on the staple line or are new and have a different location from those seen at the original VATS [41,42]. Minor postoperative complications develop in about 10% of the cases. In patients who previously received talc pleurodesis, pleural adhesions may be more dense and, therefore, redo-VATS may be more challenging.
Conclusions
The indication for a definitive procedure to prevent recurrences of SP should be based on the probability of new episodes, patient profession and preferences, and procedural aspects (e.g., risks, surgeon's skills). VATS is the preferred operative approach. Combining stapled bullectomy/blebectomy with a single (e.g., abrasion, partial pleurectomy, talc poudrage, staple line coverage) or double pleurodesis method results in very low recurrence rates. However, there is no consensus on the best treatment of blebs and bullae, except when they are leaking, nor on the ideal method of pleurodesis. In patients who are inoperable or refuse surgery, bedside pleurodesis via chest catheter/tube is recommended. Recurrences following VATS can be managed with redo-VATS.
Use of centrifugal systems for investigating water flow processes in unsaturated soils
Centrifugal modelling, both physical and numerical, has been used for studying groundwater flow and transport processes in the past. However, previous studies disagreed about whether numerical models can be used to simulate centrifugal systems under unsaturated flow conditions. In the present study, a numerical model based on Richards' equation was developed to predict one-dimensional unsaturated flow in centrifugal systems. The validity of the model was tested using data from physical models in four published benchmark problems. The ability of the numerical model to close the mass balance was also tested. It was shown that the newly developed numerical model was able to recreate the four benchmark problems quite successfully, indicating that using such a model under unsaturated flow conditions is feasible. The mass conservation results show that the model is more sensitive to the spatial grid resolution than to the specified temporal step. Therefore, a fine spatial discretization is suggested to ensure the simulation quality. Additionally, an adaptive time-stepping method can be used to improve the computational efficiency. It was found that the dimensionless factors used for scaling physical dimensions by 1/N, seepage velocity by N, and the temporal dimension by 1/N² were useful parameters for scaling centrifugal systems.
Definition of water potential. Figure 1 shows a schematic diagram of one-dimensional flow taking place in a centrifugal system. Flow is driven by a gradient in water potential, which is a function of elevation potential, kinetic energy and matric potential. The net water potential per unit mass at a point within a centrifugal system can be expressed as 23:

$$\Phi_m = P_e + \frac{1}{2}\left(\frac{v_m}{\theta_m}\right)^2 - \frac{\psi_m}{\rho} \qquad (1)$$

where Φ is water potential per unit mass [L² T⁻²], P_e is centrifugal elevation potential per unit mass [L² T⁻²], v is Darcy velocity [L T⁻¹], θ is water content [L³ L⁻³], ψ is matric suction [M L⁻¹ T⁻²], ρ is water density [M L⁻³], and subscript m denotes "centrifugal model" (hereafter the same). The seepage velocity (i.e. the ratio between Darcy velocity and water content) in the centrifugal acceleration field is generally small 24,25, so the kinetic-energy term is negligible. In this case, the water potential becomes:

$$\Phi_m = P_e - \frac{\psi_m}{\rho} \qquad (2)$$

The centrifugal acceleration is a function of radial distance and angular speed, and it is calculated with the following equation 19,26:

$$a = \omega^2 r \qquad (3)$$

where a is centrifugal acceleration [L T⁻²], r is radius [L], and ω is angular speed [T⁻¹]. Equation (3) shows that centrifugal acceleration is distributed along the vertical direction of the centrifugal system. This distribution is noticeably different from the distribution of natural gravitational acceleration (almost uniform near the earth surface), so the elevation potential of water in the system must be derived from the most basic definition of elevation potential. Centrifugal elevation potential is equal to the work done to overcome centrifugal force and elevate one unit mass of water from the datum to the current position. The mathematical expression for centrifugal elevation potential at r from the system bottom (set as the datum) is:

$$P_e = \int_r^{r_b} a \,\mathrm{d}r \qquad (4)$$

where r_b is the distance between the system bottom and the centrifuge axis [L]. Substituting Eq. (3) into Eq. (4) and carrying out the definite integration:

$$P_e = \frac{\omega^2}{2}\left(r_b^2 - r^2\right) \qquad (5)$$

Combining Eqs. (2) and (5), the water potential for unsaturated flow taking place in a centrifugal system can be written in the form of:

$$\Phi_m = \frac{\omega^2}{2}\left(r_b^2 - r^2\right) - \frac{\psi_m}{\rho} \qquad (6)$$

This expression of water potential is similar to the one used by Nimmo et al. 22 and Conca and Wright 27. The difference is the choice of datum, which was set to be the centrifuge axis in those equations.
Darcy's law. Darcy's law describes the internal relationship between flow and the gradient of water potential. Darcy's law for flow in a centrifugal system has been verified by many studies 8,24,25,28, and it can be written in the form of:

$$v = -\frac{K(\psi)}{g}\frac{\partial \Phi}{\partial r} \qquad (7)$$

where v is Darcy velocity [L T⁻¹], K(ψ) is unsaturated hydraulic conductivity [L T⁻¹], and g is gravitational acceleration [L T⁻²]. Substituting Eq. (6) into Eq. (7) and rearranging, Darcy's law for unsaturated flow under centrifugal acceleration can be deduced as:

$$v_m = \frac{K(\psi_m)}{g}\left(\omega^2 r + \frac{1}{\rho}\frac{\partial \psi_m}{\partial r}\right) \qquad (8)$$

Richards' equation for centrifugal modeling. Considering a control volume of a centrifugal system (shown in Fig. 1b), the principle of continuity leads to 16:

$$\frac{\partial \theta_m}{\partial t} = -\frac{\partial v_m}{\partial r} \qquad (9)$$

Combining Eqs. (8) and (9) yields 15:

$$\frac{\partial \theta_m}{\partial t} = -\frac{\partial}{\partial r}\left[\frac{K(\psi_m)}{g}\left(\omega^2 r + \frac{1}{\rho}\frac{\partial \psi_m}{\partial r}\right)\right] \qquad (10)$$

It describes the one-dimensional flow through unsaturated soils under a centrifugal acceleration system. By rearranging Eq. (10), unsaturated flow in the centrifugal system can be expressed as:

$$\frac{\partial \theta_m}{\partial t} = -\frac{1}{\rho g}\frac{\partial}{\partial r}\left[K(\psi_m)\frac{\partial \psi_m}{\partial r}\right] - \frac{\omega^2}{g}\frac{\partial}{\partial r}\left[K(\psi_m)\, r\right] \qquad (11)$$

Equations (10) and (11) are Richards' equation in two different forms. A detailed numerical method for solving the centrifuge equation in the form of Eq. (11) is provided.
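As a quick numerical companion to Eqs. (3), (6) and (8), the minimal sketch below evaluates the water potential and Darcy flux in SI units. The function names, the example values, and the constants for ρ and g are illustrative choices of this sketch, not quantities taken from the paper.

```python
import numpy as np

RHO, G = 1000.0, 9.81  # water density [kg m-3] and gravity [m s-2] (assumed SI values)

def water_potential(r, r_b, omega, psi):
    """Water potential per unit mass, Eq. (6); datum at the column bottom r_b.
    psi is matric suction [Pa]; r and r_b are radii from the centrifuge axis [m]."""
    return 0.5 * omega**2 * (r_b**2 - r**2) - psi / RHO

def darcy_flux(r, omega, psi_gradient, K):
    """Darcy flux under centrifugal acceleration, Eq. (8); positive values point
    away from the rotation axis. psi_gradient is d(psi)/dr [Pa m-1]."""
    return K / G * (omega**2 * r + psi_gradient / RHO)

# Example: at omega = 100 rad/s and r = 0.2 m, the centrifugal acceleration
# omega**2 * r = 2000 m/s**2, roughly 200 g.
print(darcy_flux(r=0.2, omega=100.0, psi_gradient=0.0, K=1e-6))
```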
Richards' equation of the prototype. Unsaturated flow taking place under normal gravitational acceleration is considered in the prototype (see Fig. 1a). In order to set an inspection standard for centrifugal modeling, a standard Richards' equation of the prototype in the following form was solved:

$$\frac{\partial \theta_p}{\partial t} = \frac{\partial}{\partial z}\left[K(\psi_p)\left(1 - \frac{1}{\rho g}\frac{\partial \psi_p}{\partial z}\right)\right] \qquad (12)$$

where z is the height from the column bottom [L]. The other symbols in Eq. (12) are the same as those presented in the derivation of the Richards' equation of centrifugal modeling (RECM), and the subscript p denotes "prototype" (hereafter the same). Equation (12) is a mixed form of the Richards' equation 29 and its simulation results are set as the inspection standard to verify centrifugal modeling.

g-Level. The ratio between the effective acceleration and gravity is termed the g-level, which is always denoted as N.
Because centrifugal acceleration is a function of radius and angular speed, an error caused by the stress distribution exists between the centrifugal model and the prototype. Taylor pointed out that the error is minimal if the stress at two-thirds of the model height is the same as at the corresponding point of the prototype 30, so N can be obtained using the following equation:

$$N = \frac{\omega^2}{g}\left(r_b - \frac{2}{3}L_m\right) \qquad (13)$$

where L_m is the length of the centrifugal system [L]. In this study, we reviewed and adapted some of these numerical schemes and presented a detailed step-by-step approach that can be directly used by others. Furthermore, we assembled a set of benchmarks to rigorously validate the numerical solution.
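A minimal sketch of the g-level and scaling relations, assuming the Taylor-rule expression reconstructed above (Eq. 13) and the scale factors quoted in the abstract (length 1/N, seepage velocity N, time 1/N²); all names below are illustrative.

```python
def g_level(omega, r_b, L_m, g=9.81):
    """g-level N per Taylor's rule (Eq. 13): centrifugal acceleration evaluated
    at radius r_b - 2*L_m/3, divided by gravity."""
    return omega**2 * (r_b - 2.0 * L_m / 3.0) / g

def model_to_prototype(length_m, velocity_m, time_m, N):
    """Convert model-scale quantities to prototype scale: lengths grow by N,
    seepage velocities shrink by N, times grow by N**2."""
    return length_m * N, velocity_m / N, time_m * N**2

# Example: a 47-mm column whose bottom sits 221 mm from the axis at 100 rad/s
# (the Case 2 geometry below) gives N of about 193, close to the quoted N = 194.
print(g_level(omega=100.0, r_b=0.221, L_m=0.047))
```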
The entire column length of the centrifugal system is divided into n finite-difference cells with n nodes as shown in Fig. 2 (the (n + 1)th node is a virtual one, used only for the flux boundary at the top of the system). For the ith node in the system, a fully implicit finite-difference approximation of the spatial terms in Eq. (11), using a Picard iteration scheme to linearize the nonlinear terms, can be described as:

$$-\frac{1}{\rho g}\,\frac{K^{j+1,\tau}_{m,i+1/2}\left(\psi^{j+1,\tau+1}_{m,i+1}-\psi^{j+1,\tau+1}_{m,i}\right)-K^{j+1,\tau}_{m,i-1/2}\left(\psi^{j+1,\tau+1}_{m,i}-\psi^{j+1,\tau+1}_{m,i-1}\right)}{\Delta r^{2}}-\frac{\omega^{2}}{g}\,\frac{K^{j+1,\tau}_{m,i+1/2}\,r_{i+1/2}-K^{j+1,\tau}_{m,i-1/2}\,r_{i-1/2}}{\Delta r} \qquad (14)$$

where j denotes the jth discrete time level, Δr is the spatial step [L], τ is the Picard iteration level, subscripts i ± 1/2 denote interblock values between adjacent nodes, K^{j+1}_{m,i} is the hydraulic conductivity of the ith node at the (j + 1)th time level [L T⁻¹], ψ^{j+1}_{m,i} is the matric suction at that node [M L⁻¹ T⁻²], and K(ψ) is a nonlinear function of ψ, which is linearized using a Picard iteration scheme 1.
A backward Euler approximation, coupled with the Picard iteration scheme, is used to discretize the temporal term on the left-hand side of Eq. (11):

$$\frac{\partial \theta_m}{\partial t} \approx \frac{\theta^{j+1,\tau+1}_{m,i}-\theta^{j}_{m,i}}{\Delta t} \qquad (15)$$

$$\theta^{j+1,\tau+1}_{m,i} \approx \theta^{j+1,\tau}_{m,i} + C^{j+1,\tau}_{m,i}\left(\psi^{j+1,\tau+1}_{m,i}-\psi^{j+1,\tau}_{m,i}\right) \qquad (16)$$

where Δt is the time step [T]. The specific water capacity of the soil is defined as:

$$C(\psi) = \frac{\mathrm{d}\theta}{\mathrm{d}\psi} \qquad (17)$$

Using Eqs. (15)-(17), the partial time derivative of water content is approximated as:

$$\frac{\partial \theta_m}{\partial t} \approx \frac{\theta^{j+1,\tau}_{m,i}-\theta^{j}_{m,i}}{\Delta t} + C^{j+1,\tau}_{m,i}\,\frac{\psi^{j+1,\tau+1}_{m,i}-\psi^{j+1,\tau}_{m,i}}{\Delta t} \qquad (18)$$

The first term on the right side of Eq. (18) is an explicit estimate of the partial time derivative of water content, based on the τth Picard-level estimates of matric suction. In the second term on the right side of Eq. (18), the numerator of the bracketed fraction is an estimate of the error in the pressure head at node i between two successive Picard iterations. Its value diminishes as the Picard iteration process converges. As a result, as the Picard process proceeds, the contribution of the specific water capacity C(ψ) is diminished 1.
The finite-difference expressions for the spatial and temporal derivatives of Eqs. (14) and (18) are rearranged by moving all the unknowns to the left side and all the knowns to the right, in agreement with Eq. (11). Using the above implicit finite-difference approximation, the matric suction at the (j + 1)th time level and (τ + 1)th Picard level is obtained by solving the following linear algebraic equations:

$$a_i\,\psi^{j+1,\tau+1}_{m,i-1} + b_i\,\psi^{j+1,\tau+1}_{m,i} + c_i\,\psi^{j+1,\tau+1}_{m,i+1} = d_i \qquad (19)$$

Equation (19) applies to all interior nodes; this equation is modified at boundary nodes to reflect the appropriate boundary conditions, giving the matrix form below:

$$\mathbf{A}\,\boldsymbol{\psi} = \mathbf{b} \qquad (20)$$

where ψ is the vector of unknown matric suctions ψ^{j+1,τ+1}_{m,i}, b is the forcing vector, and A is a square matrix consisting of the coefficients of the finite-difference Eq. (19). Equation (20) can be solved using a Thomas algorithm.
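The Thomas algorithm referenced above is the standard O(n) forward-elimination and back-substitution pass for tridiagonal systems. A minimal sketch follows; the array layout (sub-, main and super-diagonal vectors) is an assumption of this sketch, not taken from the paper.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system A x = d with the Thomas algorithm.

    a: sub-diagonal (length n, a[0] unused)
    b: main diagonal (length n)
    c: super-diagonal (length n, c[-1] unused)
    d: right-hand side (length n)
    """
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: [[2,-1,0],[-1,2,-1],[0,-1,2]] x = [1,0,1] has the solution [1,1,1].
a = np.array([0.0, -1.0, -1.0])
b = np.array([2.0, 2.0, 2.0])
c = np.array([-1.0, -1.0, 0.0])
d = np.array([1.0, 0.0, 1.0])
print(thomas(a, b, c, d))
```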
Boundary conditions. In most cases, the top boundaries are set as flux boundaries and the bottom ones are set to be free-drainage faces during centrifugal experiments. According to the existing literature 31, the free-drainage boundary is treated as a constant-pressure-head boundary in numerical simulations. Therefore, the boundary conditions used hereafter are as follows: a constant water flux is applied at the top and a constant matric suction is fixed at the bottom. Mathematically these conditions can be represented as:

$$v_m(r_t, t) = q_t, \qquad \psi_m(r_b, t) = \psi_b \qquad (21)$$

where q_t is the prescribed water flux at the top [L T⁻¹], ψ_b is the prescribed matric suction at the bottom [M L⁻¹ T⁻²], and r_t is the radius of the top [L]. It should be noticed that special attention needs to be paid to the flux boundary, which is imposed through the virtual (n + 1)th node at the top of the system.
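To show how Eqs. (14)-(21) fit together, here is a compact sketch of one backward-Euler time step with modified Picard iteration. The node ordering, the half-cell treatment of the flux boundary, the convergence tolerance, the dense solve (in place of the Thomas algorithm above), and the user-supplied soil-hydraulic functions are all simplifying assumptions of this sketch.

```python
import numpy as np

RHO, G = 1000.0, 9.81  # water density [kg m-3] and gravity [m s-2] (assumed SI values)

def picard_time_step(psi, theta_old, r, dr, dt, omega, q_top, psi_bot,
                     theta_fun, K_fun, C_fun, tol=0.1, max_iter=100):
    """One backward-Euler step of Eq. (11) with modified Picard iteration.

    Nodes run from the column top (smallest radius) to the bottom (largest
    radius); psi is matric suction [Pa] and theta_old the water contents at
    the previous time level. theta_fun, K_fun and C_fun are user-supplied
    retention, conductivity and specific-capacity functions (note that
    C = d(theta)/d(psi) <= 0 with the suction convention used here).
    q_top is the prescribed Darcy flux at the top [m/s], positive away from
    the rotation axis; psi_bot is the prescribed bottom suction [Pa].
    """
    n = len(psi)
    psi_new = psi.copy()
    for _ in range(max_iter):
        K, C, theta = K_fun(psi_new), C_fun(psi_new), theta_fun(psi_new)
        Kf = 0.5 * (K[:-1] + K[1:])   # interblock conductivities K_{i+1/2}
        rf = 0.5 * (r[:-1] + r[1:])   # interblock radii r_{i+1/2}
        A = np.zeros((n, n))
        b = np.zeros(n)
        for i in range(1, n - 1):     # interior nodes, cf. Eqs. (14) and (18)
            w_up = Kf[i - 1] / (RHO * G * dr**2)
            w_dn = Kf[i] / (RHO * G * dr**2)
            A[i, i - 1] = w_up
            A[i, i] = C[i] / dt - (w_up + w_dn)
            A[i, i + 1] = w_dn
            grav = (omega**2 / (G * dr)) * (Kf[i] * rf[i] - Kf[i - 1] * rf[i - 1])
            b[i] = C[i] / dt * psi_new[i] - (theta[i] - theta_old[i]) / dt - grav
        # top node: prescribed flux balanced over a half cell (cf. Eq. 21)
        w = 2.0 * Kf[0] / (RHO * G * dr**2)
        A[0, 0] = C[0] / dt - w
        A[0, 1] = w
        b[0] = (C[0] / dt * psi_new[0] - (theta[0] - theta_old[0]) / dt
                + (2.0 / dr) * (q_top - Kf[0] * omega**2 * rf[0] / G))
        # bottom node: prescribed suction (free drainage treated as Dirichlet)
        A[-1, -1] = 1.0
        b[-1] = psi_bot
        # the paper solves this tridiagonal system with the Thomas algorithm;
        # a dense solve keeps the sketch short
        psi_next = np.linalg.solve(A, b)
        if np.max(np.abs(psi_next - psi_new)) < tol:
            return psi_next
        psi_new = psi_next
    return psi_new
```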
Published physical models for benchmarking
The performance of the numerical model presented above is compared with four published experimental datasets, each representing a different scenario. The experimental setups of the four cases are shown in Fig. 3, and the parameters used in the numerical simulations are listed in Table 1.
Case 1: suction forced unsaturated flow when ω = 0. Data used in Case 1 come from Kirkham 36 (Fig. 3A), where the unsaturated flow is forced by a suction gradient in the absence of elevation potential (ω equals zero). In this case, the centrifugal system can be considered as a horizontal column experiment. The horizontal column was constructed of acrylic sections which vary from 0.9 to 2.6 cm in length. The acrylic material enables the advance of the wetting front to be observed and the section orientation allows rapid partitioning of the column at the end of the experiment by pushing down on column sections. The total length of the column is 25 cm filled with the Ferrosol soil. One end of the column was attached to a Mariotte bottle, and the other end was kept open. As the sampling was destructive, six experiments were conducted to obtain time-series data of water flow. The initial and boundary conditions were almost the same with slight differences, and average values are used here for the simulations. The initial water content of the soil column was uniformly set to 0.155 cm³/cm³, the end attached to the Mariotte bottle is treated as a constant suction boundary with a value of 0 kPa, and the other end is treated as a free-drainage boundary.
Case 2: steady-state profiles. Data used in Case 2 come from Nimmo et al. 22 (Fig. 3B), and it is selected to verify the prediction of the steady-state profiles. In this case, an internal flow control (IFC) apparatus to measure unsaturated hydraulic conductivity in a relatively short time was developed and Darcy's law under low hydraulic conductivity was tested. A cylindrical sample with a diameter of 50 mm and a height of 47 mm was filled with Oakley sand, saturated, and put on the apparatus. The distance between the bottom of the sample and the centrifuge axis was 221 mm, and an angular speed of 100 rad/s (N ≈ 194) was applied to the sample.

Case 3: one-step outflow tests. Data used in Case 3 come from Nakajima and Stadler 14 (Fig. 3C), and it is selected to verify the prediction of transient unsaturated flow processes. The original physical model was intended to estimate unsaturated soil parameters by using one-step outflow tests with the help of a 2-m radius geotechnical centrifuge at the Idaho National Laboratory. An apparatus which allows suction pressure heads within the samples and cumulative outflow to be measured was used. The apparatus was filled with dry fine Ottawa sand and saturated with deaired water in a large vacuum chamber for all the experiments. Then, the container filled with soil was placed on the centrifuge and spun to the desired accelerations. The tops of the samples were set as zero-flux boundaries, and the bottoms were kept in contact with the water table while the experiments were running. Cumulative outflow data and suction pressure heads during the processes were collected, and the unsaturated soil parameters were then estimated based on the collected data. These experiments were conducted under three different g-levels (10 g, 20 g, and 40 g), but the same prototype was modeled.
Case 4: unsaturated flow in multi-rotation experiments. Data in Case 4 come from Šimůnek and Nimmo 15 (Fig. 3D), and it is chosen to verify the prediction of transient unsaturated flow in multi-rotation experiments with a centrifuge. The data were collected in three multi-rotation experiments, marked as Run 1, Run 2 and Run 3, respectively. The apparatus used for the experiments was the IFC apparatus developed by Nimmo et al. 22, and eight electrodes were buried for water content measurements at five different depths with six data channels. Oakley sand was packed in the apparatus and saturated. Afterwards, the experiments were carried out and data were collected with a zero-flux top boundary and constant-suction bottom boundaries. Equilibrium and transient analyses were conducted and the unsaturated hydraulic parameters were then estimated.
Results
In Case 1, the comparison between Kirkham's data 36 of unsaturated flow dominated by the matric suction gradient and the simulation results is shown in Fig. 4. The unsaturated flow processes are simulated by the numerical model with the parameters and adjusting conditions listed in Table 1. A good agreement (R² values of all six comparisons are larger than 0.9) is observed, which illustrates that the numerical model can be applied in this situation. In Case 2, the results predicted by the numerical model and the data collected by Nimmo et al. 22 are compared in Fig. 5a. There is a good agreement between the model simulation and the observed data in terms of the moisture content profile; the R² value between the simulated values and observed data is 0.9972. Furthermore, the steady-state suction profile was also predicted and is shown in Fig. 5b.
In Case 3, the predicted transient flow is compared with the experimental results presented by Nakajima and Stadler 14 in Fig. 6. The agreement between the simulated and observed results is acceptable. The R² values of most comparisons are larger than 0.9, while the R² values of the comparisons at 80 min and 192,000 min at a g-level of 40 g are 0.5563 and 0.8708, respectively. The poor match of the test at 40 g is caused by non-ideally controlled boundaries and the limited performance of the tensiometers. According to Fig. 6, it is difficult for water to be discharged from the bottom of the samples at the starting phase of the experiment. The positive values of the suction pressure heads near the bottom at 80 min (prototype time) indicate that water is ponding there. Besides, a rapid pore-water pressure drop occurs when testing under a higher g-level, eventually inducing cavitation of the ceramic attached to the tensiometers 14, which degrades the performance of these inset tensiometers.
Three separate runs from re-saturation with different rotation speeds were considered and simulated in Case 4. As these runs have similar intentions, only Run 1 of Case 4 is selected to test the unsaturated flow in multi-rotation experiments in this study, and the sequence of centrifuge speeds is given in Table 2. It should be noted that the value of the constant suction at the soil bottom varies at the times when the rotation speed is changed. Since a 9.2-mm porous plate was set at the soil bottom for support during the experiments, only the plate's bottom was kept in contact with the water table. A thickness of 9.2 mm is sufficient for the whole plate to stay saturated within a centrifugal field, so the suction at the soil bottom (in contact with the top face of the plate) will not be zero. In order to deal with this, an assumption was made for the simulation, namely that every point in the plate has the same fluid potential. In that case, and based on Eq. (6), the constant suction at the soil bottom can be obtained:

$$\psi_b = \frac{\rho\,\omega^2}{2}\left[r_b^2 - \left(r_b - H\right)^2\right]$$

where H is the thickness of the plate [L]. With the bottom boundary condition solved, the unsaturated flow in Run 1 is then predicted, and the simulation results are compared with the data from Šimůnek and Nimmo 15 in Fig. 7. The R² values at the five depths between the simulated values and observed data are 0.9642, 0.9408, 0.9080, 0.9660 and 0.9259, respectively, which are all larger than 0.9. This indicates a good agreement of the water content changes at the five depths, and the predicted pressure head profiles at the times when the rotation speed is changed are exactly the same as the data presented by Šimůnek and Nimmo 15.
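Under the equal-potential assumption just described, the bottom-boundary suction can be computed directly from Eq. (6). The sketch below assumes the water table sits at the lower face of the plate (radius r_b) with zero suction there, which is one plausible reading of the setup; the function name and example values are hypothetical.

```python
RHO = 1000.0  # water density [kg m-3] (assumed SI value)

def bottom_suction(omega, r_b, H):
    """Suction [Pa] at the soil bottom (top face of a saturated porous plate
    of thickness H [m]), assuming the plate's lower face at radius r_b [m]
    rests on the water table (zero suction) and the fluid potential is equal
    throughout the plate."""
    return 0.5 * RHO * omega**2 * (r_b**2 - (r_b - H)**2)

# Example: a 9.2-mm plate with its lower face 221 mm from the axis at 30 rad/s
# gives a bottom suction of roughly 1.8 kPa.
print(bottom_suction(omega=30.0, r_b=0.221, H=0.0092))
```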
Discussion
Validation of the numerical model by four physical models. The numerical model developed in the present study was validated by four benchmark cases, and it was shown that the numerical model can be applied to simulate unsaturated flow in centrifugal systems over a broad range of problems. However, the numerical model also showed certain limitations in predicting unsaturated flow with sufficient accuracy.

Taking Case 1 as an example, a further inspection was carried out on the simulated mass balance in connection with the model discretization (Eq. 26 applied), and the results are shown in Fig. 8. According to Fig. 8, the numerical model showed good mass conservation when both the time and space intervals are small. The performance of the numerical model is more sensitive to the spatial discretization than to the temporal discretization. When the time step varied from 1 to 60 s, the numerical solution exhibited a mass balance error of less than 5% at 96 min. On the contrary, the mass balance errors are more than 10% and 60% at 96 min when the space intervals are 2.5 mm and 5 mm, respectively. Therefore, it is suggested that a fine spatial discretization is necessary to ensure the quality of the simulation. It is noted that Case 1 had no inertial acceleration added to the column; when the angular speed is not zero, a significantly finer spatial discretization is needed to accommodate the large gradients at early times of the simulation and achieve smaller mass balance errors, especially at the outflow boundary 15.

It is noted from Fig. 5b that when the steady-state condition is achieved, the distributions of moisture content and suction towards the sample top are almost uniform. This phenomenon was also observed in other studies 16,26,31, and the larger the N value of the gravity level, the more obvious the phenomenon 31. In that case, the suction gradient term in the governing equation approaches zero near the sample top, so the flow there is driven almost entirely by the centrifugal acceleration. Equation (32) is the basic equation of Nakajima and Stadler 14 for measuring unsaturated hydraulic conductivity. According to Eq. (32), the unsaturated hydraulic conductivity can be calculated from a known supplied water flux and angular speed, and the corresponding water content can be measured after the experiment. Furthermore, a different data set (K, θ) can be obtained by changing the rotation speed with the other conditions kept the same, so the method can be used to obtain soil moisture characteristic curves conveniently 31.
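For reference, the kind of global mass-balance check behind Fig. 8 can be sketched as follows; the array names are hypothetical and would have to be adapted to the actual solver outputs (Eq. (26) itself is not reproduced here).

import numpy as np

def mass_balance_error(theta, dz, dt, q_top, q_bot):
    # theta: water content profiles, shape (n_times, n_nodes)
    # q_top, q_bot: boundary water fluxes at each time step, positive inward
    # change in water stored in the column between the first and last profile
    d_storage = np.trapz(theta[-1], dx=dz) - np.trapz(theta[0], dx=dz)
    # cumulative net inflow through the two boundaries over the same period
    net_inflow = np.sum((np.asarray(q_top) - np.asarray(q_bot)) * dt)
    return abs(d_storage - net_inflow) / max(abs(net_inflow), 1e-30)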
Validity of using a numerical model for 1D centrifugal modelling. Several previous studies have investigated the uncertainty of using a numerical model for 1D centrifugal modeling. Goforth et al. 19 derived a form of Darcy's law, similar to Eq. (8), to describe the centrifugal fluid mechanics by performing a force equilibrium analysis on a fluid volume. They pointed out that the seepage velocity is directly proportional to the g-level only if the pressure gradient (i.e., ∂ψ_m/∂r) is zero, so the scale factor for the seepage velocity may be wrong. Furthermore, fluid flux in soil is dominated by suction gradients, which can be 10-1000 times greater than the gradient due to gravity 37. On this basis, Goforth et al. 19 commented that there is no advantage in modeling unsaturated flow in a centrifugal field and that it may not be feasible, but they presented no data to support this claim. Poulose et al. 20 investigated moisture migration in silty soil using a small centrifuge. By comparing data collected at three different g-levels, they showed that such models may be valid only for saturated soil, and thus agreed with the comments of Goforth et al. 19. However, several applications of centrifugal modeling under unsaturated conditions have been reported in the literature 5-8, and most of them tacitly assume that such modeling is feasible. Therefore, it is time to clarify this uncertainty, and the algorithm presented above is used for this purpose.
Through different methods (i.e., dimensionless analysis, inspectional analysis, and comparison of analytical solutions), the scale factors for unsaturated flow are N, 1/N, and 1/N², corresponding to the seepage velocity, the dimensions, and the time of the centrifugal system relative to the prototype, respectively. To verify whether centrifugal modeling can reproduce the unsaturated flow of the prototype, these scale factors are first assumed to be correct. An infiltration scenario is then simulated by the numerical model presented above, and Eq. (12) is also solved numerically to predict the unsaturated flow under the same conditions (i.e., in the prototype). The numerical scheme for Eq. (12) follows the above-mentioned algorithm, which is based on the approach of Clement et al. 1. To judge whether the simulated centrifugal experiment correctly reproduces the prototype, the Hydrus-1D software package is used to directly predict the unsaturated water transfer process in the full-scale prototype. The simulated scenario is water infiltrating through a uniform, unsaturated, low-permeability soil column with a saturated hydraulic conductivity of 1 × 10⁻⁷ m/s. The initial water content throughout the column is 0.22, the column top is a flux boundary, and the bottom is kept wet during the migration process. The input and modeling parameters of the centrifugal systems under different g-levels and of the prototype are listed in Table 3; the scale factors are already reflected in these values. A comparison of the simulated results of the centrifugal systems and the prototype is made in Fig. 9. It can be seen in Fig. 9 that the unsaturated flow in the scaled-down centrifugal systems is almost the same as in the prototype, except for the lower part of the column, which preliminarily illustrates that these scale factors are valid and that centrifugal modeling can be used to study unsaturated flow processes.
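The similarity ratios themselves are simple to apply; a minimal sketch, consistent with how the Table 3 values are constructed, is given below (the function name is an assumption for illustration).

def model_to_prototype(N, length_m=None, time_m=None, velocity_m=None):
    # Scale centrifuge-model quantities (subscript m) to prototype scale:
    # length x N, time x N**2, seepage velocity / N.
    out = {}
    if length_m is not None:
        out["length"] = N * length_m
    if time_m is not None:
        out["time"] = N**2 * time_m
    if velocity_m is not None:
        out["velocity"] = velocity_m / N
    return out

# A 0.2 m column at N = 40 represents an 8 m prototype, and 96 min of
# centrifuge time represents 96 * 40**2 = 153,600 min of prototype time.
print(model_to_prototype(40, length_m=0.2, time_m=96.0, velocity_m=1e-6))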
Use of centrifugal modeling under unsaturated flow conditions. As mentioned above, Goforth et al. 19 pointed out that the seepage velocity is directly proportional to the g-level only if the pressure gradient is zero, but the simulation results of the hypothetical scenario show that this may be incorrect. The reason is that there is another situation in which the pressure gradient is itself proportional to the g-level, which likewise guarantees that the seepage velocity is proportional to the g-level. According to Fig. 9, the matric suction at corresponding locations is the same in the centrifugal systems and the prototype, and since the dimension is reduced by a factor of 1/N, the pressure gradient between any two corresponding points in the centrifugal system is N times larger than in the prototype. This means the pressure gradient is proportional to the g-level, and the scale factor of the seepage velocity is correct. Goforth et al. 19 commented that it may not be feasible to model unsaturated flow with a centrifuge. One possible reason is that they considered that the centrifugal field would enhance the role of the elevation potential gradient. Denoting the proportions of the elevation potential gradient in the total driving force for unsaturated flow in the centrifugal system and the prototype by η_m and η_p, respectively, and knowing that the pressure gradient is proportional to the g-level, it follows that η_m equals η_p, which means that the centrifugal field does not change the status of the elevation potential gradient. Using Poiseuille's equation of capillary flow, Lord 38 checked capillary flow under different situations in a geotechnical centrifuge, and the results show that the scaling laws for the time and dimension of one- or two-dimensional unsaturated flow may be correct. Therefore, the comments of Goforth et al. 19 may be incorrect. In addition, it should be highlighted that the numerical verification shown in Fig. 9 proves that the scale factors for time, seepage velocity, and dimension are correct without assuming that the centrifuge acceleration is uniformly distributed, and, unlike the work of Dell'Avanzi et al. 16, this method is not limited to steady-state conditions.
Furthermore, slight differences between the lower part of the centrifugal system and the prototype can be noticed in Fig. 9. The unsaturated flow in the lower part of the centrifugal system lags behind the prototype, as the matric suctions in this part are smaller than in the prototype. This happens because the method handles spatial distance with a single N value for the whole centrifugal system. According to Eqs. (3) and (13), the g-level at the lower part of the centrifugal system is actually larger than N, which causes the size of the lower part to be reduced relative to the real spatial distance; for the same reason, the size of the upper part is enlarged relative to the real space. The resulting error (as shown in the inset of Fig. 8) is still within an acceptably small range. Two methods could be applied to reduce the error caused by the uneven distribution of acceleration: conducting centrifugal experiments with large N values, or treating the spatial distance with the exact g-level values at the different points of the centrifugal system.
It should be noted that the parameters used in the numerical model (namely, those presented in Tables 1, 2 and 3) were assumed constant as the g-level changed. On this basis, it can be concluded that centrifugal modeling is theoretically feasible for modeling unsaturated flow. However, model simulations and data collected from physical experiments may not match 20. In real physical systems, various factors can introduce large experimental uncertainty into unsaturated flow, such as poorly controlled boundaries, the emergence of compaction, performance limitations of the installed sensors, variations in the preparation of soil samples, etc. As seen in Case 3, the improperly controlled bottom boundary produced anomalous data at 80 min (prototype time), and the limitations of the tensiometers resulted in nearly failed experiments at 40 g. Samples with strong plasticity (such as clay) tend to be compacted by the centrifugal acceleration, which changes the soil water characteristics, so the experimental data can vary considerably. With the modeling method used by Poulose et al. 20, several experiments at different g-levels need to be completed for a given prototype, and differences between soil preparations could be one reason for the variations in the data reported in their study.
Conclusions
Centrifugal modeling has been used in the past to investigate flow and solute transport behavior in both saturated and unsaturated soil. However, it has been questioned whether this approach is suitable for application under unsaturated conditions. In the present study, a numerical model for one-dimensional unsaturated flow in centrifugal systems was developed and verified using four published benchmark datasets.
The newly developed numerical model was able to recreate the four benchmark datasets with reasonable accuracy. Therefore, it is suggested that the numerical model can be used to predict unsaturated flow processes in centrifugal systems when certain criteria are met.
The numerical model was able to close the water budget when the spatial and temporal intervals are sufficiently small. The model is more sensitive to spatial discretization than to temporal discretization. Therefore, a finer spatial discretization (e.g., 1 mm) is advised to ensure high-quality simulation.
It is feasible to study flow processes occurring within the unsaturated zone using centrifugal experiments. The concerns raised by Goforth et al. 19 may have little impact, since the matric suction gradient is directly proportional to the gravity level N, and centrifugation does not strengthen the role of the elevation potential gradient in driving the unsaturated flow. The similarity ratios of seepage velocity (N), dimensions (1/N) and time (1/N²) proposed in previous studies are indeed reasonable, and these ratios are used effectively in the examples discussed in this study. The uneven distribution of acceleration in the simulation of a centrifugal experiment can cause the water arriving at the bottom of the soil column to lag behind the prototype. When using the same centrifugal equipment to simulate the same prototype, the use of a higher centrifugal acceleration can reduce this type of lag.
Data availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
|
2022-08-14T06:17:35.272Z
|
2022-08-12T00:00:00.000
|
{
"year": 2022,
"sha1": "18f7b927c80e30689c6acd98dc97f04d6bb9bdba",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "de345964b0482c63dbf5c51e73af1f1d8d5793d6",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
207772990
|
pes2o/s2orc
|
v3-fos-license
|
DC-S3GD: Delay-Compensated Stale-Synchronous SGD for Large-Scale Decentralized Neural Network Training
Data parallelism has become the de facto standard for training Deep Neural Networks on multiple processing units. In this work we propose DC-S3GD, a decentralized (without Parameter Server) stale-synchronous version of the Delay-Compensated Asynchronous Stochastic Gradient Descent (DC-ASGD) algorithm. In our approach, we allow for the overlap of computation and communication, and compensate the inherent error with a first-order correction of the gradients. We prove the effectiveness of our approach by training Convolutional Neural Networks with large batches and achieving state-of-the-art results.
I. INTRODUCTION
Training Deep Neural Networks (DNNs) is a time- and resource-consuming problem. For example, to train a DNN to state-of-the-art accuracy on a single processing unit, the total time needed is on the order of days, or even weeks [16]. For this reason, several algorithms have been developed in recent years to allow users to perform parallel or distributed training of DNNs [7]. With the correct use of parallelism, training times can be reduced to hours, or even minutes [16], [6], [12], [21]. The reader interested in a broad survey of Deep Learning algorithms is referred to [2], which is also a great resource for the taxonomy and classification of different parallel training strategies.
The most widely adopted type of training parallelism, and the one we will employ in this work, is called data parallelism: the DNN is replicated on different processing units, each replica is trained on a subset of the training data set, and updates (usually in the form of gradients) are regularly aggregated to create a single update, which is then applied to all the DNN replicas. The way updates are aggregated differs across algorithms in terms of communication scheme, distribution of roles among processing units, and message frequency and content. We will discuss different approaches and architectures in Section II.
In Section III we describe our approach, which constitutes a modification to the DC-ASGD algorithm proposed in [24]. Our approach shows promising results for Convolutional Neural Networks (CNNs): in Section IV we report the results obtained when training different networks on the well-known ImageNet-1k data set, which has imposed itself as the standard benchmark for CNN performance assessment.
In Section V we propose possible extensions to the presented algorithm, and outline what advantages they could bring.
II. RELATED WORK
With the growing availability of parallel systems, such as clusters and supercomputers, both as on-premises and cloud solutions, the demand for fast, reliable, and efficient parallel training schemes has been fueling research in the Artificial Intelligence community [2], [6], [12], [21], [14]. The most widespread technique, data parallelism, can be applied to many different areas, such as image classification, Reinforcement Learning, or Natural Language Processing [15]. When data-parallel training has to be scaled to large systems, convergence problems and loss of generalization arise from the fact that the global batch size becomes very large [13], [19], [15].
As suggested in [2], data-parallel training methods can be classified according to two independent aspects: synchronicity (or model consistency across different processes) and communication topology (centralized or decentralized). Synchronous methods ensure that after each training iteration each process (or worker) holds a copy of exactly the same weights; asynchronous methods allow workers to get out of date, receiving updated weights only when they request them (usually after having computed a local update). Centralized communication schemes imply the existence of so-called Parameter Servers, processes which have the task of collecting weight gradients from workers and sending back updated weights; in decentralized schemes, each worker participates in collective communications to compute the weight updates, e.g., via MPI all-reduce calls.
A. Advantages and Disadvantages of Different Training Schemes
Historically, when the first major Deep Learning toolkits (such as TensorFlow [1] or MXNet [3]) started offering the possibility of parallel training, they did so by implementing techniques with centralized communication, i.e., with Parameter Servers (PSs). Like every centralized communication scheme, the PS paradigm does not scale efficiently: with a growing number of workers, PSs become bottlenecks, and communication becomes of the many-to-few type. Nevertheless, asynchronous methods often use this paradigm, as it allows workers to send updates independently, without waiting for other workers to complete processing their batches. The most straightforward algorithm for this setting is clearly the Asynchronous SGD, which has been improved over the years in many respects [20], [10], [24], but its core mechanism can be summarized as follows:
• at the beginning of the computation, every worker receives an exact copy of the weights from the PSs;
• every worker processes a mini-batch and sends the computed gradients to the PSs, which apply them to their local copy of the weights and send the updated weights back to the worker which initiated the communication;
• the worker proceeds to process another batch, while the PSs wait for gradients from other workers.
The problem (and the subject of the mentioned improvements) with this approach resides in the fact that after the first weight update, the weights on the PSs and on the workers will be different (except for the worker who communicated with the PSs last). This in turn creates an inconsistency between the weights used to compute the gradient (on the worker's side) and the weights which will be updated with that gradient on the PSs. This problem is often referred to as gradient staleness. Clearly, the larger the difference between the weights, the less accurate the update will be. If we assume that all N workers have approximately the same processing speed, we can deduce that after N iterations the PSs will receive gradients which are, on average, out of date by N steps. This clearly has a large negative impact on convergence when N is large. We will focus on one particular attempt which has been made to limit this effect, derived in the DC-ASGD algorithm. The method computes an approximated first-order correction to modify the gradients received by the PSs. But even though this approach mitigates the problem, it can only work when the distance between the PSs' and the worker's weights is relatively small.
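A toy, single-process illustration of this staleness mechanism, assuming a simple quadratic loss l(w) = ||w||²/2 (so the gradient is w itself) and equally fast workers served in turn:

import numpy as np

rng = np.random.default_rng(0)
N, eta, dim = 4, 0.1, 8
w_ps = rng.standard_normal(dim)                # Parameter-Server copy
w_workers = [w_ps.copy() for _ in range(N)]    # each worker's (stale) copy

for step in range(100):
    i = step % N                               # workers take turns
    g = w_workers[i]                           # gradient at the *stale* weights
    # by now w_ps has moved on by up to N updates since worker i last synced
    w_ps = w_ps - eta * g                      # PS applies the stale gradient
    w_workers[i] = w_ps.copy()                 # worker fetches fresh weights

print("final loss:", 0.5 * float(w_ps @ w_ps))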
In recent years, large-scale training has been obtained by using different flavors of the most classic synchronous scheme, Synchronous SGD, in conjunction with decentralized communication. Again, even though many variants exist, the core mechanism is easy to summarize:
• at the beginning of the computation, every worker receives an exact copy of the weights;
• when a worker has finished processing its mini-batch, it participates in a blocking all-reduce operation, in which it shares the gradient it computed with all other workers;
• at the end of the all-reduce, all workers possess the sum of the computed gradients and can use it to compute the same weight update;
• every worker proceeds to process another batch.
This scheme has been thoroughly explored and has only one drawback, which resides in the blocking nature of the all-reduce operation: all workers have to wait for the slowest one (sometimes referred to as the straggler) before initiating the communication, and then they have to wait for the end of the communication to compute the update. Decentralized communication can also be used for a particular form of asynchronous methods known as stale-synchronous. In stale-synchronous methods, workers are allowed to go out of sync by a maximum number of iterations (processed mini-batches) before waiting for the other ones to initiate communication. This maximum number of iterations is called the maximum staleness.
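For concreteness, one iteration of this decentralized Synchronous SGD can be sketched with mpi4py; a stand-in quadratic gradient replaces the forward/backward pass (run with, e.g., mpirun -n 4 python ssgd.py):

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
eta, dim = 0.1, 8
w = np.random.default_rng(0).standard_normal(dim)   # same seed: identical init

for step in range(10):
    # stand-in for backprop on a worker-specific mini-batch
    g_local = w + 0.01 * np.random.default_rng(1000 * comm.rank + step).standard_normal(dim)
    g_sum = np.empty_like(g_local)
    comm.Allreduce(g_local, g_sum, op=MPI.SUM)   # blocking: everyone waits here
    w -= eta * g_sum / comm.size                 # identical update on all workers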
As we will see in the next section, our method is a stale-synchronous, decentralized version of DC-ASGD, and in this work we will only focus on the version with a maximum staleness of one.
III. ALGORITHM
Our algorithm is similar to the DC-ASGD method proposed in [24], with three main differences:
• it eliminates the need for a Parameter Server in favor of a decentralized communication scheme;
• it is stale-synchronous, and not fully asynchronous;
• weights computed by different workers are averaged.
In the following sections, we will explain why these differences result in a novel and improved approach compared to existing algorithms.
A. Problem Setting
We quickly review the problem of data-parallel training of a DNN. For this work, we will focus on DNNs trained as multi-dimensional classifiers, where the input is a sample, denoted by x. The goal of training is to find a set of network weights w which minimizes a loss function for a set of samples X,

L(w) = (1/|X|) Σ_{x∈X} l(x, w),   (1)

where l(x, w) is the per-sample classification loss function (cross-entropy loss in our case). Instead of reporting the final value of the loss function, it is usual to derive a figure of merit, which has the benefit of being more understandable by humans and applicable to different loss functions. In our case, we will use the top-1 error rate, which is simply the ratio of misclassified samples to the total number of samples. We will measure both the error obtained on the training data set and on the validation data set. We will employ a common version of the classic Mini-batch Stochastic Gradient Descent, usually referred to as Stochastic Gradient Descent (SGD), which solves the above-mentioned minimization problem in an iterative way, following

w^{t+1} = w^t − (η/|B|) Σ_{x∈B} ∇_w l(x, w^t),   (2)

where B is a mini-batch, i.e., a subset of the training data set, and |B| is the mini-batch size, which has been proven to be an important factor determining how easily a network can be trained. We will adopt a simple version of the SGD algorithm, namely the so-called momentum SGD, in which a momentum term [18] ensures that updates are damped and allows for faster learning [18].
In the synchronous parallel version, SGD works exactly in the same way, with the only difference that each worker computes gradients locally on the mini-batch it processes, and then shares them with other workers by means of an all-reduce call.
B. DC-ASGD
Since our algorithm is a variation of DC-ASGD, we will briefly outline its most important feature, that is, the delay compensation. As illustrated in Section II-A, gradient staleness reduces the convergence rate because of the difference between the weights held by the worker and those held by the PSs. In DC-ASGD, the gradients are modified to take this difference into account. Basically, the idea is to apply a first-order correction to the gradients, so that they are approximately equal to those which would have been computed using the PSs' copy of the weights. If the Hessian matrix computed at w_i, here denoted by H_i, were known, one could compute the corrected gradients as

g̃ = g(w_i) + H_i (w_PS − w_i) + O((w_PS − w_i)²) · I_n,   (3)

where w_i are the weights used by the i-th worker, w_PS are those held by the PS, and I_n is a vector with all n components equal to one, n being the dimension of the weights. The quadratic error term O((w_PS − w_i)²) · I_n comes directly from the Taylor expansion used to derive this result, and we will denote it as R for the rest of this work. In principle, the Hessian matrix could be computed analytically, but the product of its approximation (known as the pseudo-Hessian) H with a vector v is computationally convenient to compute as

H v ≈ g(w_i) ⊙ g(w_i) ⊙ v,   (4)

where ⊙ represents the Hadamard (or component-wise) product. Thus, we can rewrite (3) as

g̃ = g(w_i) + g(w_i) ⊙ g(w_i) ⊙ (w_PS − w_i) + R.   (5)

Removing the error term and adding a variance control parameter λ ∈ R as defined in [24], we obtain the final form of the equation as

g̃ = g(w_i) + λ g(w_i) ⊙ g(w_i) ⊙ (w_PS − w_i),   (6)

which is the one we base our algorithm on.
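In code, the delay compensation of Eq. (6) is essentially a one-liner; a minimal NumPy sketch, with the elementwise product standing in for ⊙:

import numpy as np

def delay_compensated_gradient(g_i, w_i, w_ps, lam):
    # Eq. (6): correct the stale gradient g(w_i) toward g(w_PS) using the
    # diagonal outer-product (pseudo-Hessian) approximation of Eq. (4).
    g_i, w_i, w_ps = map(np.asarray, (g_i, w_i, w_ps))
    return g_i + lam * g_i * g_i * (w_ps - w_i)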
C. DC-S3GD
In our decentralized setting, there is no PS, but since we implement a stale-synchronous method, workers can be expected to be out of sync. In fact, the main idea of our approach is to allow computation and communication to run in parallel, thus diminishing communication's impact on the total training run time. To allow for this, we make use of the non-blocking all-reduce function which is part of the MPI standard, i.e., MPI_Iallreduce.
We now describe our method, which is also illustrated in Algorithm 1. We stress the fact that all processing units will act as identical workers, only fed with different data. The only hyper-parameters we will need to set are the learning rate η, the momentum µ, and the variance control parameter λ.
At the beginning of the computation, each worker receives the same set of initial weights w̄^0 and a different mini-batch, which it processes to obtain a set of gradients

g_i^0 = ∇l(w̄^0),

where the bar over w stresses the fact that the same value is held by all workers, the subscript i denotes the worker index, and the superscript 0 denotes the iteration. We will drop the superscripts when possible, to keep the notation concise.
Based on g_i, the worker uses a function U(g_i, η, µ) to compute the update to its local weights. We denote this update by ∆w_i^t; all workers share their local update with the others by starting a non-blocking all-reduce operation.
While the all-reduce operation is progressing, the worker updates its local copy of the weights,

w_i = w_i + ∆w_i,   (7)

and proceeds to process the next mini-batch, in order to compute new gradients g_i. After having processed the mini-batch, all workers wait for the all-reduce operation to complete. In our implementation, completion is checked by means of a call to MPI_Wait. After completion, each worker possesses an identical copy of ∆w, that is, the sum of all workers' updates of the previous iteration.
At this point, we can compute the average of the weights held by the workers as

w̄ = (1/N) Σ_{i=1}^{N} w_i,   (8)

which each worker can obtain locally from ∆w, since all w_i differ from the previous average by ∆w_i. Notice that in principle there is no guarantee that the mean value of the weights is actually meaningful, but studies such as [8] suggest that averaging different weights can lead to better minima. The Euclidean distance from the weights possessed by the i-th worker to the average weights is

d_i = ||w̄ − w_i||.   (9)

Knowing this distance, each worker could replace its own copy of the weights with the average ones, but this is actually not needed. More importantly, by using a modified version of (6), the local gradient can be corrected and used to compute a local update that can be applied to the average weights. The correction equation becomes

g̃_i = g_i + λ g_i ⊙ g_i ⊙ (w̄ − w_i),   (10)

and thus the new update can be computed as

∆w_i = U(g̃_i, η, µ)   (11)

and immediately shared with the other workers by means of a new non-blocking all-reduce call. Each worker will update its weights following

w_i = w̄ + ∆w_i,   (12)

where we first move the weights to the average value and update them as a single operation. At this point, each worker can start a new iteration, by proceeding to process the next mini-batch. The full procedure is summarized in Algorithm 1.

Algorithm 1: DC-S3GD for N workers
Input: step η, momentum µ, variance control parameter λ_0
Initialize: weights w_i = w̄; g_i = ∇l(w_i); ∆w_i = U(g_i, η, µ); w_i = w_i + ∆w_i
for t < max_iterations do
    MPI_Iallreduce(∆w_i)                         // non-blocking
    g_i = ∇l(w_i)                                // overlapped with communication
    ∆w = MPI_Wait()                              // blocking
    w̄ = w̄ + ∆w/N                                // Eq. (8)
    g_i = g_i + λ_t g_i ⊙ g_i ⊙ (w̄ − w_i)       // Eq. (10)
    ∆w_i = U(g_i, η, µ)                          // Eq. (11)
    w_i = w̄ + ∆w_i                               // Eq. (12)
end
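A condensed mpi4py rendering of this loop follows; momentum is omitted for brevity, so U(g, η) reduces to −η g, and local_gradient is a stand-in for the forward/backward pass on a local mini-batch:

from mpi4py import MPI
import numpy as np

comm, N = MPI.COMM_WORLD, MPI.COMM_WORLD.size
eta, lam, dim, max_iterations = 0.1, 0.2, 8, 10

def local_gradient(w):            # stand-in for backprop on a local mini-batch
    return w

w_bar = np.random.default_rng(0).standard_normal(dim)  # identical on all ranks
g = local_gradient(w_bar)
dw = -eta * g                     # U(g, eta) without momentum
w = w_bar + dw

for t in range(max_iterations):
    dw_sum = np.empty_like(dw)
    req = comm.Iallreduce(dw, dw_sum, op=MPI.SUM)   # non-blocking
    g = local_gradient(w)                           # overlaps communication
    req.Wait()                                      # blocking
    w_bar = w_bar + dw_sum / N                      # average weights, Eq. (8)
    g = g + lam * g * g * (w_bar - w)               # correction, Eq. (10)
    dw = -eta * g                                   # new local update, Eq. (11)
    w = w_bar + dw                                  # Eq. (12)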
A description of how λ i is computed at each iteration is given in IV-A.
D. Advantages and Disadvantages of the proposed Method
We compare the proposed approach to two methods described in II-A, SSGD and DC-ASGD.
1) Comparison to SSGD: The main advantage over SSGD resides in the fact that communication costs are (at least partially) hidden in our approach. We can approximate the time taken by SSGD to complete an iteration over a mini-batch B on N nodes as

t_SSGD = t_C(B) + t_ARed(g, N),   (13)

where t_C(B) is the time it takes a worker to process the mini-batch (including the feed-forward and backpropagation phases), and t_ARed(g, N) is the time taken by the all-reduce call to reduce the gradients g across all nodes. For our method, a similar approximation can be made, and it yields

t_DC-S3GD = max(t_C(B), t_ARed(g, N)),   (14)

which is an obvious consequence of the fact that the computation and all-reduce operations run concurrently in our setting.
2) Comparison to DC-ASGD: Similarly to the results derived in the previous section, we can define an approximation to the run-time of a DC-ASGD iteration, denoted by t_DC-ASGD, as

t_DC-ASGD = t_C(B) + t_P2P(g, N),   (15)

where t_P2P(g, N) is the total time needed by a worker to push its gradients to the PS and obtain the updated weights. Clearly, this time also includes the time spent by the worker waiting for the PS to receive the gradients. Therefore, even though it is true that in DC-ASGD fast workers do not have to wait for stragglers, it is also true that the run-time depends heavily on the network and on the capability of the PSs. As mentioned in II-A, DC-ASGD's convergence degrades for increasing numbers of workers. This is because the Euclidean distance between the workers' and the PSs' weights, ||w_PS − w_i||, is proportional to N. In our method, the distance used to compute the correction is that between the workers' and the average weights, which we expect to grow more slowly with N.
IV. EXPERIMENTS
In this section, we first describe how we set training hyper-parameters, and then we report results obtained by training four standard CNNs on the ImageNet-1k data set.
A. Hyper-parameter Settings and Update Schedules
As mentioned in III-A, to train CNNs we employed a data-parallel version of SGD with momentum. For each network, we set the momentum µ to the value used to obtain the state-of-the-art results, and we keep it constant for the whole training, which consisted of 90 full epochs. For the learning rate η, we first define the theoretical learning rate as

η_theo = N η_sn,   (16)

where N is the number of workers, as usual, and η_sn is the learning rate for single-node training: for the ResNet cases, we used as reference a learning rate of 0.1 for a batch size of 256 samples. This is standard practice, and it seems to give stable results in our setting. For VGG, the base learning rate was 0.02. Another standard approach is to define a learning rate schedule. In our case, we adopted an iteration-dependent (not epoch-dependent) schedule with a linear warm-up and a linear decrease. The length of the warm-up phase was initially defined as half of the total iterations, but we found empirically that after 15 epochs the training error would reach a plateau (for all batch sizes up to 64k samples); we therefore stopped the warm-up phase at the learning rate reached at that point and initiated a longer linear decrease phase, which ran until the end of the training. For the case of 128k samples, the plateau was reached after 20 epochs. Identification of the plateau was done by direct observation, but we believe it could easily be automated, e.g., by checking for a reduction of the training error every five epochs during the warm-up phase.
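The schedule just described can be written compactly; a sketch, where warmup_planned is the initially planned warm-up length (half the total iterations) and warmup_end the iteration at which the plateau was detected (both names are ours):

def learning_rate(it, eta_sn, N, warmup_planned, warmup_end, max_iter):
    eta_theo = N * eta_sn                       # Eq. (16)
    if it < warmup_end:
        return eta_theo * it / warmup_planned   # linear warm-up, cut short
    # value actually reached, e.g. one third of eta_theo for a 15-epoch
    # warm-up out of 45 planned (90 epochs / 2)
    eta_peak = eta_theo * warmup_end / warmup_planned
    return eta_peak * (max_iter - it) / (max_iter - warmup_end)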
To reduce over-fitting, weight decay was applied to all weights, with the exception of those belonging to batch normalization layers. This technique has given the best results, and the reasoning behind it can be found in [9]. Since this kind of regularization reduces the weights by a constant fraction, when the learning rate is very small (as can happen in our case, when t is very close to 0 or to max_iterations) the weight decay can become larger than the update, thereby blocking convergence. To mitigate this problem, we decided to apply the same schedule we used for the learning rate also to the weight decay parameter. To compensate for the smaller effective regularization, we also multiply the weight decay hyper-parameter by a constant factor k. We find that k = 2.3 gives us the best results. This factor was applied to the weight decay value usually adopted in the literature, namely 0.0001 for the ResNet topologies and VGG-16.
By stopping the warm-up phase early, we reach only a small fraction of the maximum step length (e.g., one third for a 15-epoch warm-up), and we note that the pseudo-Hessian correction term is then very small compared to the computed gradients. We investigated possible correction re-scaling techniques, and we found that the best choice added 0.5% to the validation accuracy when the step reached the end of the warm-up phase. We think that this correction term would have a larger influence for larger learning rates. The parameter λ_i, which is used to control the variance introduced by the correction step [24], was empirically found to give the best results when dynamically set at each iteration with λ_0 = 0.2.
B. Hardware and Software Configuration
We ran our experiments on a Cray XC system. Every node was equipped with two 24-core Intel Skylake processors with a clock speed of 2.4 GHz, and nodes were connected through the Cray Aries interconnect with a dragonfly topology. The use of CPUs only, in contrast with the more standard use of a GPU cluster, allowed us to explore very large local mini-batch sizes (up to 1024 samples per local mini-batch). As a toolkit, we used a modified version of MXNet [4], in conjunction with the Intel MKL-DNN libraries [5]. We chose MXNet because it offered an easy way to implement our algorithm: we modified the original Key-Value Store (KV Store), which is used to update weights after each iteration, so that it included the needed mechanics and MPI code. The MPI implementation was Cray MPICH. The source code can be made available upon direct request to the author.
C. Results
We report the results obtained by training ResNet-50, ResNet-101, ResNet-152, and VGG-16 on the ImageNet-1k data set.

1) ResNet-50: As training ResNet-50 has become a reference benchmark, we investigated the performance of our method on this problem for different settings. To maximize CPU usage, and to exploit the large memory available on CPU nodes, we use a local mini-batch size of 512 or 1024 samples. From the achieved accuracy values, shown in Table I, it can be seen that we manage to reach state-of-the-art accuracy on up to 64 nodes with a batch size of 32k samples: the total training time, not considering network setup, is 503 minutes. Keeping the number of nodes at 64 and using a larger batch size results in a slight loss of accuracy and a speed-up of 10%. Running the parallel training on 128 nodes, we still reach a reasonable accuracy for a total mini-batch size of 64k samples, in 260 minutes: compared to [16], where the target accuracy was 74.9%, we clearly outperform the best results obtained on CPUs, even accounting for the difference between total execution time and training time, which never exceeded 10 minutes. From the reported results, we can see that employing a larger batch size on 128 nodes results in a large loss of accuracy. In Figure 1, the top-1 error for full training of the ResNet-50 networks is shown. For each combination of node count and aggregate batch size, we plot the results of the training run which reached the lowest validation error.
2) Other Architectures: Table I lists the results we obtained training other CNNs. It is clear that we are able to reach state-of-the-art accuracy for all ResNet topologies. More importantly, our method is also capable of training VGG-16 with a mini-batch size of 16k samples, even though this is known to be a difficult task [22].
In order to fairly assess our method's performance, we did not adapt the hyper-parameters to the different topologies. The only tuning we performed was to extend the warm-up phase to 20 epochs (thus, two ninths of the total training) when running on 64 or 128 nodes.
V. CONCLUSIONS
In this work, we proposed a new algorithm for distributed training, named DC-S3GD, which allows for the overlap of computation and communication by averaging in the parameter space (weights) and applying a first-order correction to gradients. We showed that this approach can achieve state-of-the-art results for parallel DL training.
Many aspects could be improved, for example, more sophisticated methods, like LARS [22], or Adam [11], could be used as local optimizers.
Another possible enhancement would be to allow more out-of-sync minimization steps to be taken by the local optimizers, and to see how this influences performance in terms of time-to-accuracy.
To reduce the error introduced in the correction step, the pseudo-Hessian could be replaced by an analytical version of the Hessian matrix.
In terms of maximum achieved accuracy, we ran some preliminary tests with a larger number of iterations, and in some cases extending the training to 100 or 120 epochs could improve the accuracy by 0.2-0.8%, even for the case of 128k samples per batch.
We believe this approach could also be applied to train neural networks of other types, such as those used for Natural Language Processing, or Reinforcement Learning, if a data-parallel scheme can be adopted.
|
2019-11-06T17:54:56.000Z
|
2019-11-01T00:00:00.000
|
{
"year": 2019,
"sha1": "8553e53c36797106cea4e24d0a88a7c6348b39f2",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1911.02516",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8553e53c36797106cea4e24d0a88a7c6348b39f2",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
}
|
139904354
|
pes2o/s2orc
|
v3-fos-license
|
Dimples and heat transfer efficiency
Dimple surface is one of the surface textures that are widely studied today. In general, dimples are already well known from golf ball aerodynamics. In the case of golf balls, the application of dimples is a special form of surface roughness, which shifts the typical dropdown of the flow resistance for blunt bodies into the low Reynolds number range. Initially, the idea was to use dimples for drag reduction.
Introduction
The surface functions are also controlled by the micro/nano-scale surface structures manufactured artificially such as micro-grooves and micro-dimple for industrial uses. This is because surface texture has emerged as a viable option for surface engineering, resulting in significant improvements in friction coefficient, wear resistance, load capacity, etc.
The dimple is a well-known device for significantly enhancing the heat transfer rate (generally measured in terms of the Nusselt number) with possibly minimal pressure drop penalties. Belenkiy et al. 1 showed a 150% enhancement of the heat transfer rate compared to smooth surfaces, while Chyu et al. 2 reported that the heat transfer rate is about 2.5 times higher than that of a smooth surface, with pressure losses about half those produced by conventional rib turbulators.
The enhancement of heat transfer efficiency is needed for a variety of practical applications such as cooling electronic components, heat exchangers, cooling along the micro-fluidic passages, in the internal cooling passages of turbine blades, and even biomedical devices in continuous operation.
This article presents a mini-review of the application of dimple structures for increasing heat transfer efficiency, based on previous research work.
Previous study regarding the application of dimple structure in heat transfer
A lot of research has been carried out on various heat transfer augmenters such as pin fins, louvered fins, offset strip fins, slit fins, ribs, protrusions and dimples in order to improve the thermal efficiency of heat exchangers. However, over the past few years, dimples have received much more attention for enhancing heat transfer in internal cooling passages. This is because previous research has shown that dimples can enhance heat transfer in confined channels with a relatively low pressure loss penalty compared to other types of augmented heat transfer devices. Ligrani et al. 3 compared several types of heat transfer augmentation techniques and concluded that the dimpled surface shows a high heat transfer capacity with a relatively low pressure loss penalty compared to the other available heat transfer augmenters. According to Ligrani et al., the central and edge vortex pairs which are periodically shed from each dimple (as well as the resulting shear layer reattachments, boundary layer re-initialisations, and induced local unsteadiness) are key ingredients in augmenting local and spatially-averaged turbulence transport levels and the associated surface heat transfer rates. Unlike protruding turbulators, heat transfer on dimpled surfaces is enhanced because the vortex structures promote mixing, drawing ''cold'' fluid from outside the thermal boundary layer into contact with the wall and thereby enhancing convective heat transfer. Therefore, much research has been conducted to determine the pressure loss and heat-transfer characteristics caused by the dimpled surface. Research on dimpled surfaces is summarised below.
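A common way to make ''high heat transfer with a low pressure loss penalty'' quantitative is a thermal performance factor that weighs the Nusselt number gain against the friction factor penalty at equal pumping power. The familiar form below is only one of several variants used in such comparisons, so treat it as illustrative:

def thermal_performance_factor(Nu, Nu0, f, f0):
    # (Nu/Nu0) / (f/f0)**(1/3): values > 1 mean the surface texture more
    # than pays for its pressure loss penalty; Nu0, f0 are smooth-channel values
    return (Nu / Nu0) / (f / f0) ** (1.0 / 3.0)

# e.g. a dimpled channel with Nu/Nu0 = 2.5 and f/f0 = 1.5 gives ~2.18
print(thermal_performance_factor(2.5, 1.0, 1.5, 1.0))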
Moon et al. 4 experimented on the enhancement of heat transfer using a convex-patterned surface and concluded that it was thermally more effective at relatively low Reynolds numbers than a smooth channel. Meanwhile, Silva et al. 5,6 investigated the flow structure and heat transfer enhancement of a rectangular mini-channel with dimples on one side of the wall; the results showed that the drag coefficient and heat transfer were increased to some extent. Wei et al. 7 & Lan et al. 8 studied the flow and heat transfer enhancement of a microchannel with dimples. However, the misuse of a boundary condition may lead to some discrepancies between the simulated results and the real flow in the experiment conducted by Wei et al. The studies showed that the application of dimples and protrusions in micro-channel heat sinks can achieve a low pressure penalty and a higher heat transfer coefficient in the laminar regime 9. Recently, Suresh et al. 10,11 experimentally studied the flow and heat transfer of CuO-water nanofluids in a dimpled channel in the laminar and turbulent regimes. The results show that the use of nanofluids in a helically dimpled tube increases the heat transfer rate. Experiments done by Griffith et al. 12 and Kim et al. 13 in rotating rectangular channels show that rotation can increase the heat transfer coefficient on dimpled surfaces, especially on the trailing dimpled surface, making it higher than that on the leading surface. Mahmood & Ligrani 14 conducted research to expose the characteristics of the flow structure induced by a dimpled surface on one side wall of a channel with three different channel heights (H/d = 0.25, 0.50, and 1.00) in the range Re_H = 600-11,000. The flow visualization results showed the shedding of vortical structures from the dimple cavity. The maximum Nusselt number appeared on the dimple rim and on the flat surface adjacent to the downstream of each dimple. The minimum heat transfer was observed in the flow-recirculation region inside the dimple. Through experiments, Burgess et al. 15 showed the influence of the dimple depth and the dimple imprint diameter on heat transfer, and found that the local and spatially-averaged Nusselt number augmentations increase with increasing dimple depth, for Reynolds numbers from 12,000 to 70,000. Doo et al. 16 conducted a parametric study on flows passing through a channel with newly designed surface shapes comprising a combination of dimple and riblet. The superior thermo-aerodynamic performance, assessed in terms of the volume goodness factor, was predicted for the riblet-mounted dimple case with a riblet angle of 60°.
Experiments by Xiao et al. 17 show that the thermal performance parameters were higher in a heat exchanger with a dimpled bottom and a smooth top than in one with a dimpled bottom and protrusions on top in the laminar region; they proposed friction factor ratio and Nusselt number ratio correlations for a heat exchanger with a dimpled bottom and a smooth top. Burgess et al. 15 and Ligrani 18 experimentally investigated the characteristics of the friction factor and the Nusselt number with respect to dimple depth. Both proposed a Nusselt number ratio correlation as a function of the dimple print diameter and dimple depth in a heat exchanger with dimples on the bottom and a smooth top.
Shape optimization studies for dimpled plate heat exchangers have also been conducted. 19,20 Choi & Kim 19 and Shin 20 carried out multi-objective optimization for a heat exchanger with a dimpled bottom and a smooth top, using approximate models obtained via the response surface method and the Kriging method, respectively. Kim et al. 21 generated a Pareto optimal solution in staggered elliptic dimpled channels, using an evolutionary multi-objective algorithm with the Kriging method.
Conclusion
Dimpled surfaces can enhance the heat transfer rate by up to 150% compared to smooth surfaces, and in certain cases the heat transfer rate is about 2.5 times higher than that of a smooth surface.
|
2019-04-30T13:08:34.226Z
|
2018-10-31T00:00:00.000
|
{
"year": 2018,
"sha1": "85794fed919c07e356db70aec8e23cc1dc5fd5e1",
"oa_license": "CCBYNC",
"oa_url": "https://medcraveonline.com/MSEIJ/MSEIJ-02-00051.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "509df198fd2e899ace5438daa598e39d437efe3f",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
55916001
|
pes2o/s2orc
|
v3-fos-license
|
On ergodic least-squares estimators of the generalized diffusion coefficient for fractional Brownian motion
We analyse a class of estimators of the generalized diffusion coefficient for fractional Brownian motion $B_t$ of known Hurst index $H$, based on weighted functionals of the single time square displacement. We show that for a certain choice of the weight function these functionals possess an ergodic property and thus provide the true, ensemble-averaged, generalized diffusion coefficient to any necessary precision from a single trajectory data, but at expense of a progressively higher experimental resolution. Convergence is fastest around $H\simeq0.30$, a value in the subdiffusive regime.
Single molecule spectroscopy techniques allow the tracking of single particles over a wide range of time scales [1][2][3]. In complex media such as living cells, a number of recent studies have reported evidence for subdiffusive transport of particles like proteins [4], viruses [5], chromosome monomers [6], mRNA [7] or lipid granules [8]. Subdiffusion is typically characterized by a sublinear growth with time of the mean square displacement (MSD), E(B_t^2) = K t^ν with ν < 1, where B_t is the particle position at time t, E denotes the ensemble average and K is a generalized diffusivity.
A growing body of single trajectory studies suggests that fractional Brownian motion (fBm), among the variety of stochastic processes that produce subdiffusion, may be a model particularly relevant to subcellular transport. FBm is a Gaussian continuous-time random process with stationary increments and is characterized by the so-called Hurst index H = ν/2. If H < 1/2, trajectories are subdiffusive, with increments that are negatively and long-range correlated [9]. Such correlations were observed for subdiffusing mRNA molecules [10] and for RNA-protein complexes or chromosomal loci [4] within E. coli cells. Similarly, fBm can be used to describe the dispersion of apoferritin proteins in crowded dextran solutions [11] and of lipid molecules in lipid bilayers [12].
Whereas the determination of an anomalous exponent from data has been extensively studied, as it demonstrates deviation from standard Brownian motion (BM), the problem of estimating the generalized diffusion constant K has received much less attention. It appears that K is much more sensitive than ν to many biological factors, and its precise determination can potentially yield valuable information about the kinetics of transcription, translation and other physico-biological processes. The generalized diffusivity of RNA molecules in bacteria is greatly affected (either positively or negatively) by perturbations, for instance treatment with antibiotic drugs, which however have a negligible effect on ν [4]. Likewise, the coefficient K of lipids in membranes is strongly reduced by small cholesterol concentrations, whereas ν remains unchanged [12]. In the context of search problems, a particle following a subdiffusive fBm actually explores 3d space more compactly than BM and can have a higher probability of eventually encountering a nearby target [13]. The larger the value of K, the faster this local exploration.
In this paper, generalizing our previous results for standard BM [14], we present a method to estimate the ensemble averaged diffusivity K from the analysis of single fBm trajectories of a priori known anomalous exponent. Estimating diffusion constants from data is not an easy task when trajectories are few and ensemble averages cannot be performed. BM and fBm are ergodic processes and time averages tend to ensemble averages, but convergence can be slow [15]. For finite trajectories of finite resolution, variations by orders of magnitude have been observed for estimators of the normal diffusion coefficient obtained from single particles moving along DNA [16], in the plasma membrane [2] or in the cytoplasm of mammalian cells [17]. Large fluctuations are also manifest in subdiffusive cases [4,12].
A broad dispersion in the measures of the diffusion coefficient raises important questions about optimal fitting methodologies. A reliable estimator must possess an ergodic property, so that its most probable value should converge to the true ensemble average independently of the trajectory considered and its variance should vanish as the observation time increases. Recently, much effort has been invested in the analysis of this challenging problem and several different estimators have been analyzed, based, e.g., on the sliding time-averaged square displacement [18,19], mean length of a maximal excursion [20], the maximum likelihood approximation [21][22][23][24][25] and optimal weighted least-squares functionals [14].
Our aim here is to determine an ergodic least-squares estimator for the generalized diffusion coefficient when the underlying stochastic motion is given by an fBm. The estimators considered here are single-time quantities, unlike others based on fits of two-time quantities such as the time-averaged MSD.
Let us consider a fractional Brownian motion B_t in one dimension with B_0 = 0 and zero expectation value for all t ∈ [0, T], where T is the total observation time. The covariance function of the process is given by [9]

Cov(B_t, B_s) = D (t^{2H} + s^{2H} − |t − s|^{2H}),   (1)

where D (= K/2) is the generalized diffusion coefficient and the Hurst exponent H ∈ (0, 1). The Hurst index describes the raggedness of the resulting motion, with a higher value leading to a smoother motion. Standard Brownian motion is a particular case of fBm corresponding to H = 1/2. As already mentioned, for H < 1/2 the increments of the process are negatively correlated, so that the fBm is subdiffusive. On the other hand, for H > 1/2 the increments of the process are positively correlated and superdiffusive behavior is observed.
We consider a single trajectory B_t, that is, a particular realization of an fBm process with a known H, and write down the following weighted least-squares functional,

F = ∫_0^T dt W(t) (B_t^2 − K_f t^{2H})^2,   (2)

where W(t) is a weighting function to be determined afterwards and K_f is a trial parameter. We call K_f an estimate of the generalized diffusion coefficient from the single trajectory B_t if it minimizes F. Calculating the partial derivative ∂F/∂K_f, setting it to zero and solving the resulting equation for u = K_f/K, we find the following least-squares estimator of the generalized diffusion coefficient K,

u = K_f/K = ∫_0^T dt ω(t) B_t^2/(K t^{2H}),   (3)

where we have introduced the notation

ω(t) = W(t) t^{4H} / ∫_0^T dt' W(t') t'^{4H}.   (4)

Note that the estimator u measures the ratio of the observed generalized diffusion coefficient for a single given trajectory relative to the ensemble-averaged value. Moreover, E{u} ≡ 1 holds for any arbitrary ω(t), making it possible to compare the effectiveness of different choices of ω(t). It is worthwhile remarking that u is given by a single time integration (a local functional) and thus differs from other estimates used in the literature which involve two-time integrals (see, e.g., [15]). Further on, a straightforward calculation gives the variance of the estimator u for an arbitrary weight function ω(t),

Var(u) = ∫_0^T ∫_0^T dt ds ω(t) ω(s) Cov(B_t^2, B_s^2)/(K^2 t^{2H} s^{2H}),   (5)

where

Cov(B_t^2, B_s^2) = 2 [Cov(B_t, B_s)]^2   (6)

is the covariance function of a squared fBm trajectory. This function can be calculated exactly using Eq. (1) to give

Cov(B_t^2, B_s^2) = (K^2/2) (t^{2H} + s^{2H} − |t − s|^{2H})^2.   (7)

Inserting the latter expression into Eq. (5) and noticing that the kernel is a symmetric function of t and s, we have

Var(u) = ∫_0^T dt ∫_0^t ds ω(t) ω(s) (t^{2H} + s^{2H} − (t − s)^{2H})^2 / (t^{2H} s^{2H}).   (8)

Following Ref. [14], we choose

ω(t) ∝ (t_0 + t)^{−α},   (9)

where t_0 is a lag time and α a tunable exponent. In a discrete time description, t_0 can be set equal to the interval between successive measurements [14]. We thus identify t_0 as a resolution parameter in the present continuous description. We also note that in [14] it was proven that a power-law weight function of the type in Eq. (9) is optimal among all weight functions. Fixing t_0 and scanning over different values of α, we seek the value for which the variance of u is smallest. Hopefully, for such a value, the variance should vanish in the limit of infinite resolution or infinite data size, i.e., when the parameter ε = t_0/T tends to zero. To check the latter point, we consider first the limit of an infinitely long observation time, ε = 0. For α < γ_H = 1 + 2H, the integrals in Eq. (8) can be performed exactly, yielding a closed-form expression (10) in terms of the gamma-function Γ(·). On the other hand, for α > γ_H = 1 + 2H and ε = 0, the result in Eq. (8) can be conveniently represented as a single integral (11) involving the hypergeometric function 2F1(·). The integral in Eq. (11) can also be performed exactly by using the series representation of the hypergeometric function and then resumming the resulting series. However, the expression obtained is rather lengthy, as it contains several hypergeometric functions 3F2(·). On the other hand, the result in the form of Eq. (11) can be handled by Mathematica; in addition, the asymptotic behavior can easily be extracted from it, so we prefer to work with the compact expression (11) rather than with an exact but cumbersome expression.
In Fig. 1 we show the dependence of the variance of the estimator u on the exponent α for different values of the Hurst index H. We notice that for any fixed H, the variance vanishes as α approaches α = 1 + 2H and is nonzero for any other value. This means that for a fractional Brownian motion with Hurst index H, the estimators in Eq. (3) with power-law weight functions ω(t) = (t_0 + t)^{−α} possess an ergodic property only when α = 1 + 2H.
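To illustrate the estimator numerically, the sketch below generates an fBm trajectory by Cholesky factorization of the covariance (1) and evaluates a discrete version of Eq. (3) with the ergodic weight ω(t) ∝ (t_0 + t)^{−(1+2H)}; the step size and trajectory length are arbitrary choices.

import numpy as np

def fbm_trajectory(n, dt, H, K, rng):
    # Exact (if slow) fBm sampling via Cholesky of the covariance, Eq. (1)
    t = dt * np.arange(1, n + 1)
    tt, ss = np.meshgrid(t, t, indexing="ij")
    D = K / 2.0
    cov = D * (tt**(2*H) + ss**(2*H) - np.abs(tt - ss)**(2*H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
    return t, L @ rng.standard_normal(n)

def estimate_K(t, B, H, t0, alpha):
    # Discrete Eq. (3): K_f = sum_k omega_k B_k^2 / t_k^{2H}, with the
    # normalized weights omega_k proportional to (t0 + t_k)^(-alpha)
    omega = (t0 + t)**(-alpha)
    omega /= omega.sum()
    return np.sum(omega * B**2 / t**(2*H))

rng = np.random.default_rng(1)
H, K, dt, n = 0.30, 1.0, 0.01, 2000
t, B = fbm_trajectory(n, dt, H, K, rng)
K_f = estimate_K(t, B, H, t0=dt, alpha=1 + 2*H)   # ergodic choice of alpha
print(f"single-trajectory estimate K_f = {K_f:.3f} (true K = {K})")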
The last issue we discuss is the decay rate of the variance when ε is small but finite, in the ergodic case α = 1 + 2H. It is straightforward to show from Eq. (8) that in the limit ε → 0 the variance is given to leading order by

Var(u) ≃ C(H)/ln(1/ε),   (12)

where C(H) is a constant, defined by an integral expression (13) which exists for any H ∈ (0, 1). This result generalizes that of Ref. [14] for ordinary Brownian motion. We conclude that the variance of the estimator vanishes logarithmically with the total observation time. In other words, the diffusion constant estimated from one trajectory by this method tends toward the correct value logarithmically slowly. The prefactor C(H), which is displayed in Fig. 2, reaches a minimum at H* ≃ 0.30. From Fig. 2, we notice that, keeping the resolution ε fixed, the variance of u will typically be small for processes with H ∈ [0.15, 0.6]. This interval encompasses almost all the anomalous exponent values reported in single particle studies. Conversely, the function C(H) diverges as H → 0 or 1. Therefore, we can expect that, even with the ergodic choice of α, the estimates of the diffusion constant become highly inaccurate for nearly localized or nearly ballistic fBm processes.
In conclusion, we have shown that the true, ensemble-average generalized diffusion coefficient K of a fractional Brownian motion with known Hurst index H can be obtained from single-trajectory data using the weighted least-squares estimator in Eq. (3) with the weight function ω(t) = 1/(t_0 + t)^{1+2H}. Such an estimator possesses an ergodic property, so that K can be evaluated with any necessary precision, but at the expense of increasing the observation time T (or decreasing t_0). A limitation of the present class of estimators, which are based on single-time functionals of B_t^2, is admittedly their slow convergence toward the ensemble average. Two-time functionals, based on the time-averaged MSD, for instance, exhibit faster convergence: for fBm with H < 3/4 the relative variance of the time-averaged MSD vanishes as t_0/T [15]. Nevertheless, these other estimators might be more sensitive to measurement errors and may not be accurate when diffusion is no longer a pure process but a mixture of processes with different characteristic times. A quantitative comparison between estimators beyond the ideal cases considered here is a necessary future step.
Metabolites of shikimate and tryptophan pathways in coronavirus disease (COVID-19)
Both Alzheimer disease (AD) and COVID-19 are age-dependent diseases with a high prevalence of obesity and diabetes. Proteomic and metabolomic profiling of sera from COVID-19 patients is discussed in this chapter. Some shikimate pathway-tryptophan metabolites are altered in the blood sera of severe COVID-19 versus healthy controls. Metabolites of bile acids increased in the sera of COVID-19 patients. In the serum metabolomics of severe COVID-19 patients versus healthy, serotonin was lower, −1.72 fold, and tryptophan was lower, −0.567 fold. The complete autopsy examinations may indicate that manifestations in different organs of COVID-19 patients can be caused by chronic and acute toxicity. This suggestion is consistent with data on serum metabolic profiling of COVID-19 patients.
Therefore, the prevalence of severe COVID-19 disease in Florida is substantially lower than the prevalence of another age-dependent disease, AD, which is considered to be noninfectious.
On August 3, 2020, the total number of positive cases in Florida was 491,884; negative, 3,260,914; deaths, 7,157. Positive cases by exposure source on the same date were: travel, 3,737; contact with a confirmed case, 133,732; travel and contact with a confirmed case, 3,756; under investigation, 323,681; for a total of 491,884. Thus, the sources for the majority of positive cases are not known. Table 9.1 includes the data on positivity for the virus related to COVID-19 in different zip codes, mainly in Miami-Dade, Florida. These data show no direct relation between population density and COVID-19 virus positivity. In particular, the Miami-Dade zip code 33141, with a population density of 18.27 subjects per square mile, showed 0.9% positivity per population, while zip code 33125, with a population density of 13.83, showed 4.5% positivity (Table 9.1). Of note, no data on hospitalization of COVID-19-positive cases are available for each zip code to be reviewed and analyzed.
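The absence of a direct density–positivity relation can be checked with a simple correlation across zip codes. A minimal sketch follows; the first two rows echo the figures quoted above, while the remaining rows are placeholders standing in for the rest of Table 9.1, so the printed value is purely illustrative.

```python
import numpy as np
from scipy import stats

# population density and % positivity per zip code; rows 1-2 echo the text
# (33141: 18.27, 0.9%; 33125: 13.83, 4.5%), the rest are placeholders
density    = np.array([18.27, 13.83, 9.50, 22.10, 15.00])
positivity = np.array([0.9,   4.5,   2.0,  1.5,   3.1])

r, p = stats.pearsonr(density, positivity)
print(f"Pearson r = {r:.2f}, P = {p:.2f}")  # a weak, insignificant r supports the claim
```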
Density of population does not of itself determine the ease with which infection spreads through a population. Problems tend to arise primarily when populations become so dense as to cause overcrowding. Overcrowding is often associated with decreases in quality of living conditions and sanitation, so the rate of agent transmission is typically very high in such areas. Thus, overcrowded cities or densely populated areas of cities can potentially serve as breeding grounds for infectious agents, which may facilitate their evolution, particularly in the case of viruses and bacteria. Rapid cycling between humans and other hosts, such as rats or mice, can result in the emergence of new strains capable of causing serious disease (https://www.britannica.com/science/infectious-disease/Population-density).
The recent Centers for Disease Control and Prevention (CDC) data indicate the virus linked to COVID-19 has been found in untreated wastewater (https://www.cdc.gov/coronavirus/2019-ncov/community/sanitation-wastewater-workers.html). The leakage of untreated wastewater happened previously during some storms, hurricanes, and consequent flooding in Miami, Florida, and New York City, NY.
The CDC estimates that influenza was associated with more than 35.5 million illnesses, more than 16.5 million medical visits, 490,600 hospitalizations, and 34,200 deaths during the 2018–19 influenza season. This burden was similar to the estimated burden during the 2012–13 influenza season (Centers for Disease Control and Prevention. Estimated influenza illnesses and hospitalizations averted by influenza vaccination, United States, 2012–13 influenza season. MMWR Morb Mortal Wkly Rep 2013; 62(49): 997–1000). [Table 9.1: COVID-19-related positivity in Miami-Dade County and some other Florida counties (by zip code) in the periods of spikes in positivity: July 6–7, 2020; August 3, 2020; and January 29, 2021.] In the comparison of severe versus nonsevere COVID-19 cases, blood serum quinolinate was higher, 1.488-fold (P value .00178), thyroxine was lower, −0.4868-fold (P value .00208), taurochenodeoxycholic acid 3-sulfate was higher, 2.21-fold (P value 2.48041E−05), and tauroursodeoxycholic acid sulfate was higher, 3.178-fold (P value 2.736E−05). Choline was lower in severe COVID-19 patients than in healthy controls at −0.47-fold (P value 6.216E−06) and in nonsevere COVID-19 versus healthy controls at −0.46-fold (P value 1.219E−07). The choline metabolite betaine [374] was lower in severe COVID-19 than in the healthy group at −0.42-fold (P value .0002) [372]. Choline deficiency can cause disorders in many bodily systems, including the liver, muscle, and lymphocytes in humans [375]. Phosphocholine, a phosphate metabolite of choline in humans, was higher in the severe COVID-19 group than in the other groups. Cholinesterase inhibitors, which act by preventing cholinesterase from hydrolyzing acetylcholine into its components, acetate and choline (Fig. 9.2), thereby increasing the availability and duration of action of acetylcholine in neuromuscular junctions, may be present in COVID-19 patients.
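For orientation, fold changes and P values of the kind quoted above come from a two-group comparison of serum metabolite intensities. The sketch below uses synthetic data; the choice of Welch's t-test and the sign convention for fold changes are assumptions for illustration, not details taken from the cited study [372].

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
healthy = rng.lognormal(mean=1.0, sigma=0.3, size=25)   # synthetic intensities
severe  = rng.lognormal(mean=0.4, sigma=0.3, size=25)

fold = severe.mean() / healthy.mean()                   # ratio of group means
log2_fc = np.log2(fold)                                 # negative when lower in severe
t_stat, p_val = stats.ttest_ind(severe, healthy, equal_var=False)  # Welch's t-test
print(f"fold = {fold:.3f}, log2FC = {log2_fc:.3f}, P = {p_val:.3g}")
```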
Uchida and Yamashita showed that the enzyme choline kinase catalyzes the formation of phosphocholine from choline and the phosphate of ATP. Spermine and spermidine stimulated this enzyme by decreasing the apparent Km for ATP and increasing Vmax [376]. Acetylcholine was also stimulatory.
Taurocholic acid 3-sulfate (TCA3S) is a metabolite of the conjugated bile acid taurocholic acid (a taurine-conjugated form of cholic acid). Plasma levels of TCA3S are elevated in wild-type and Sortilin 1 (Sort1) knockout mice at 6 h following bile duct ligation (BDL) and are further elevated in Sort1 knockout mice at 24 h post-BDL [377].
Bile acids are produced in the liver and excreted into the intestine, where their main function is to participate in lipid digestion. Ursodeoxycholic acid and tauroursodeoxycholic acid have shown antiapoptotic, anti-inflammatory, and antioxidant effects in various models of neurodegenerative diseases [378]. Taurochenodeoxycholic acid 3-sulfate and tauroursodeoxycholic acid sulfate are statistically significantly higher (4.34-fold and 6.34-fold, respectively) in sera of non-COVID-19 patients than in controls.
Quinate is synthesized through a side branch of the shikimate pathway. Quinolinic acid (quinolinate) is a downstream product of the kynurenine pathway, which metabolizes the amino acid tryptophan. Quinolinic acid has a potent neurotoxic effect [379].
Tauroursodeoxycholic acid is the more hydrophilic form of ursodeoxycholic acid, which is the more abundant naturally produced bile acid in humans. Tauroursodeoxycholic acid, on the other hand, is produced abundantly in bears and has been used for centuries as a natural remedy in some Asian countries. It is approved in Italy and Turkey for the treatment of cholesterol gallstones and is an investigational drug in China, the United States, and Italy. Tauroursodeoxycholic acid is being investigated for use in several conditions such as primary biliary cirrhosis, insulin resistance, amyloidosis, cystic fibrosis, cholestasis, and amyotrophic lateral sclerosis.
A report of the World Health Organization "Benzoic acid and sodium benzoate," 2000, (Concise International Chemical Assessment Document 26) concluded that the acute toxicity of benzoic acid and sodium benzoate in humans is low [380]. However, both substances are known to cause contact dermatitis (pseudoallergy). In patients with urticaria or asthma, an exacerbation of the symptoms was observed after testing (oral provocation test or patch tests), whereas this effect is unusual in healthy subjects [380]. Their antimicrobial properties are used for different applications, such as food preservation. Benzoates applied dermally can penetrate through the skin.
Paley discussed the activities of the human fecal metabolite benzoate in a recent article [16]. Particularly, Hoffmann and Grond, 2004, demonstrated that in Streptomyces sp., benzoate forms directly from shikimate [381], a precursor for chorismate, which is a precursor for the aromatic amino acids phenylalanine, tryptophan, and tyrosine. Benzoate is widely used by the food industry to prevent spoilage and to inhibit the growth of pathogenic microorganisms [382]. Sodium benzoate therapy improved symptomatology of patients with schizophrenia [383]. Zinc benzoate, commonly used in food and feed additives as a preservative and source of zinc (Zinc Benzoate CAS: 553-72-0, Silver Fern Chemical Inc.), inhibits MAO-A activity [384]. Zinc benzoate reversibly and competitively inhibited MAO-A activity in a dose-dependent manner. Being an MAO inhibitor, zinc benzoate can act as a protoxin since it can lead to accumulation of toxic biogenic amines. Zinc benzoate is an environmental contaminant derived from polystyrene. No data are available on the MAO inhibitory activity of sodium benzoate or benzoic acid. Thus, further research is needed to answer the question of whether any salts of benzoic acid can inhibit MAO or whether only specific salts of benzoic acid, such as zinc benzoate, are able to inhibit this enzyme.
The safety data sheet (SDS, formerly known as MSDS) provides information about benzoic acid uses and safety (https://www.msdsonline.com/2015/02/16/benzoic-acid-uses-and-safety/). Specifically, benzoic acid is a compound naturally found in many plants and is an important precursor for the synthesis of many other organic substances. Benzoic acid is most commonly used in industrial settings to manufacture a wide variety of products such as perfumes, dyes, topical medications, and insect repellents. Benzoic acid's salt (sodium benzoate) is commonly used as a pH adjustor and preservative in food, preventing the growth of microbes to keep food safe; it works by changing the internal pH of microorganisms to an acidic state that is incompatible with their growth and survival. Immediately or shortly after exposure to benzoic acid, the following health effects can occur: eye damage; irritation of the skin, resulting in a rash, redness, and/or a burning feeling; and irritation of the nose, throat, and lungs if inhaled, which may cause coughing, wheezing, and/or shortness of breath.
I suggest that bacteria and other microorganisms may develop tolerance/resistance to benzoic acid/benzoate. Creamer et al. reported that Escherichia coli K-12 W3110 grows in the presence of membrane-permeant organic acids that can depress cytoplasmic pH and accumulate in the cytoplasm. The authors conducted experimental evolution by daily diluting cultures in increasing concentrations of benzoic acid (up to 20 mM) buffered at external pH 6.5, a pH at which permeant acids concentrate in the cytoplasm. By 2000 generations, clones isolated from evolving populations showed increasing tolerance to benzoate but were sensitive to chloramphenicol and tetracycline. Sixteen clones grew to stationary phase in 20 mM benzoate, whereas the ancestral strain W3110 peaked and declined. Similar growth occurred in 10 mM salicylate. Benzoate-evolved strains grew like W3110 in the absence of benzoate, in media buffered at pH 4.8, pH 7.0, or pH 9.0, or in 20 mM acetate or sorbate at pH 6.5. Genomes of 16 strains revealed over 100 mutations, including single-nucleotide polymorphisms, large deletions, and insertion knockouts. Most strains acquired deletions in the benzoate-induced multiple antibiotic resistance regulon or in associated regulators [385].
Therefore, bacteria can develop tolerance to benzoate. Moreover, benzoate can become a selective agent in the human microbiome. Furthermore, such microbial selection can lead to dysbiosis, which is implicated in a number of medical conditions including AD [178]. Microorganisms can also use benzoate as a source of carbon. Cinar, 2004, studied the response of a mixed microbial culture to different feed compositions, that is, containing benzoate and pyruvate as sole carbon sources at different levels, in a chemostat with a 48-h hydraulic residence time under cyclic aerobic and anoxic (denitrifying) conditions. The cyclic bacterial culture was well adapted to different feed compositions as evidenced by the lack of accumulation of benzoate or pyruvate in the chemostat [386].
Further analysis of fecal samples from COVID-19 patients can reveal dysbiosis in these patients ( Fig. 9.1).
Lennerz et al. reported that anthranilic acid, a tryptophan metabolite, exhibited a robust rise, while acetylglycine dropped, in a randomized, controlled, cross-over study of 14 overweight subjects. In this study, serial blood samples were collected following an oral glucose challenge, in the presence or absence of sodium benzoate. Outcome measurements included glucose, insulin, glucagon, as well as temporal mass spectrometry-based metabolic profiles [387]. Genetic and nutritional studies with the fungus Neurospora crassa indicate that the first enzyme specifically involved in tryptophan biosynthesis catalyzes the formation of anthranilic acid [388]. Anthranilic acid, which is involved in tryptophan metabolism in both humans and bacteria, could reflect an influence on tryptophan metabolism in either the subjects themselves or their gut microbial species, or both. It is also notable that anthranilic acid and benzoate differ by a single amine group, so the anthranilic acid may come directly from the ingested benzoate itself, either through metabolism by the subjects or their microbiota, though there is no annotated enzyme that catalyzes this reaction. Anthranilic acid accumulates in the setting of renal failure, and one cell culture study suggested that it might promote renal failure through adverse effects on mesangial cells. In a separate study, treatment of cultured neurons and glial cells with anthranilic acid altered NAD+ levels and caused cytotoxicity. Although this study shows that benzoate does not have an acute, adverse effect on glucose homeostasis, future studies will be necessary to explore the metabolic impact of chronic benzoate exposure [387].
Wild-type Streptomyces maritimus produces benzoate via a plant-like β-oxidation pathway and can assimilate various carbon sources for benzoate production [389]. Presence of Streptomyces has been shown in the human gut microbiome [390] and in the human lung [391].
Raposa et al. demonstrated that sodium benzoate (from low to high doses) dose-dependently silenced MAPK8 (mitogen-activated protein kinase 8) expression (P = .004 to P = .002) [392]. Khoshnoud et al. [393] investigated the effects of oral administration of different concentrations of sodium benzoate (0.56, 1.125, and 2.25 mg/mL) for 4 weeks on learning and memory performance tests, and also on the levels of malondialdehyde (MDA), reduced glutathione (GSH), and acetylcholinesterase (AChE) activity in the mouse brain. The results showed that sodium benzoate significantly impaired memory and motor coordination. Moreover, sodium benzoate significantly decreased GSH and increased the MDA level in the brain (P < .001), and a nonsignificant alteration was observed in AChE activity. These findings suggest that short-term consumption of sodium benzoate can impair memory performance and increase brain oxidative stress in mice [393].
Furthermore, benzoic acid is formed by the enzymatic hydrolysis of benzoylcholine by cholinesterase [394] (Fig. 9.2). Akcasu et al. described the pharmacology of benzoylcholine in 1952 [395]. Benzoylcholine is known to be broken down in the body into benzoic acid and choline. Benzoylcholine has a direct stimulant action on gut and heart; this action is unaffected by atropine [395]. Benzoylcholine is a neuromuscular blocking agent in rabbits and is relatively nontoxic (LD50 in rabbits: 150 mg/kg). It produces neuromuscular block in the rat diaphragm and cat gastrocnemius and paralysis in chicks. On smooth muscle, benzoylcholine appears to exert at least three distinct actions. In small doses in trachea preparations, it potentiates the acetylcholine response, possibly by inhibition of the cholinesterase; in medium doses in all preparations, it blocks the acetylcholine response, possibly by attaching itself to the same receptors; and in larger doses, it stimulates most forms of smooth muscle, in part through a direct stimulant action [395]. Benzoylcholine has been widely used in enzyme studies as a substrate for pseudocholinesterase. It is freely soluble in water. Erdos and colleagues reported in Science (1957) that tryptamine accelerates the enzymatic hydrolysis of benzoylcholine by plasma cholinesterase [396] (Fig. 9.2). A series of derivatives of benzoylcholine has been prepared with substituents in the benzene ring, and the rate of hydrolysis of these compounds with cholinesterase of horse serum was determined and reported in 1953 by Ormerod [397]. In 2006, the catalytic properties of rat butyrylcholinesterase with benzoylcholine and N-alkyl derivatives of benzoylcholine used as substrates were reported by Hrabovska et al. [398]. Docking studies showed that long-chain substrates were not optimally oriented in the active site for catalysis, thus explaining the slow rate of hydrolysis [398]. The simultaneous increase in benzoate and phosphocholine and the decrease in choline in sera of severe COVID-19 patients can, presumably, derive from hydrolysis of a candidate molecule such as benzoylcholine or one of its derivatives by plasma cholinesterase.
Another candidate molecule is benzalkonium chloride (Fig. 9.2), also known as alkyl dimethyl benzyl ammonium chloride, which is classed as an antiseptic active ingredient by the United States Food and Drug Administration and can be oxidized to benzoic acid [399]. Jaganathan and Boopathy demonstrated in 2000 [400] the distinct effect of benzalkonium chloride on the esterase and aryl acylamidase activities of butyrylcholinesterase. Acetylcholinesterase (AChE) and butyrylcholinesterase (BChE) from vertebrates, in addition to their predominant acylcholine hydrolase (esterase) activity, display a genuine aryl acylamidase activity (AAA) capable of hydrolyzing the synthetic substrate o-nitroacetanilide to o-nitroaniline. Benzalkonium chloride (BAC), a cationic detergent widely used as a preservative in pharmaceutical preparations, has been shown to distinctly modulate the esterase and AAA activities of BChEs. The detergent BAC was able to inhibit the esterase activity of human serum and horse serum BChEs and of AChEs from fish electric eel and human erythrocyte. BAC binds to the active site of ChEs. BAC was able to profoundly activate the AAA activity of human serum and horse serum BChEs [400]. Tryptamine is able to inhibit both AAA and ChE activities [246]. Swiercz et al. demonstrated in 2008 the pulmonary irritation after inhalation exposure to benzalkonium chloride in rats [401]. BAC may be classified in class I for acute inhalation toxicity. It showed a strong inflammatory and irritant activity on the lungs after 6-h inhalation [401]. Cases of intentional and accidental poisoning of humans with benzalkonium chloride have been described. In 2018, a forensic autopsy case of an elderly man who ingested an unknown amount of germicidal disinfectant containing 50% benzalkonium chloride (BZK) was reported. He survived for 18 days after BZK ingestion and then died because of pneumonia [402]. Hitosugi et al. reported in 1998 a case of fatal benzalkonium chloride poisoning [403]. In this report, five elderly persons with senile dementia accidentally ingested Hoesmin, a 10% aqueous solution of benzalkonium chloride (BAC). The condition of one patient, an 84-year-old woman whose lips and oral cavity became erythematous, gradually deteriorated. Although gastric lavage was performed, the patient died 3 h after ingestion of Hoesmin. Autopsy revealed corrosive changes of the mucosal surfaces of the tongue, pharynx, larynx, esophagus, and stomach, which may have come in contact with BAC. In addition, BAC was detected in the serum. The authors conclude that the patient died of BAC poisoning [403]. Wilson and Burr reported in 1975 a case of benzalkonium chloride poisoning in infant twins [404]. In this case, infant twins sustained severe circumoral and pharyngeal burns from a concentrated solution of benzalkonium (Zephiran) chloride prescribed for treatment of candidiasis. This report emphasizes the unnecessary hazard accompanying use of a potentially toxic drug, especially when prepared in error by the pharmacist. Risks from use of a prescription drug for other than the intended patient are also highlighted by this episode of poisoning [404]. A news report published on July 10, 2018 (https://nypost.com/2018/07/10/japanese-nurse-admits-she-killed-over-20-elderly-patients/) informed that a Japanese nurse admitted to killing at least 20 elderly patients.
Ayumi Kuboki allegedly poisoned the patients by lacing their intravenous (IV) drips with toxic antiseptic chemicals at specific times, the Japanese newspaper Asahi Shimbun reported. Police believe the caregiver put a cleaning product containing benzalkonium chloride into patients' IVs, which left patients dead within hours. The chemical, an antibacterial that is readily available within hospitals, was found in one patient's body and his IV bag. Ayumi Kuboki, who was arrested in July on suspicion of killing at least three and possibly a fourth patient, was ordered to undergo a psychiatric evaluation for 3 months from September until the end of November, Fuji TV reported. Prosecutors said the evaluation had found her mentally fit to stand trial. Pereira and Tagkopoulos published a review in 2019 on benzalkonium chloride uses, regulatory status, and microbial resistance [405]. BACs were reported for the first time in 1935 by Gerhard Domagk, entering the market as zephiran chlorides, and were marketed as promising and superior disinfectants and antiseptics. In 1947, the first product containing BACs was registered with the Environmental Protection Agency in the United States. Since then, they have been used in a wide variety of products, both prescription and over the counter. Applications range from domestic to agricultural, industrial, and clinical [405]. BACs have been detected in food samples, with a maximum at 14.4 mg/kg [405]. High-performance liquid chromatography and gas chromatography–mass spectrometry analyses were used to study the BAC degradation pathway. It was shown that during BAC biodegradation by the bacterium Aeromonas hydrophila, formation of benzyldimethylamine, benzylmethylamine, benzylamine, benzaldehyde, and benzoic acid occurred [399]. Biodegradation of benzalkonium chlorides singly and in mixtures by a Pseudomonas sp. isolated from returned activated sludge was reported by Khan et al. in 2015 [406]. At least 40 outbreaks have been attributed to infection by disinfectant- and antibiotic-resistant pathogens such as Pseudomonas aeruginosa. Kim et al. reported in 2018 genomic and transcriptomic insights into how bacteria withstand high concentrations of benzalkonium chloride [407]. Gene expression changes in BAC-adapted P. aeruginosa were revealed; particularly, overexpression of biogenic amine spermidine synthesis genes was detected in BAC-adapted bacteria [407]. Both spermidine (twofold) and N1-acetylspermidine (2.6-fold) are higher in sera of non-COVID patients versus the healthy group. Seguin et al. reported in 2019 [408] that while there exists a large and growing body of literature concerning the toxicology of BACs, information on the metabolism of BACs in mammalian species is still lacking. Single-dose intravenous injection of BACs to rats (7 mg/kg) led to a wide distribution of these compounds in various tissues (levels in mg/g of tissue shown in parentheses), with the highest level observed in the kidney (50.5), followed by lung and spleen (15.4 each), serum (1.2), liver (0.9), and brain (0.2) after 30 min of administration. When BACs were administered orally (250 mg/kg), the levels of BAC reached their highest concentrations for the majority of the tissues after 24 h (2 h for liver), with the level in kidney (5.25) > lung (2.75) > liver (0.72) > blood (0.34) (not determined in brain). Therefore, BACs are orally bioavailable and distribute broadly throughout tissues in vivo.
However, the mechanisms of metabolism and disposition of BACs in humans have not been studied, and this represents a barrier to our understanding of the systemic toxicology of BACs [408]. A few studies have reported the microbial degradation of BAC by several pure cultures (Pseudomonas nitroreducens, Aeromonas hydrophila, and Bacillus niabensis) to benzyldimethylamine by dealkylating amine oxidase and related enzymes [409]. Metagenomic analysis of a river sediment microbial community by Oh et al. revealed that BAC exposure selected for a low-diversity community, dominated by several members of the Pseudomonas genus that quickly degraded BACs [410]. Metatranscriptomic analysis of this microbial community during a complete feeding cycle with BACs as the sole carbon and energy source was conducted under aerobic conditions. P. nitroreducens isolates from the BAC-fed bioreactor can grow on BACs as a sole carbon and energy source by dealkylating the parent molecules and producing stoichiometric quantities of benzyldimethylamine (BDMA) as a dead-end product. The genes involved in BAC dealkylation and β-oxidation, based on bioinformatics and/or genetic analyses, show BDMA production from BAC by P. putida and P. entomophila [410]. Phylogenetic affiliation of gene transcripts related to the three known benzoate degradation pathways was demonstrated mainly for Pseudomonas species [410]. The metatranscriptomic profiles showed increased expression of genes predicted to be associated with the biodegradation of benzoate, the by-product of BDMA metabolism, by enzymes such as benzoate dioxygenase (benABC). These results indicated that the benzyl compounds produced from the dealkylation of BACs by P. nitroreducens were predominantly metabolized by P. putida and P. entomophila [410]. The control of synthesis of the five enzymes responsible for the conversion of D(−)-mandelate to benzoate by Pseudomonas putida was investigated [411]. Benzoate is converted to catechol in this pathway. The members of the class of catechols are significantly lower in COVID-19 sera compared to controls. For instance, 4-ethylcatechol sulfate was −4.92-fold lower in severe COVID-19 cases compared to controls. The catecholamine metabolite vanillylmandelate was −0.407-fold lower in nonsevere COVID-19 compared to healthy subjects. Another benzoate metabolite, hippurate, is also statistically significantly decreased in severe COVID-19 versus the healthy group: hippurate (−1.466-fold), 3-hydroxyhippurate (−2.23-fold), 2-hydroxyhippurate (salicylurate) (−1.669-fold). The 3-hydroxyhippurate (−1.31-fold) and hippurate (−1.0344-fold) are decreased in non-COVID-19 patients versus the healthy group [372]. Pallister et al. reported in 2017 [412] that an increasing hippurate trend was associated with reduced odds of having metabolic syndrome. Thus, the data on hippurate serum content are consistent with increased odds of having metabolic syndrome in COVID-19 patients.
Chen et al. demonstrated that sodium benzoate exposure downregulates the expression of tyrosine hydroxylase and dopamine transporter in dopaminergic neurons in developing zebrafish [413]. The results suggest that sodium benzoate exposure can cause significantly decreased survival rates of zebrafish embryos in a time- and dose-dependent manner and decreased locomotor activity of zebrafish larvae [413].
Piper and Piper reported in a review that benzoate can react with the ascorbic acid in drinks to produce the carcinogen benzene [414]. A few children develop an allergy to this additive. As a competitive inhibitor of D-amino acid oxidase, benzoate [415] can also influence neurotransmission and cognitive functioning. Model organism and cell culture studies have raised some issues. Benzoate has been found to exert teratogenic and neurotoxic effects on zebrafish embryos. In addition, benzoate and sorbate are reported to cause chromosome aberrations in cultured human lymphocytes, as well as to be potently mutagenic toward the mitochondrial DNA in aerobic yeast cells. Whether the substantial human consumption of these compounds could significantly increase levels of such damage in man is still unclear [414]. Benzoate rapidly traverses the blood–brain barrier and is now attracting increased attention as an agent for the treatment of certain brain disorders [414], partly because it presents the advantages of ready oral administration and an existing approval for the treatment of urea cycle disorders (UCD) [416]. Cinnamon, the most consumed spice worldwide, is also of interest in this regard, since the oral feeding of cinnamon (Cinnamomum verum) powder is known to generate benzoate in the blood and brain of mice [414]. D-amino acid oxidase is the prototype of the flavin adenine dinucleotide (FAD)-dependent oxidases. It catalyzes the oxidation of D-amino acids to the corresponding alpha-ketoacids. The reducing equivalents are transferred to molecular oxygen with production of hydrogen peroxide. The crystal structure of the complex of D-amino acid oxidase with benzoate, a competitive inhibitor of the enzyme, has been solved by Mattevi et al. [417].
Fagnant et al. evaluated virus survival over time on ViroCap filters. The filters were seeded with poliovirus (PV) type 1 (PV1) and/or the Escherichia coli bacteriophage MS2 and then dosed with preservatives or antibiotics prior to storage and elution. These filters were stored at various temperatures and for various time periods, and then eluted for PV1 and MS2 recovery quantification. Filters dosed with the preservative combination of 2% sodium benzoate and 0.2% calcium propionate had increased virus survival over time when stored at 25 °C, compared to samples stored at 25 °C with no preservatives [418].
The benzoate increase was highest among the serum metabolites altered in Chinese severe COVID-19 patients compared to healthy individuals [372]. Serotonin was lower in sera of severe COVID-19 patients compared to healthy controls [372] and also in aging, depression, and AD patients [105].
Postmortem studies of COVID-19 patients in different countries
Most patients with COVID-19 are asymptomatic or experience only mild symptoms, including fever, dry cough, and shortness of breath. However, some individuals deteriorate rapidly and develop acute respiratory distress syndrome (ARDS). The most common histopathologic correlate of ARDS is diffuse alveolar damage (DAD), characterized by hyaline membrane formation in the alveoli in the acute stage, and interstitial widening by edema and fibroblast proliferation in the organizing stage. DAD has a long list of potential etiologies, including infection, vaping-associated pulmonary injury, oxygen toxicity, drug toxicity, toxic inhalants or ingestants, shock, severe trauma, sepsis, irradiation, and acute exacerbations of usual interstitial pneumonia [419].
Schaller et al., in a recent report of examinations conducted in Germany, state that "because there are still insufficient data on cause of death, we describe postmortem examinations in a case series of patients with COVID-19." Between April 4 and April 19, 2020, serial postmortem examinations were conducted in patients with proven severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection who died at the University Medical Center Augsburg (Germany). Autopsies were conducted according to published best practice. Specimens from lung, heart, liver, spleen, kidney, brain, pleural effusion, and cerebrospinal fluid (CSF) were assessed. Postmortem nasopharyngeal, tracheal, and bronchial swabs, pleural effusion, and CSF were tested for SARS-CoV-2 by reverse transcriptase–polymerase chain reaction. In this postmortem evaluation of 10 patients with COVID-19, acute and organizing DAD and SARS-CoV-2 persistence in the respiratory tract were the predominant histopathologic findings and constituted the leading cause of death in patients with and without invasive ventilation. Periportal liver lymphocyte infiltration was considered unspecific inflammation. Whether myoepicardial alterations represented systemic inflammation or early myocarditis is unclear; criteria for true myocarditis were not met. Central nervous system involvement by COVID-19 could not be detected. This study has limitations, including the small number of cases from a single center and missing proof of direct viral organ infection [420].
Pulmonary postmortem findings in a series of COVID-19 cases were revealed in a northern Italy two-center descriptive study [421]. The predominant pattern of lung lesions in patients with COVID-19 is DAD, as described in patients infected with severe acute respiratory syndrome and Middle East respiratory syndrome coronaviruses. Hyaline membrane formation and pneumocyte atypical hyperplasia are frequent. Importantly, the presence of platelet–fibrin thrombi in small arterial vessels is consistent with coagulopathy, which appears to be common in patients with COVID-19.
However, the mechanisms, phenotype, and optimal management of ischemic stroke associated with COVID-19 remain uncertain.
Researchers in London, United Kingdom, described the demographic, clinical, radiologic, and laboratory characteristics of six consecutive patients with acute ischemic stroke and COVID-19 (confirmed by reverse-transcriptase PCR [RT-PCR]) assessed between April 1 and 16, 2020, at the National Hospital for Neurology and Neurosurgery, Queen Square, London, UK. All six patients had large vessel occlusion with markedly elevated D-dimer levels (≥1000 mg/L). Three patients had multiterritory infarcts, two had concurrent venous thrombosis, and in two, ischemic strokes occurred despite therapeutic anticoagulation [425].
In New Orleans, USA, autopsies were performed on 10 African American decedents aged 44–78 years with cause of death attributed to COVID-19, reflective of the dominant demographic of deaths following COVID-19 diagnosis [426]. Important findings include the presence of thrombosis and microangiopathy in the small vessels and capillaries of the lungs, with associated hemorrhage, that significantly contributed to death. Features of DAD, including hyaline membranes, were present, even in patients who had not been ventilated. Cardiac findings included individual cell necrosis without lymphocytic myocarditis. There was no evidence of secondary pulmonary infection by microorganisms [426].
Solomon et al. reported that neurologic symptoms, including headache, altered mental status, and anosmia, occur in many patients with COVID-19. The neuropathologic findings were from autopsies of 18 consecutive patients with SARS-CoV-2 infection who died in a single teaching hospital between April 14 and April 29, 2020, in Massachusetts, USA [427]. All the patients had nasopharyngeal swab samples that were positive for SARS-CoV-2 on qualitative reverse transcriptase–polymerase chain reaction (RT-PCR) assays. Histopathologic examination of brain specimens obtained from 18 patients who died 0–32 days after the onset of symptoms of COVID-19 showed only hypoxic changes and did not show encephalitis or other specific brain changes referable to the virus. The virus was detected at low levels in six brain sections obtained from five patients; these levels were not consistently related to the interval from the onset of symptoms to death. Positive tests may have been due to in situ virions or viral RNA from blood [427].
The autopsy findings of 21 COVID-19 patients hospitalized at the University Hospital Basel and at the Cantonal Hospital Baselland, Switzerland, were reported by Menter et al. [428]. The primary cause of death was respiratory failure with exudative DAD and massive capillary congestion, often accompanied by microthrombi despite anticoagulation. Ten cases showed superimposed bronchopneumonia. Further findings included pulmonary embolism (n = 4), alveolar hemorrhage (n = 3), and vasculitis (n = 1). Pathologies in other organ systems were predominantly attributable to shock; three patients showed signs of generalized and five of pulmonary thrombotic microangiopathy. Six patients were diagnosed with senile cardiac amyloidosis upon autopsy. Most patients suffered from one or more comorbidities (hypertension, obesity, cardiovascular diseases, and diabetes mellitus). Additionally, there was an overall predominance of males and individuals with blood group A (81% and 65%, respectively) [428]. Myocardial hypertrophy was observed in 71% of COVID-19 cases, and liver pathologies such as liver steatosis were observed in 41% and liver shock necrosis in 29% of COVID-19 cases. The cytoplasm of kidney podocytes, endothelial cells, and proximal tubular epithelial cells contained multiple vesicles revealed by electron microscopy [428]. At higher magnification, the vesicles contained double membranes. Similar multiple vesicles were earlier observed in AD brain and in tryptamine-treated human neuronal cells by Paley, 2011 [50]. In summary, the findings provide an insight into the complexity of COVID-19 pathophysiology. SARS-CoV-2 substantially contributed to fatality in all cases, but the authors postulate a multifactorial cause of death, with COVID-19 as a contributory factor in multimorbid patients [428]. Major findings that imply an impaired microcirculation include pulmonary capillarostasis and the presence of microthrombi in the lungs and kidneys despite anticoagulation. The findings corroborate clinical and epidemiological data on cardiovascular morbidity and disease outcome and add amyloid transthyretin (ATTR) amyloidosis as a risk factor. Of note, ATTR is thyroxine-binding prealbumin, while serum thyroxine was lower in severe COVID-19 compared to nonsevere cases and healthy individuals [372].
Barton et al. report the findings of two complete autopsies of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)-positive individuals who died in Oklahoma (United States) in March 2020 [419]. A 77-year-old obese man with a history of hypertension, splenectomy, and 6 days of fever and chills died while being transported for medical care. He tested positive for SARS-CoV-2 on postmortem nasopharyngeal and lung parenchymal swabs. Autopsy revealed DAD, chronic inflammation, and edema in the bronchial mucosa. A 42-year-old obese man with a history of myotonic dystrophy developed abdominal pain followed by fever, shortness of breath, and cough. Postmortem nasopharyngeal swab was positive for SARS-CoV-2; lung parenchymal swabs were negative. Autopsy showed acute bronchopneumonia with evidence of aspiration. Neither autopsy revealed viral inclusions, mucus plugging in airways, eosinophils, or myocarditis [419]. Bacterial cultures (aerobic/anaerobic) of the lung tissue grew nontoxigenic E. coli, Candida tropicalis, and Proteus mirabilis from the 42-year-old patient. On complete autopsy of the 77-year-old patient, there was hepatic centrilobular steatosis and a remote cholecystectomy, and in the 42-year-old, hepatic cirrhosis and a remote cholecystectomy were revealed. Coronary artery disease was found in both patients.
Ackermann et al. reported in 2020 that the lungs from patients with COVID-19 showed distinctive vascular features, consisting of severe endothelial injury associated with the presence of intracellular virus and disrupted cell membranes. Histologic analysis of pulmonary vessels in patients with COVID-19 showed widespread thrombosis with microangiopathy. Alveolar capillary microthrombi were nine times as prevalent in patients with COVID-19 as in patients with influenza (P < .001). In lungs from patients with COVID-19, the amount of new vessel growth, predominantly through a mechanism of intussusceptive angiogenesis, was 2.7 times as high as that in the lungs from patients with influenza (P < .001) [429]. In this study, pulmonary autopsy specimens from seven patients who died from respiratory failure caused by SARS-CoV-2 infection were compared with lungs from seven patients who died from pneumonia caused by influenza A virus subtype H1N1, a strain associated with the 1918 and 2009 influenza pandemics [429].
Archer et al. described in 2020 the differences between COVID-19 pneumonia, ARDS, and high-altitude pulmonary edema (HAPE) [430]. Although 80% of people with COVID-19 have a minor, acute respiratory infection, the mortality range was from 2% to 7%. Patients with COVID-19 pneumonia may decompensate because of hypoxemic respiratory failure. Autopsy data show inflammation, DAD, alveolar fluid accumulation, and occasional hyaline membranes, consistent with ARDS [430]. Recently, HAPE physiology was proposed to explain the edema and hypoxemia in COVID-19 pneumonia.
Fried et al. reported four cases (positive SARS-CoV-2 testing) that illustrate a variety of cardiovascular presentations of COVID-19 infection [431]. Case 1: A 64-year-old woman presented with COVID-19 pneumonia, and the differential diagnosis included myopericarditis and cardiac amyloidosis. Case 2: A 38-year-old man with a history of type 2 diabetes mellitus presented with 1 week of cough, pleuritic chest pain, and progressive shortness of breath to an outside hospital. The patient had a bradycardic arrest lasting 6 min. Case 3: A 64-year-old woman with a nonischemic cardiomyopathy, atrial fibrillation, hypertension, and diabetes mellitus presented with a nonproductive cough and shortness of breath for 2 days. On arrival, she was afebrile, with blood pressure 153/120 mm Hg, heart rate 100 bpm, and oxygen saturation 88%. Case 4: A 51-year-old man with a history of heart transplantation in 2007 and renal transplantation in 2010 presented with intermittent fever, dry cough, and shortness of breath for 9 days. He denied any recent travel or sick contacts. His outpatient immunosuppression included tacrolimus 5 mg twice daily, mycophenolate mofetil 250 mg twice daily, and prednisone 5 mg daily [431].
Conclusions
Metabolic profiling of sera from COVID-19 patients reveals altered metabolism of choline, benzoic acid, hippurate, catechol, tryptophan, and the shikimate metabolic pathway compared to healthy subjects. Some of these metabolic alterations have a similar trend in COVID-19 and non-COVID-19 patients. The data on hippurate serum content are consistent with increased odds of having metabolic syndrome in COVID-19 patients. Table 9.1 includes the data on positivity for the virus related to COVID-19 in different zip codes (a zip code is a postal code used by the United States Postal Service), mainly in Miami-Dade, Florida. Analysis of these data shows no direct correlation between population density (density of population per square mile) and COVID-19 virus positivity.
Complete autopsy examinations may indicate that manifestations in different organs of COVID-19 patients are caused by chronic and acute toxicity. This suggestion is consistent with the data on serum metabolic profiling of COVID-19 patients. Some COVID-positive patients died with, but not of COVID-19.
The Effects of Substance Use on Public Perceptions of Rape Crimes
This paper will review the influence substance use has on public perceptions of rape crimes. We will examine an apparent double standard the public has towards survivors and perpetrators who consumed substances prior to the assault, and we will discuss professional implications and suggestions for future research.
Although rape crimes are highly prevalent and a multitude of negative consequences have been documented, these crimes are still highly normalized and excused [15]. The responsibility for the assault often is shifted from the perpetrators to survivors, with the latter being frequently blamed and stigmatized by the public, police and court officers, and health professionals [1,15]. Additionally, the general public frequently excuses perpetrators' actions and grants them leniency in legal proceedings [15,16]. Many factors impact this widespread cultural acceptance; however, substance use has been one of the greatest influences on public perceptions of rape crimes and attitudes towards survivors and perpetrators. The impact of substance use on rape crimes is complex and multi-faceted, and the influence substance use has ranges from stigmatizing to excusatory. In this article, we will provide a systematic review of the impact substance use has on public perceptions of rape crimes and the differential consequences for survivors and perpetrators. Additionally, we will consider future research directions.
Substance Use
The first level of impact of substance use on rape crimes is the high prevalence of substance use among both survivors and perpetrators. Researchers have reported that up to 50% of survivors and more than 75% of perpetrators had consumed alcohol prior to an assault [17,18]. Considering that most rape crimes involve the use of substances, the influence substance use has on public perception is highly relevant. Although the use of substances is highly associated with rape, research is inconsistent on the effects substance consumption has on these crimes and on the public perception of survivors and perpetrators. The influence of alcohol on rape and rape survivors has been thoroughly investigated over the past 30 years; however, empirical studies on the influence of specific illicit drugs on these crimes remain scant.
Investigations on the effects of alcohol revealed that the relationship between alcohol use and rape crimes is complex, and that public perceptions varied greatly when survivors and perpetrators were intoxicated prior to the assault [19]. While survivors generally were attributed with more blame and viewed more negatively if they were intoxicated before the assault [20,21], alcohol use by perpetrators was seen as a potentially exonerating circumstance [20,22]. This double standard held true even in cases when survivors and perpetrators consumed commensurate levels of alcohol. When both survivors and perpetrators were portrayed as equally intoxicated, observers were more likely to see survivors as blameworthy, consider perpetrators less responsible, question the validity of rape allegations altogether, and believe that it would be "unfair" to prosecute perpetrators as criminals [23].
Researchers have consistently supported the notion that survivors who willingly consumed alcohol prior to the assault were viewed less favorably, considered less credible, blamed more, viewed as more willing to have sexual intercourse, held more responsible for the incident, and judged more harshly than women who did not drink before they were raped [1,21,24]. Horvath and Brown [18] concluded that intoxicated survivors were seen as guilty of "contributory negligence," and as such, were considered more responsible for their sexual assault.
Negative beliefs about female survivors who consumed alcohol before the assault also were held by police officers and jurors, as higher levels of intoxication were associated with lower credibility ratings of survivors [23,24]. When survivors were drinking alcohol prior to the assault, police officers were more likely to believe that perpetrators genuinely considered sexual intercourse to be consensual. Likewise, survivors' consumption of alcohol also was more likely to influence police officers' judgments than perpetrators' drinking [24]. Additionally, Wenger and Bornstein [25] identified guilty verdicts were less common when survivors were intoxicated before the assault.
Stormo and colleagues [26] noted that consumption of alcohol mediates participants' perceptions of survivors' behaviors. That is, when individuals believed rape survivors were under the influence of alcohol, all decisions survivors made were deemed to be contributors to the assault. However, when survivors were perceived as sober, the same decisions were seen as less impactful [26]. Further findings also revealed that women who were under the influence of alcohol also were seen as "more appropriate" for sexual assault and were viewed as more interested in having sexual intercourse [24]. Additionally, survivors who were raped while intoxicated were blamed at a greater rate than women who were raped by force [22].
Perpetrators' consumption of alcohol, in most cases, had a positive effect on the public's perception of them. Perpetrators were seen as less responsible and guilty when they were intoxicated prior to the assault [20]. They also were likely to be blamed less when rape survivors were under the influence of alcohol [24,26]. The only exceptions to this trend appear to be situations in which perpetrators were less intoxicated than survivors. If perpetrators seemed to have taken advantage of intoxicated women, they were held more responsible for the assault [27]. Even in cases when perpetrators who were under the moderate influence of alcohol assaulted women who were highly intoxicated, perpetrators were held more liable for the offense [26]. Additionally, perpetrators were seen as more blameworthy if they intentionally gave women large quantities of alcohol without their knowledge [28].
While there is a substantial amount of evidence regarding the impact of alcohol, consumption of illicit substances in rape cases has not been researched thoroughly. Therefore, there remains minimal evidence of how specific drugs influence perceptions of rape crimes, attitudes towards survivors and perpetrators, and attribution of blame. Additionally, researchers who have investigated the effects of illicit psychoactive substances on rape have only focused on marijuana, gamma-hydroxybutyric acid (GHB) and D-lysergic acid diethylamide (LSD). No empirical evidence is currently available regarding the effects of highly consumed illicit substances such as heroin, cocaine and methamphetamine. This lack of investigation is especially concerning since these three substances accounted for more than a quarter of all illicit adult drug use in 2015 (Substance Abuse and Mental Health Services Administration [SAMHSA], 2016).
Girard and Senn [28] conducted one of the most comprehensive studies on the impact of illicit drugs on the public's perceptions in rape cases. The authors found that drugs had a "marginally stronger" effect on observers' perception than alcohol. Girard and Senn [28] suggested that perceptions of legality and stigma attached to illicit drug use played a role in how survivors of rape who consumed drugs were viewed. Voluntary use of drugs, especially by women, was found to have a severe impact. Women who consumed drugs voluntarily were judged harshly, blamed at a higher rate and held more responsible for the assault. The authors concluded that survivors' voluntary use of drugs decreased their "worthiness as a victim," and perpetrators in these cases also were more likely to be excused for their actions [28]. Although results presented by Girard and Senn [28] indicated a clear pattern of blame and responsibility attribution, findings from other studies yielded contradictory evidence.
Stewart and Jacquin [24] found no significant differences in the consequences of consumption of alcohol, marijuana and GHB. The researchers indicated that the type of drug women consumed prior to the assault did not influence the assignment of blame or observers' impressions of survivors. Additionally, Wenger and Bornstein [25] found that survivors who consumed alcohol and LSD were not viewed differently. Survivors who consumed LSD before the assault were not perceived as less credible and were not blamed more than survivors who drank alcohol. Given the evidence that is available currently, it is difficult to conclude with certainty the influence drugs other than alcohol have on perceptions of rape crimes.
Substance use, aside from affecting the public's perceptions, also influences survivors' internal experiences. Survivors who consumed alcohol and other drugs prior to the assault felt more shame, guilt and overall responsibility for the crime [21]. Survivors of these crimes were likely to question whether their experience was an actual rape. These beliefs also were found to influence survivors' willingness to report the assault to the police. Survivors believed that, as a result of their intoxication, they had no "proof" of the crime and were not sure if the offense was serious enough. Female survivors of rape also believed that they would be treated differently by the police and legal system because of their consumption of illicit drugs [21].
Conclusion and Future Directions
Despite the seriousness of rape crimes, and the severity of impact they have on survivors, these crimes are often excused and trivialized in our society. Although many factors contribute to the societal attitudes towards rape, substance use appears to be one of the strongest influencers on public perceptions of these crimes. Extensive research on the impact of alcohol on public attitudes towards rape survivors and perpetrators revealed a troubling double standard that is associated with alcohol consumption prior to the assault. The use of alcohol by women before the assault had a severely negative impact on how they were perceived by the public. Most notably, survivors were attributed with more blame for the rape if they consumed alcohol prior to the assault. Additionally, women who consumed alcohol were viewed as less favorable and credible.
In essence, stigma towards alcohol consumption (and towards persons who drink alcohol) was an active influencer on the public's view and treatment of rape survivors. These prejudicial attitudes could have a tangible and lasting negative impact on survivors. Ullman [29] found that up to 70% of rape survivors experienced negative reactions following the assault. Survivors of rape who experienced these negative reactions, when compared to survivors who did not face them, were more likely to experience psychological distress, maladaptive coping strategies, delayed recovery and strained interpersonal relationships [13,30]. Thus, survivors who consumed alcohol before the assault, in addition to the negative effects of the rape itself, have a higher risk of facing a myriad of negative consequences due to public stigmatization.
In contrast to the attitudes faced by survivors, previous research has revealed that the use of alcohol by perpetrators had a positive impact on public perception of them. For the most part, perpetrators who consumed alcohol prior to the assault were viewed as less responsible and guilty. Additionally, perpetrators were blamed less in instances when survivors were under the influence. These favorable attitudes towards perpetrators could be a significant contributor to the alarmingly low reporting and prosecution rates in rape crimes. The Federal Bureau of Investigation (FBI) [31] reported that both reporting rates of rape crimes and prosecution rates of perpetrators are low. If we consider the high prevalence of alcohol use in rape crimes, and the positive perceptions towards perpetrators who used alcohol before the assault, it is likely that attitudes towards alcohol are highly influential. Additionally, consumption of alcohol prior to the assault has led to questions regarding whether a rape occurred at all in those instances. These attitudes also are likely to contribute to the reporting and prosecution rates and impact perceptions of survivors and perpetrators.
In terms of future research, it is imperative to expand the scope of current findings and examine a wider range of licit and illicit substances. While there is a sizable body of research regarding the impact of alcohol, our understanding of the role other substances play in rape crimes is inadequate. Furthermore, the evidence that is presently available is contradictory. Research studies examining the impact of the most prevalent illicit substances (e.g., heroin, cocaine and methamphetamine) would be particularly important. In addition to the high prevalence rates, the use of these drugs is often highly stigmatized, which could lead to further and more severe negative consequences for the survivors. Additionally, it would be important to examine the impact that substance use post-assault has on public perception. Results from previous studies have indicated that a significant number of rape survivors struggle with substance use after the rape. As a result, it is critical to examine whether the negative attitudes associated with the use of substances prior to the assault also extend to use post-assault.
Maternal death review and outcomes: An assessment in Lagos State, Nigeria
The objective of the study was to investigate the results of Maternal and Perinatal Death Surveillance and Response (MPDSR) conducted in three referral hospitals in Lagos State, Nigeria over a two-year period and to report the outcomes and the lessons learned. MPDSR panels were constituted in the three hospitals and, beginning from January 2015, we conducted monthly MPDSR in the three hospitals using a nationally approved protocol. Data on births and deaths and causes of deaths as identified by the MPDSR panels were collated in the hospitals. The results show that over a 21-month period (January 1, 2015 – September 30, 2016), the maternal mortality ratio (MMR) remained high in the hospitals. Although there was a trend towards an increase in MMR in Lagos Island Maternity Hospital and Gbagada General Hospital, and a trend towards a decline in Ajeromi Hospital, none of these trends was statistically significant. Eclampsia, primary post-partum haemorrhage, obstructed labour and puerperal sepsis were the leading obstetric causes of death. By contrast, delay in arrival at hospital, the lack of antenatal care and patients' refusal to receive recommended treatment were the patient-associated causes of death, while delay in treatment, poor use of treatment protocols, lack of equipment and lack of skills by providers to use available equipment were the identified facility-related causes of death. Failure to address the patient- and facility-related causes of maternal mortality possibly accounted for the persistently high maternal mortality ratio in the hospitals. We conclude that interventions aimed at redressing all causes of maternal deaths identified in the reviews will likely reduce the maternal mortality ratios in the hospitals.
Introduction
The high rate of maternal mortality in Nigeria has been a major cause of public health concern at both national and international levels. In response to recommendations made by experts [1], and in efforts to address this developmental challenge, Nigeria's Federal Ministry of Health (FMOH) in 2013 approved that all maternal health institutions in the country should periodically carry out maternal death review, surveillance and response [2], using the technical guidance document recommended by the WHO [3]. This was updated in 2016 to include Perinatal Death Reviews, while the initiative was retitled Maternal and Perinatal Death Surveillance and Response (MPDSR), to take account of the equally high rates of stillbirth, neonatal and perinatal deaths in the country [4]. Essentially, it is hoped that with regular reviews of maternal and perinatal deaths, and an analysis of the causes of deaths, recommendations could be made that, if addressed, would reduce the high rates of maternal and perinatal mortality in the country. This approach was required to address the current lack of substantive data on the circumstances under which maternal and perinatal deaths occur in Nigeria, data that are needed to design strategies, policies and programmes at a health-systems level to reverse the trend.
Since its promulgation, the Federal Ministry of Health has recommended that all States in the country adopt a uniform protocol for conducting the reviews [5]. The Lagos State Ministry of Health, one of Nigeria's 36 federating States, started implementing the recommendations and protocol in 2014. This included an initial state-wide assessment of maternal mortality rates, which identified unacceptably high rates in the most rural Local Government Areas of the State [6,7]. Thereafter, State officials constituted a Committee to review maternal deaths using the methods and protocols recommended by the Federal Ministry of Health [8]. The Committee was given the mandate to collate accurate data and make recommendations on ways to reduce the rate of maternal deaths in the State.
Our initial systematic analysis of maternal death reviews in Nigeria suggests that substantial information can be obtained through this process to strengthen health systems in efforts to reduce maternal mortality rates [9,10]. Since previous maternal death reviews were not systematically carried out in many parts of the country, we sought to follow up reviews conducted in three public hospitals in Lagos State in a prospective manner in order to report the outcomes and the lessons learned. In particular, as this review added the element of surveillance and policy/programmatic feedback, we sought to determine how this approach might impact maternal mortality prevention. We hypothesized that maternal mortality would decline if corrective measures were taken to rectify the medical and social factors identified in the reports as associated with maternal mortality. The objective of this study, therefore, is to report the results and outcomes of maternal death reviews conducted in the hospitals over the initial two-year period. We believe the results will be useful for improving the delivery of maternal death reviews and surveillance, and eventually provide useful lessons for scaling up the method throughout the country.
Methods
The study was conducted by researchers from the Women's Health and Action Research Centre (WHARC), a leading non-governmental, not-for-profit organization in Nigeria whose mission is to promote the health and social well-being of women through research, documentation and advocacy. In efforts to implement the study, WHARC met with officials of the Lagos State Ministry of Health and its Committee on the Maternal and Child Mortality Reduction Programme (MCMR). The study objectives were explained, which led the officials of the State to give permission for the research team to monitor maternal deaths under the MPDSR conducted in three of its public maternity hospitals. The hospitals were: Ajeromi General Hospital (AGH), Gbagada General Hospital (GGH), and the Lagos Island Maternity Hospital (LIMH) on Lagos Island. All three offer routine antenatal, delivery and postnatal care as well as comprehensive emergency obstetric care to pregnant women, covering very large populations in Lagos State.
Before commencing the project, we sought permission to conduct the study from the Management of the hospitals after a full explanation of the purpose and methods of the project. Thereafter, we conducted workshops in the hospitals to create awareness of the study and to increase understanding of the MPDSR process. We also met individually with members of the MPDSR committees in the three hospitals to explain the objectives of the study and also to develop a shared understanding of the MPDSR process. The workshops emphasized the importance of using confidential approaches, including a no-blame enquiry in the conduct of the reviews, in consonance with the recommendations of the FMOH. The committees gave permission for a member of the research team to be present at the committee meetings as an observer without any rights to make submissions at the meetings. They were assured of confidentiality of any information obtained, and no names of specific patients were presented during the committee meetings and thereafter.
Thereafter, WHARC supported the organization of the meetings of the MPDSR committees in the hospitals, which were held monthly. This report presents the results of the monthly MPDSR committee meetings held in the three hospitals over 21 months, from January 2015 to September 2016. The data were then cross-checked against available data in the Medical Records Departments to ensure accuracy.
A report and an analysis of the methodology used in the committee meetings have been documented elsewhere [10]. In brief, MPDSR was conducted in a structured way and had multidisciplinary representation. We grouped the discursive strategies observed into three overlapping clusters: 'doing' no-name no-blame; fostering participation; and managing personal accountability. Within these clusters, explicit reminders, gentle enquiries and instilling a sense of togetherness were used in conducting a no-name, no-blame enquiry.
As recommended by the FMOH, the committees consisted of the Chairman of the Medical Advisory Committee/Director of Clinical Services as Chairperson, while the Head of the Department of Obstetrics and Gynaecology served as Secretary. Other members were the Heads of the Departments of Nursing/Midwifery, Paediatrics, Pathology, Labour Ward, Neonatal Care, Anaesthesia and Haematology.
Data collection and analysis
We collected data on the number of deliveries during the month, the number of maternal deaths, and the proportion of women who died who had received antenatal care in the same hospital (booked cases) versus those who had not (unbooked cases). We then calculated the maternal mortality ratio (MMR) (number of maternal deaths per 100,000 live births) by month per hospital, and calculated the overall MMR for the total cohort of women. We compared ratios among the hospitals, developed monthly trends and compared the monthly differences using the chi-square (χ²) test for trends. We then adjusted the analysis for confounders such as an increase or decrease in the number of deliveries, and the proportion of complicated deliveries, in relation to the decrease or increase in MMR.
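As a rough illustration of these calculations (our sketch with made-up counts, not the study's analysis code), the monthly MMR and a chi-square test for trend in proportions can be computed as follows:

```python
import numpy as np
from scipy.stats import chi2

deaths      = np.array([4, 6, 5, 7, 6, 8])               # hypothetical monthly maternal deaths
live_births = np.array([410, 395, 430, 405, 420, 415])   # hypothetical monthly live births
months      = np.arange(1, len(deaths) + 1)              # time scores for the trend test

mmr = deaths / live_births * 100_000                     # MMR per 100,000 live births
print("monthly MMR:", np.round(mmr))

# Cochran-Armitage-style chi-square test for trend in proportions (1 df)
N, D = live_births.sum(), deaths.sum()
p_bar = D / N
num = (months * (deaths - live_births * p_bar)).sum() ** 2
den = p_bar * (1 - p_bar) * ((live_births * months**2).sum()
                             - (live_births * months).sum() ** 2 / N)
chi2_trend = num / den
p_value = chi2.sf(chi2_trend, df=1)
print(f"chi2 for trend = {chi2_trend:.2f}, p = {p_value:.3f}")
```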
Following the reviews in each hospital, we identified the medical causes as well as the social (background) causes associated with the deaths. Each committee made recommendations to the heads of the institutions and also to the State government for rectifying the identified causes of deaths. We compiled these recommendations for each hospital, and analyzed qualitatively for form, theme and content. The results were presented qualitatively for each hospital and overall, to identify the nature of recommendations made for averting maternal mortality in the hospitals.
Ethical approval for the study was obtained from the Lagos State Ethical Review Board, and also consent was obtained from the Lagos State Ministry of Health and the individual health facilities. Additionally, all data were anonymized before access to the researchers and the MPDSR committees.
Maternal Mortality Ratios (MMR)
The results presented in Table 1 show that there were a total of 164 maternal deaths in the three hospitals during the period, and a total of 10,237 live births, giving an overall MMR of 1,602/100,000 live births. As shown, the LIMH had the highest MMR, while GH had the lowest.
The results in Table 2 show that "unbooked" women (women who had not received antenatal care in the hospital, but who presented as emergencies with complicated labour) accounted for a high proportion of the maternal deaths in the three hospitals. Nearly 90% of the deaths in both LIMH and AGH occurred in "unbooked" women, while about 79.0% occurred at GGH. By contrast, smaller proportions of deaths were reported in "booked" women, that is, women who reported for antenatal care and delivered in the same hospitals.
Trends in Maternal Mortality Ratio
The monthly trends in MMR over the 21-month period in the 3 hospitals and overall are plotted in Fig 1. There was wide variation in the trends in the hospitals, but as shown in Fig 2, we plotted a best-fit linear trend in each hospital and overall. The results show a trend towards an increase in MMR in LIMH and GGH, and a trend towards a decrease in MMR in AGH. Overall, there was a slight trend towards an increase in MMR in the three hospitals. However, the chi-square test for trends, at the 95% significance level, was not statistically significant in any of the hospitals.
Obstetric causes of maternal deaths
The medical and obstetric causes of deaths as reported in the MPDSR conducted in the three hospitals are presented in Table 3. As shown, eclampsia, primary postpartum haemorrhage, prolonged obstructed labour, maternal sepsis and antepartum haemorrhage were the leading obstetric causes of maternal mortality in the hospitals. Medical complications (cardiac failure, sickle cell disease and malaria) were unusually common as causes of maternal deaths, especially at the LIMH.
Associated causes of maternal deaths
The MPDSR in the three hospitals were designed to establish the associated causes of maternal mortality, especially the background social factors associated with maternal deaths in the hospitals. The results of the associated causes of death elicited from the MPDSR are shown in Table 4. Delay in presentation at the hospital (Type 1 delay) [11] ranked high in the three hospitals, as epitomized by findings such as "delayed presentation in health facility", "lack of antenatal care services", "patient's refusal of prompt management at admission" and "no evidence of antenatal services". These were patient-related factors, but there were also reports that some patients received "inadequate/wrong care from traditional birth attendants" or, as reported from GGH and the LIMH, that some patients were mismanaged in the private hospitals from which they were referred. Many reports of poor or delayed management of cases (Type 3 delay) in the three hospitals also featured in the MPDSR reports. As shown in Table 4, these included "non-availability of blood" in the three hospitals, "non-availability of intensive care unit", "poor compliance to treatment protocols", "delay in patient management", "lack of essential equipment", and "lack of skills by staff to use available equipment, e.g. anti-shock garments".
Recommendations by MPDSR Committees
The recommendations made for rectifying the associated causes of maternal mortality in the three hospitals are presented in Table 5. The recommendations were largely derived from the deficits identified and included community health education, training of staff to better use treatment protocols, provision/repair of equipment, and provision of emergency obstetric care and intensive care, among several others. However, there was no clear evidence that these recommendations, which went to policy makers and hospital administrators, were fully implemented by any of the hospitals during the period.
Discussion
The study was designed to investigate the results of maternal death reviews in three hospitals in Lagos State, a leading Nigerian State that commenced the nationally recommended guidelines for MPDSR. This assessment also enabled us to collate data and statistics on MMR accurately in a prospective manner. To the best of our knowledge, this is one of the few prospective studies ever carried out in the country to obtain accurate data on MMR. The results showed very high MMR in the three hospitals, consistent with reports from various hospitals in the country. The LIMH had the highest ratio of 2114.5/100,000 live births, while the GH had the lowest ratio of 986.7/100,000 live births. Overall, the results show figures that are much higher than recently reported national community estimates of maternal deaths [12,13,14], which is probably due to the fact that the three hospitals are referral hospitals that receive women as obstetric emergencies from lower levels of health care provision. However, it may also be due to a true increase in MMR in the hospitals. The objective of the national guideline for MPDSR is to identify the medical and associated causes of maternal mortality in maternity hospitals in a "no-blame" manner, and to correct the deficiencies that lead to death in order to prevent future deaths in the hospitals. Thus, we hypothesized that if the corrective measures were taken, MMR would systematically decline in the three hospitals over the months of implementation of the review. However, contrary to our expectation, MMR did not decline in any of the hospitals. Although there was a trend towards a slight decline in MMR at Ajeromi Hospital, there were also trends towards some increases in the LIMH and the GGH. Although none of these trends was statistically significant, the results suggest a lack of effectiveness of the MPDSR process in preventing maternal mortality in the hospitals. We believe this to be due to the fact that, although the medical and associated causes of maternal mortality were identified in the hospitals, there was no evidence that the reported deficits were corrected, at least in the short term. Future efforts to correct these deficiencies would hopefully lead to a true and sustainable decline in MMR in the hospitals.
The results show that eclampsia, primary post-partum haemorrhage, prolonged obstructed labour and sepsis were the leading obstetric causes of maternal mortality in the three hospitals. These are consistent with reports from various hospitals in the country [15,16,17]. Among the associated factors identified by the MPDSR as responsible for deaths in women with these complications, patients' factors topped the list. These included the non-use of antenatal care, use of alternative providers such as traditional birth attendants, late presentation at hospital, and refusal by patients to receive the recommended methods of treatment. Indeed, among the women who died from these complications, more than 80% were women who had not received antenatal care, but who presented late at hospital when complications occurred. This is consistent with Type 1 of the three-delay model previously proposed by Thaddeus and Maine [11] and is similar in pattern to previous reviews of maternal mortality from other parts of Nigeria, with late presentation at hospital being a major determinant factor [18,19,20]. Previous studies have also identified the inability to pay for services, illiteracy, cultural beliefs [21,22], and poor satisfaction with services in health facilities [23] as reasons that women fail to attend early antenatal and delivery care in Nigeria. Clearly, there is a need to focus on improving the quality of delivery of maternal health services in both the public and private sectors, as well as on patient education, in efforts to promote the use of evidence-based pregnancy care by women. The use of safety nets such as conditional cash transfers or the elimination of user fees are examples of successful interventions that have been used to incentivize women to seek early pregnancy and emergency obstetric care in resource-poor countries [24,25]. Such measures, which have been successfully used in parts of Nigeria [26], would also be useful in Lagos State to reduce the incidence of such delays. As shown in this report, some women with complications had been delayed in private hospitals before a decision was taken by care-givers in those health facilities to refer them to higher levels of care. An additional measure therefore would be to re-train private providers of maternity care to update their skills and knowledge, so as to ensure that they promptly refer women in difficult labour to higher levels of health care.
The study also identified, from the MPDSR reports, various institutional factors (Type 3 delays) that account for deaths from obstetric complications. These included delay in the management of patients, non-availability of blood and blood products for transfusing haemorrhaging women, lack of equipment, inadequate skills of providers to use available equipment, non-adherence to treatment protocols by providers, and actual cases of mismanagement of patients. It is noteworthy that the reviews were able to identify such institutional factors as being responsible for, or associated with, maternal mortality. This may be due to the "no-blame" method and confidential approach adopted, and the careful systematic process used for the review, as previously documented [10]. Efforts to correct these institutional deficits should include the training and re-training of health providers, the development of institutional policies and guidelines for the provision of maternal health care, provision of relevant equipment and training of staff to use it, and regular provision of blood and blood products. In particular, treatment protocols and guidelines should target the leading causes of maternal mortality, with emphasis placed on building staff competencies and capacity to handle them effectively. Additionally, equipment provision and infrastructural improvements have to be given top priority by the government and hospital management authorities to correct the related deficiencies that were identified in this report.
To the best of our knowledge, this is the first description of the results of the nationally adopted MPDSR process from any part of Nigeria. The strength of the study lies in the prospective method of data collection over 21 months, and the involvement of three major hospitals in Lagos State, which allowed for institutional comparison of the data. Furthermore, the fact that the data were collected by individuals "external" to the review process engendered greater objectivity and accuracy of the results.
However, the results are limited by the fact that although various associated factors were identified as causes of maternal mortality, no substantive efforts were made to correct the deficiencies in any of the hospitals. Thus, our results could be identified as baseline results which would change significantly if efforts are made to address the patients' related and institutional factors that lead to maternal mortality in the hospital. The next step is to develop an advocacy platform [27,28] of action to build political will for ensuring that the recommendations made by the MPDSR Committees are implemented by the hospital management authorities and the supervising government agencies.
Conclusion
The results of maternal death reviews in three hospitals in Lagos State of Nigeria show persistently high MMRs, and have identified the obstetric and social factors that predispose to maternal deaths in the hospitals. These factors include obstetric complications, but are heightened by the inadequate demand for services, poor delivery of services and inadequate equipment to deliver quality care in the health referral facilities. Unfortunately, there has been little evidence to show that these factors were addressed during the period of the study. We believe that efforts devoted to addressing these factors will lead to a significant reduction in maternal mortality ratios in the hospitals. In particular, strong political will by the management of the hospitals and the supervising government agencies is a prerequisite to address the human and infrastructural deficits that predispose to maternal mortality in maternity hospitals in Lagos State. Such improvements when demonstrated can be scaled up to other Nigerian States to achieve sustainable reduction in maternal mortality throughout the country.
Exploiting Kronecker structure in exponential integrators: fast approximation of the action of $\varphi$-functions of matrices via quadrature
In this article, we propose an algorithm for approximating the action of $\varphi$-functions of matrices against vectors, which is a key operation in exponential time integrators. In particular, we consider matrices with Kronecker sum structure, which arise from problems admitting a tensor product representation. The method is based on quadrature approximations of the integral form of the $\varphi$-functions combined with a scaling and modified squaring method. Owing to the Kronecker sum representation, only actions of 1D matrix exponentials are needed at each quadrature node and assembly of the full matrix can be avoided. Additionally, we derive \emph{a priori} bounds for the quadrature error, which show that, as expected by classical theory, the rate of convergence of our method is supergeometric. Guided by our analysis, we construct a fast and robust method for estimating the optimal scaling factor and number of quadrature nodes that minimizes the total cost for a prescribed error tolerance. We investigate the performance of our algorithm by solving several linear and semilinear time-dependent problems in 2D and 3D. The results show that our method is accurate and orders of magnitude faster than the current state-of-the-art.
Introduction
Exponential time integrators [1,2,3,4] are a class of methods for solving stiff semilinear systems of Ordinary Differential Equations (ODEs) of the form $u'(t) + Au(t) = f(t, u(t))$, where $A$ is a square matrix and $f$ is a nonlinear function. Classical time integration schemes have exponential-scheme counterparts, including exponential Runge-Kutta methods [5,6], exponential multistep methods [7] and exponential splitting schemes [8], among many others.
Exponential time-stepping methods incorporate the exact propagator of the homogeneous equation so that linear stability is satisfied by construction. However, such an advantage comes at the cost of having to compute ϕ-functions of the matrix A. These ϕ-functions are defined in terms of integrals of the exponential of A times a polynomial and appear in all exponential integrators.
The first strategies employed in exponential integrators are based on approximating the whole ϕ-function of A [9,10,11] (a dense matrix in general), and are thus expensive in terms of both CPU time and memory. In the last decade, new research focused on instead computing the action of ϕ-functions against a vector [12], which is considerably more efficient whenever the matrix A is sparse. Current approaches include rational Padé approximations [13,14], Krylov subspace methods [15,16], and truncated Taylor series expansion [17]. These new developments led to the application of exponential integrators in a wide range of applications [18,19,20].
In most applications of exponential integrators, the matrix $A$ and the resulting system of ODEs come from the semidiscretization in space of transient Partial Differential Equations (PDEs). In the specific case in which the spatial domain is a box and the coefficients of the PDE are separable, spatial discretizations such as Finite Differences (FD) or Finite Elements (FE) on tensor product grids or Isogeometric Analysis (IGA) [21] typically lead to a matrix $A$ with Kronecker sum structure, i.e. $A = A_x \oplus A_y = A_x \otimes I_y + I_x \otimes A_y$ (in 2D). Here $A_{x,y}$ are 1D matrices arising from the spatial discretization of the linear operator in a single spatial direction.
It is well known that the exponential of a matrix with Kronecker sum structure is equal to the Kronecker product of the exponentials of the one-dimensional matrices, and this property has been exploited in the literature to design efficient routines for computing matrix exponentials. For instance, in [22] the authors propose an efficient CPU and GPU implementation of the exponential of Kronecker sums of matrices for problems in arbitrary dimensions. Their method makes the solution of transient linear problems with zero source with exponential integrators extremely efficient, but it does not extend to more general semilinear problems. In fact, ϕ-functions of Kronecker sums do not simply separate into the Kronecker product of ϕ-functions of 1D matrices, making the tensor structure of the problem difficult to exploit in exponential integrators. Authors in [23] recently proposed an algorithm that circumvents this problem by building on recurrence relations between ϕ-functions to recast the evaluation problem in terms of the action of 1D ϕ-matrix-functions. However, this algorithm does not generalize easily to the 3D case and is numerically unstable for high-order exponential integrators.
In this paper, we make the following new contributions: • We introduce a new method based on approximating the integral definition of the $\varphi$-functions via both fixed-point and adaptive quadrature (Gauss-Legendre and Clenshaw-Curtis, respectively). Our algorithm inherits the numerical stability of quadrature rules, and computations at each node are trivially parallelizable and only involve standard matrix exponentials. For this reason, only 1D matrix exponential actions are needed and no assembly of the full matrix $A$ is required.
• We provide an a priori error analysis for our algorithm that builds on classical and modern theory on scalar quadrature methods [24,25,26], and shows that our method converges at a supergeometric rate with respect to the number of nodes. Since our estimate grows exponentially with $\|A\|_\infty$, we combine our method with the scaling and modified squaring strategy from [27] to reduce the size of $\|A\|_\infty$.
• We design an algorithm for estimating the optimal scaling factor and number of quadrature nodes of the fixed-point quadrature strategy that minimizes the total cost while satisfying a given error tolerance. This algorithm is based on our theory and essentially only involves scalar and polynomial rootfinding operations which nowadays are robust and efficient numerical procedures. Our adaptive algorithm employs the same estimation routine for the optimal scaling factor, but then adaptively determines the number of nodes required.
We test the performance of our method on several linear and semilinear time-dependent problems and we conclude that, for matrices with Kronecker sum structure, our algorithm is accurate and orders of magnitude faster than the general-purpose state-of-the-art routine from [17].
The article is organized as follows: Section 2 introduces the background needed, including the definition and properties of ϕ-functions and matrices with Kronecker sum structure, and exponential integrators. In Section 3 we present and analyze our algorithm. We derive an a priori quadrature error bound and present a routine for estimating the optimal scaling factor and number of quadrature nodes. In Section 4 we study the performance of our method for different 2D and 3D time-dependent linear and semilinear problems. Finally, we summarize our findings in Section 5 and discuss suggestions for future work on the topic.
Background
We first recall the definition of ϕ-functions, exponential Runge-Kutta methods and matrices with Kronecker sum structure.
ϕ-functions and exponential time integrators
In this paper we consider the following semilinear system of ODEs as model problem:
$$u'(t) + Au(t) = f(t, u(t)), \qquad u(0) = u_0, \qquad (1)$$
where $A$ is a square matrix and $f$ is a nonlinear term. Exponential integrators are constructed from different approximations of the integral form of the solution of system (1), the variation-of-constants formula
$$u(t) = e^{-(t - t_0)A} u(t_0) + \int_{t_0}^{t} e^{-(t - s)A} f(s, u(s))\, \mathrm{d}s. \qquad (2)$$
This representation includes the exact propagator of the homogeneous equation (i.e. for $f = 0$), and different approximations of the source term in (2) lead to different methods.
The form of expression (2) leads to all exponential integrators being built in terms of the so-called ϕ−functions.
After defining $\varphi_0(A) := e^A$, these are
$$\varphi_p(A) = \int_0^1 e^{(1-s)A} \frac{s^{p-1}}{(p-1)!}\, \mathrm{d}s, \qquad p \geq 1. \qquad (3)$$
The $\varphi$-functions satisfy the following recurrence relation:
$$\varphi_{p+1}(z) = \frac{\varphi_p(z) - 1/p!}{z}, \qquad p \geq 0. \qquad (4)$$
For the time discretization of (2) with exponential integrators, we consider a uniform partition of the time interval with time step size $\tau = t_{k+1} - t_k$, $\forall k = 0, \dots, m-1$. The simplest first-order exponential Runge-Kutta method is the exponential Euler method,
$$u_{k+1} = e^{-\tau A} u_k + \tau \varphi_1(-\tau A) f(t_k, u_k),$$
which involves only $\varphi_1$. This method is obtained by approximating the source term in (2) by the constant value $f(t_k, u_k)$ and employing recurrence formula (4). More generally, $s$-stage exponential Runge-Kutta methods are given by
$$u_{k+1} = e^{-\tau A} u_k + \tau \sum_{i=1}^{s} b_i(-\tau A)\, f(t_k + c_i \tau, U_i), \qquad U_i = e^{-c_i \tau A} u_k + \tau \sum_{j=1}^{i-1} a_{ij}(-\tau A)\, f(t_k + c_j \tau, U_j). \qquad (5)$$
Here, the coefficients $b_i$ and $a_{ij}$ are expressed in terms of linear combinations of $\varphi$-functions of the matrix $A$. As for traditional Runge-Kutta methods, the coefficients defining the methods (5) can be expressed via Butcher tableaus.
We refer to [1] for an extensive review of existing methods and their properties.
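To make the exponential Euler step above concrete, here is a minimal Python sketch (our illustration, not the paper's implementation) for a small dense $A$; the action $\varphi_1(M)b$ is obtained via the classical augmented-matrix trick, in which the last column of the exponential of the block matrix $[[M, b], [0, 0]]$ contains $\varphi_1(M)b$:

```python
import numpy as np
from scipy.linalg import expm

def phi1_action(M, b):
    """Return phi_1(M) @ b via expm of an augmented (n+1) x (n+1) matrix."""
    n = M.shape[0]
    W = np.zeros((n + 1, n + 1))
    W[:n, :n] = M
    W[:n, n] = b
    return expm(W)[:n, n]          # top part of the last column

def exponential_euler_step(A, u, t, tau, f):
    """One step u_{k+1} = e^{-tau A} u_k + tau phi_1(-tau A) f(t_k, u_k)."""
    return expm(-tau * A) @ u + tau * phi1_action(-tau * A, f(t, u))

# Hypothetical example: 1D finite-difference heat equation with a constant source.
n = 50
A = (n + 1) ** 2 * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
u = np.sin(np.pi * np.linspace(0, 1, n + 2)[1:-1])
u = exponential_euler_step(A, u, 0.0, 1e-2, lambda t, v: np.ones_like(v))
```

For large structured matrices this dense approach is of course not viable, which motivates the Kronecker-based techniques discussed next.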
Kronecker sum structure
System (1) often arises from a semi-discretization in space of transient Partial Differential Equations (PDEs).
Here, we focus on the specific case in which the matrix $A$ has Kronecker sum structure, i.e.
$$A = A_x \oplus A_y \oplus A_z = A_x \otimes I_y \otimes I_z + I_x \otimes A_y \otimes I_z + I_x \otimes I_y \otimes A_z \qquad (6)$$
in 3D (and analogously $A = A_x \oplus A_y$ in 2D). Here, $\oplus$ denotes the Kronecker sum and $\otimes$ denotes the Kronecker product, $I_{x,y,z}$ are one-dimensional identity matrices and $A_{x,y,z}$ are the matrices coming from the semidiscretization in each space direction.
The Kronecker sum structure (6) is obtained whenever the PDE has a tensor-product structure: the domain is a box, the PDE coefficients are separable, and the PDE is semidiscretized in space employing Finite Differences (FD), Finite Elements (FE) on tensor product grids, or Isogeometric Analysis (IGA) (see [21] for details).
It is well known [28] that the exponential of a matrix with Kronecker sum structure (6) satisfies
$$e^{A_x \oplus A_y} = e^{A_x} \otimes e^{A_y}. \qquad (7)$$
A crucial ingredient of the algorithm we propose in the next section is a routine to compute matrix-vector products with $e^A$ efficiently. For this purpose, we exploit the following relations (in 2D and 3D, respectively):
$$e^A b = \mathrm{vec}\!\left(e^{A_y} B \left(e^{A_x}\right)^{\!\top}\right), \qquad e^A v = \mathrm{vec}\!\left(V \times_1 e^{A_x} \times_2 e^{A_y} \times_3 e^{A_z}\right), \qquad (8)$$
where $b = \mathrm{vec}(B)$, $v = \mathrm{vec}(V)$ and $\mathrm{vec}(\cdot)$ is the vectorization operator (the mode indices in the 3D case follow the ordering convention of (6)). In the 2D case in (8), $B$ is a matrix, while in 3D, $V$ is a tensor of order 3. Here we indicate with $\times_d$, $d \in \{1, 2, 3\}$, the Tucker operator.
Performing matrix-vector products with the exponential as in (8) is extremely efficient as it only involves dense linear algebra operations with 1D exponential matrices and can be accelerated on GPUs if needed [22]. We refer to [29] for a detailed presentation on multidimensional tensor algebra and its efficient implementation.
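The following short Python check (our illustration, assuming column-major vectorization in 2D) verifies property (7) and the structured action in (8) against a dense reference:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
nx, ny = 6, 5
Ax = rng.standard_normal((nx, nx))
Ay = rng.standard_normal((ny, ny))
A = np.kron(Ax, np.eye(ny)) + np.kron(np.eye(nx), Ay)   # A = Ax (+) Ay

B = rng.standard_normal((ny, nx))
b = B.flatten(order="F")                                # b = vec(B), column-major

dense = expm(A) @ b                                     # assembles the full matrix A
structured = (expm(Ay) @ B @ expm(Ax).T).flatten(order="F")  # 1D exponentials only

assert np.allclose(dense, structured)
assert np.allclose(expm(A), np.kron(expm(Ax), expm(Ay)))     # property (7)
```

The structured evaluation touches only the small factors $e^{A_x}$ and $e^{A_y}$, which is what makes the approach attractive for fine grids.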
Remark 2.1. In this article, we only consider 2D and 3D time-dependent PDEs. However, the second equivalence in (8) holds for matrices with Kronecker sum structure in arbitrary dimensions $d$. While the extension of our algorithm to dimensions higher than 3 is straightforward, we work in 2D and 3D in this paper for simplicity.
New Algorithm
In this section we introduce our algorithm for approximating the action of ϕ-functions of matrices. Our method is based on numerical quadrature (both adaptive and fixed-point) combined with a scaling and modified squaring approach. In what follows we also provide an a priori error estimate for the quadrature error and we design a robust and efficient strategy for computing the optimal scaling factor and number of quadrature nodes that minimizes costs for a given error tolerance.
Approximation of ϕ-functions via quadrature
The relations (8) lead to an efficient algorithm for computing the action of the matrix exponential. However, (8) is a direct consequence of property (7), which does not hold for the $\varphi$-functions. Our objective is to obtain an efficient algorithm for evaluating actions of $\varphi_p(A)$ for $p > 0$ that can still exploit the Kronecker structure in $A$ without performing any full matrix assembly. For this purpose, we rely on equation (3) to express the action of any $\varphi$-function of a matrix against a vector $b$ as
$$\varphi_p(A) b = \int_0^1 e^{(1-s)A} \frac{s^{p-1}}{(p-1)!}\, b\, \mathrm{d}s. \qquad (9)$$
Since the above is just a one-dimensional integral of an analytic function over a bounded interval, we can approximate it via any suitable $(n+1)$-point 1D quadrature rule:
$$\varphi_p(A) b \approx \sum_{i=1}^{n+1} w_i \frac{x_i^{p-1}}{(p-1)!}\, e^{(1-x_i)A} b, \qquad (10)$$
where $\{x_i\}$ and $\{w_i\}$ are the quadrature nodes and weights, and the action of the matrix exponential at the nodes can be computed efficiently via (8). While any geometrically convergent quadrature scheme is suitable for this purpose, we mainly employ Gauss-Legendre or Clenshaw-Curtis quadrature, as they both come with sharp error bounds [26] that we can leverage in our analysis. While Gaussian quadrature is more accurate, Clenshaw-Curtis is a nested rule and can therefore be used adaptively, with live error estimation and automatic selection of the number of nodes required to achieve a prescribed tolerance. In Section 4 we study and compare the performance of both approaches in numerical experiments.
Employing a quadrature rule has three advantages: 1) it converges supergeometrically fast (see the next subsection), so only a few matrix-vector products with the exponential are needed; 2) the computations at different quadrature nodes are independent and thus trivially parallelizable; and 3) only standard matrix exponential actions appear, which by (8) reduce to 1D matrix exponentials, so no assembly of the full matrix $A$ is required. We present our method in Algorithm 1 (fixed-point quadrature version) and in Algorithm 2 (adaptive version).

Algorithm 1. Fixed-point quadrature algorithm for computing $\varphi_j(A)b$ for $j = 1, \dots, p$.
Input: An integer $p$, a vector $b$, the matrices $A_{x,y,z}$, and a quadrature rule with nodes $x_i$ and weights $w_i$.
• Compute and store the vectors $v_i = e^{(1-x_i)A} b$ for $i = 1, \dots, n + 1$ using (8).
• Output the vectors $y_j = \sum_{i=1}^{n+1} w_i\, x_i^{j-1}/(j-1)!\; v_i$ for $j = 1, \dots, p$.

Algorithm 2. Adaptive quadrature algorithm for computing $\varphi_j(A)b$ for $j = 1, \dots, p$.
Input: An integer $p$, a vector $b$, the matrices $A_{x,y,z}$, and a relative error tolerance $\varepsilon$.
1) Start with a coarse nested rule and apply Algorithm 1 to obtain the approximations $\tilde{y}_j$.
2) Compute the vectors $y_j$ after doubling the number of nodes, reusing the stored evaluations $v_i$ at the old nodes (the rule is nested).
3) Update the error: err $= \max_j \|y_j - \tilde{y}_j\|_\infty / \|y_j\|_\infty$.
4) If err $> \varepsilon$, set $\tilde{y}_j = y_j$ and return to step 2; otherwise output $y_j$ for $j = 1, \dots, p$.
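As an illustration (our sketch, not the reference implementation from the paper's repository), Algorithm 1 in 2D can be written in a few lines of Python; for clarity we recompute the 1D exponentials at every node rather than optimizing:

```python
import math
import numpy as np
from scipy.linalg import expm

def phi_actions_2d(p, Ax, Ay, b, n):
    """Approximate phi_j(A) b, j = 1..p, for A = Ax (+) Ay with n+1 Gauss nodes."""
    ny, nx = Ay.shape[0], Ax.shape[0]
    B = b.reshape((ny, nx), order="F")             # b = vec(B), column-major
    t, w = np.polynomial.legendre.leggauss(n + 1)  # nodes/weights on [-1, 1]
    x, w = 0.5 * (t + 1.0), 0.5 * w                # map to [0, 1]
    # v_i = e^{(1 - x_i) A} b, computed with 1D exponentials only, as in (8)
    V = [(expm((1 - xi) * Ay) @ B @ expm((1 - xi) * Ax).T).flatten(order="F")
         for xi in x]
    Y = np.empty((b.size, p))
    for j in range(1, p + 1):
        coeff = w * x ** (j - 1) / math.factorial(j - 1)
        Y[:, j - 1] = sum(c * v for c, v in zip(coeff, V))
    return Y
```

Note that all $p$ actions share the same node evaluations $v_i$, so the dominant cost is essentially independent of $p$.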
Remark 3.1. Linear combinations of the actions of different $\varphi$-functions against different vectors can also be computed efficiently as
$$\sum_{j=1}^{p} \varphi_j(A) b_j \approx \sum_{i=1}^{n+1} w_i\, e^{(1-x_i)A} \left( \sum_{j=1}^{p} \frac{x_i^{j-1}}{(j-1)!}\, b_j \right), \qquad (11)$$
where $b_1, \dots, b_p$ are arbitrary vectors. However, we were unable to make this strategy compatible with the generalized scaling and squaring technique of Section 3.3.
Error bounds
We now focus on Algorithm 1 only for simplicity, and derive a bound for the quadrature error. For this purpose, we need the following result by Trefethen [26].

Theorem 3.1. Let $f$ be analytic in the Bernstein ellipse $E_\rho$ with foci $\pm 1$ and parameter $\rho > 1$, with $|f(z)| \leq M$ for $z \in E_\rho$. Then $(n+1)$-point Clenshaw-Curtis quadrature with $n \geq 4$ satisfies the bound (12), whose dominant factor is $M\rho^{1-n}/(\rho^2 - 1)$. Here $\hat{I}$ denotes the approximate integral. Furthermore, $(n+1)$-point Gaussian quadrature with $n \geq 2$ satisfies the analogous bound (13), with a factor decaying like $\rho^{-2n}$. The factor $\rho^{1-n}$ in (12) can be improved to $\rho^{-n}$ if $n$ is even.
We now employ Theorem 3.1 to obtain an error bound for Algorithm 1. The result is stated in the following theorem and corollary.
Theorem 3.2. For any integer $p \geq 0$, let $I_{p+1} = \varphi_{p+1}(A)b$, and let $\hat{I}_{p+1}$ be the approximation of $I_{p+1}$ computed via Algorithm 1 with a total of $n + 1$ quadrature nodes. Then, provided that $n \geq 4$ for Clenshaw-Curtis quadrature and $n \geq 2$ for Gaussian quadrature, the error $\|I_{p+1} - \hat{I}_{p+1}\|_\infty$ satisfies the bounds (14) (Clenshaw-Curtis) and (15) (Gaussian), where the constant $M(\bar\rho)$ is given by (16) and $\bar\rho > 1$ is a real root of the monic quartic polynomial equation (17), whose coefficients are given by (18). If $n$ is even, we can replace $n$ with $n + 1$ for Clenshaw-Curtis quadrature.
Corollary 3.3. Under the same assumptions as Theorem 3.2, if we further assume that $n \geq \max\{4, \lceil \frac{1+\sqrt{2}}{2}\|A\|_\infty + p \rceil\}$, then $(n+1)$-point Clenshaw-Curtis quadrature with $n$ even yields an error of the form (19); if $n$ is odd, the bound still holds with $n$ replaced by $n - 1$. Provided that $n \geq \max\{2, \lceil \frac{1+\sqrt{2}}{4}\|A\|_\infty + \frac{p}{2} \rceil\}$, $(n+1)$-point Gaussian quadrature yields the analogous bound (20).

Proof. We prove both Theorem 3.2 and Corollary 3.3 for Gaussian quadrature only, since the result for Clenshaw-Curtis quadrature follows the same argument. In order to apply Theorem 3.1, the first step is to map the integral (9) from $[0,1]$ to the reference interval $[-1,1]$. The second step is to provide an upper bound for the modulus of the integrand over $E_\rho$. Since the integrand is vector-valued, we work with the infinity norm to provide an upper bound for all its entries, and we bound $\|G(s)b\|_\infty$ over $E_\rho$, where $G$ denotes the mapped integrand. Since the maximum of $|1 \pm s|$ over $E_\rho$ is attained at the rightmost or leftmost points of the ellipse, this yields the bound $M(\rho)$ in (16). Applying Theorem 3.1 to each entry of the integrand, we obtain for Gaussian quadrature a bound of the form (24). Minimizing (24) with respect to $\rho$ for fixed $p$, differentiating the expression in the large brackets with respect to $\rho$ and setting the derivative to zero, yields the polynomial equation $P(\rho) = \rho^4 + a_3\rho^3 + a_2\rho^2 + a_1\rho + 1 = 0$ with the coefficients given in (18). Writing $\rho = 1 + x$ for $x \in \mathbb{C}$ and applying Descartes' rule of signs to the shifted polynomial $Q(x)$, it can be verified that the coefficients of $Q(x)$ change sign either once or three times depending on the values of $a_2$ and $a_3$, ensuring that there is always at least one positive real root of $Q(x)$. Hence, there is at least one root $\bar\rho$ of $P(\rho)$ that is real and satisfies $\bar\rho > 1$ for all $n \geq 2$, $\|A\|_\infty > 0$ and $p \geq 0$.

The same argument also holds for Clenshaw-Curtis quadrature, and the thesis of Theorem 3.2 is thus proved. To derive the bounds in Corollary 3.3 we start from equation (24), which we simplify by noting that $g(\rho) \leq \rho$. After minimizing the result with respect to $\rho$, we obtain (27), which is (20); in the last step we used the fact that the minimum is attained at $\rho = \max(1 + \sqrt{2}, \bar\rho)$. Note that for the expression on the right in (27) to be decreasing in $n$ we need $n > \frac{1}{4}\|A\|_\infty + \frac{p}{2}$, for which $\bar\rho > 1$. Taking $n \geq \frac{1+\sqrt{2}}{4}\|A\|_\infty + \frac{p}{2}$ ensures that $\bar\rho \geq 1 + \sqrt{2}$ and that the bound (27) holds. For Clenshaw-Curtis quadrature the same simplifications for $\bar\rho \geq 1 + \sqrt{2}$ yield the similar result (28) for even $n$, where $n$ must be replaced with $n - 1$ if $n$ is odd. The bound (28) is (19). In this case, for the right-hand side expression to be decreasing in $n$ we now need $n > \frac{1}{2}\|A\|_\infty + p$, for which $\bar\rho > 1$. Taking $n \geq \frac{1+\sqrt{2}}{2}\|A\|_\infty + p$ ensures that $\bar\rho \geq 1 + \sqrt{2}$ and that the bound (28) holds.
Remark 3.2. Theorem 3.2 and Corollary 3.3 establish that the rate of convergence of the quadrature rules used to approximate the ϕ-functions is supergeometric.
The numerical approximation of the roots of a polynomial is nowadays a straightforward, robust, fast, and accurate procedure. Theorem 3.2 thus inspires a definite recipe to compute an upper bound for the quadrature error and for the minimum number of quadrature nodes required to achieve a given error tolerance. We present the related routines in Algorithms 3 and 4.
Algorithm 3. Description: Algorithm for estimating the quadrature error.
Input: An integer $p$ corresponding to the maximum value for which computing $\varphi_p(A)b$ is required, an estimate $\alpha \approx \|A\|_\infty$, $\beta = \|b\|_\infty$, and a chosen number of quadrature nodes $n + 1$.
1) Set $E = 0$ and loop over $q = 1, \dots, p$.
2) Solve (17) and select $\bar\rho > 1$ to be the root that minimizes either (14) or (15), depending on the quadrature rule used. Store the corresponding error bound in $\bar{E}$ and set $E = \max(E, \bar{E})$.
Output: An upper bound $E$ on the quadrature error for computing $\varphi_q(A)b$, valid for all $q = 1, \dots, p$.
Algorithm 4. Description: Algorithm for estimating the number of quadrature nodes.
Input: An integer $p$, an estimate $\alpha \approx \|A\|_\infty$, $\beta = \|b\|_\infty$, and an error tolerance $\varepsilon$.
1) Starting from a small $n$, call Algorithm 3 and increase $n$ until the returned bound $E$ falls below $\varepsilon$.
Output: An upper bound $n + 1$ on the minimum number of nodes required to achieve a quadrature error below $\varepsilon$.

Remark 3.3. In both Algorithms 3 and 4 it is best to work with the logarithm of the error bounds in (14) and (15) to avoid possible issues with the numerical range of floating-point numbers.
Scaling and modified squaring method
The bound in Corollary 3.3 is less sharp than that in Theorem 3.2, and thus less useful in practice. Nevertheless, it is more informative as it clearly shows that the rate of convergence is supergeometric. Furthermore, its proof suggests that the number of quadrature nodes should scale linearly with the size of $\|A\|_\infty$, a phenomenon that we indeed observe heuristically when using Algorithm 4 to compute a suitable $n$ for a wide range of matrix sizes (results not shown for brevity). As is common for the matrix exponential [13,27], we therefore use a scaling approach to reduce the size of $\|A\|_\infty$.
First, we compute $\varphi_p(2^{-l}A)b$ for a suitable integer $l$, and then scale the result back using the modified squaring algorithm from [27], which is based on a recurrence of the form
$$\varphi_j(2M)\, b = 2^{-j}\left( e^{M} \varphi_j(M)\, b + \sum_{k=1}^{j} \frac{\varphi_k(M)\, b}{(j-k)!} \right), \qquad j = 1, \dots, p, \qquad (29)$$
applied $l$ times starting from $M = 2^{-l}A$, where the action of the matrix exponential is computed according to (8). Our method is well-suited for evaluating (29), since a single quadrature delivers all the actions $\varphi_j(2^{-l}A)b$ for $j = 1, \dots, p$ simultaneously. As an example, if we choose the scaling $l$ to be $l = \lceil \log_2(e\|A\|_\infty/2) \rceil$, Theorem 3.2 then yields bounds for the scaled problem of the form (30), where $c_p = (2^{p+1} p!)^{-1}$ and we assumed $n$ is even in the Clenshaw-Curtis rule. For both quadrature rules and $p \leq 20$, a quick computation yields that 21 quadrature nodes are sufficient to reduce the error below $10^{-20}\|b\|_\infty$.
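A minimal Python sketch of the squaring step follows (our illustration; the exact formulation in [27] may differ in indexing, though the recurrence above is consistent with the scalar identity $2\varphi_1(2z) = (e^z + 1)\varphi_1(z)$):

```python
import math
import numpy as np

def unscale(Y, E, l):
    """Given Y[:, j-1] = phi_j(M) b and E = e^M (dense), return phi_j(2^l M) b."""
    p = Y.shape[1]
    for _ in range(l):
        Z = np.empty_like(Y)
        for j in range(1, p + 1):
            s = E @ Y[:, j - 1]                      # e^M phi_j(M) b
            for k in range(1, j + 1):
                s = s + Y[:, k - 1] / math.factorial(j - k)
            Z[:, j - 1] = s / 2 ** j                 # recurrence (29)
        Y, E = Z, E @ E      # the phi's and the exponential now refer to 2M
    return Y
```

In the actual algorithm the dense $e^M$ is of course never formed; its action is applied through the 1D exponentials as in (8).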
In practice, the example scaling choice above may be excessive and lead to a high squaring cost as well as to a loss of significant digits. In fact, while (29) was reported in [27] to be resilient to rounding error accumulation, when $A$ is non-normal excessive squaring may still lead to rounding error accumulation, just as for the matrix exponential [13,10].
Motivated by these considerations, we thus design an algorithm that helps balance scaling and computational expense by calculating the optimal scaling factor that minimizes the total cost. The resulting routine is presented in Algorithm 5, where we rely on Theorem 3.2 and Algorithms 3 and 4 to numerically estimate the optimal values of $l$ and $n$. Algorithm 5 is based on modelling the total cost of our algorithm as follows. Let $n + 1$ be the final number of quadrature nodes used and let $d$ be the spatial dimension. Then our method requires the computation of $dn$ 1D matrix exponentials, and $n + lp$ matrix-vector products as in (8). Since the optimal scaling factor depends on the relative cost of these two operations, we take the total cost to be given by $C(n, l, p) = c_1 dn + c_2(n + lp)$ for some suitable positive constants $c_1$ and $c_2$ that are architecture-dependent and must be estimated. In the numerical experiments of Section 4 we take $c_1 = 0$ and $c_2 = 1$ for simplicity.
Algorithm 5. Description: Algorithm for estimating the optimal scaling and number of quadrature nodes.
Input: An integer $p$, an estimate $\alpha \approx \|A\|_\infty$, $\beta = \|b\|_\infty$, and a quadrature error tolerance $\varepsilon$.
1) Starting from a sufficiently large scaling $l$, use Algorithm 4 to estimate the number of nodes $n + 1$ required for $2^{-l}A$, evaluate the cost $C(n, l, p)$, and store the best cost found so far in $\bar{C}$.
2) Decrease $l$ by one and repeat step 1, stopping as soon as $C(n, l, p) > \bar{C}$.
Output: The optimal scaling $\bar{l}$, the corresponding number of quadrature nodes $\bar{n} + 1$, and the total cost $\bar{C}$ required to achieve a quadrature error below the tolerance $\varepsilon$.
The reason why we can stop the search in Algorithm 5 once $C(n, l, p) > \bar{C}$ is that decreasing the scaling factor causes $n$ to increase monotonically. Therefore $C(n, l, p)$ is convex in $l$, and it will only keep increasing once $l$ decreases past the minimizer.
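The search structure can be sketched as follows (the helper `nodes_for(l)` is hypothetical and stands in for Algorithm 4; only the stopping rule is illustrated):

```python
def optimal_scaling(nodes_for, l_start, p, d, c1=0.0, c2=1.0):
    """Search over decreasing l; stop once the cost exceeds the best found."""
    cost = lambda n, l: c1 * d * n + c2 * (n + l * p)
    best = None                      # (cost, l, n)
    for l in range(l_start, -1, -1):
        n = nodes_for(l)             # nodes needed for 2^{-l} A (Algorithm 4)
        c = cost(n, l)
        if best is not None and c > best[0]:
            break                    # cost is convex in l: it only grows from here
        if best is None or c < best[0]:
            best = (c, l, n)
    return best
```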
We remark that when the $A_{x,y,z}$ matrices are sparse, computing the infinity norm of $A$ can typically be done efficiently. When $A$ is instead dense, it is possible to estimate its infinity norm via the upper bound
$$\|A\|_\infty \leq \|A_x\|_\infty + \|A_y\|_\infty + \|A_z\|_\infty. \qquad (31)$$

Algorithm 6. $Y$ = phiquadmv($p$, $A_{x,y,z}$, $b$, $\alpha$, $\varepsilon$, $l$, type).
Input: An integer $p$, the matrices $A_{x,y,z}$, and a vector $b$.
Optional input: An estimate $\alpha$ of the infinity norm of $A$ (i.e. $\alpha \approx \|A\|_\infty$; default: use (31)). A tolerance $\varepsilon$ (default: $10^{-14}$). A scaling $l$ (estimated via Algorithm 5 if not provided). An integer variable type with values 1 or 2, depending on whether Algorithm 1 or 2 is to be employed (default: type = 1).
• Depending on the value of type, apply either Algorithm 1 (with Gaussian quadrature using $n + 1$ nodes) or Algorithm 2 (with Clenshaw-Curtis adaptive quadrature using $\varepsilon$ as tolerance) to $2^{-l}A$ and obtain $y_j = \varphi_j(2^{-l}A)b$ for $j = 1, \dots, p$. Store the matrix exponentials $\exp(2^{-l}A_{x,y,z})$ used in Algorithm 1 or 2.
• Undo the scaling by applying the modified squaring recurrence (29) $l$ times, reusing the stored 1D matrix exponentials.
Output: The matrix $Y$ such that its $j$-th column is given by the product $y_j = \varphi_j(A)b$ for $j = 1, \dots, p$.
With Algorithm 6 we have two options: either employ a direct approach using Gaussian quadrature for a fixed number of points (as in Algorithm 1) determined from Theorem 3.2 and Algorithm 5, or employ an adaptive strategy with Clenshaw-Curtis (or another nested rule such as Gauss-Kronrod) as in Algorithm 2. The former approach employs Gaussian quadrature which converges faster, but it comes with no error estimation and relies on the upper bound from (20) which may be an over-estimate. On the other hand, the adaptive strategy uses Clenshaw-Curtis quadrature, but it comes with adaptivity which might improve performance. In the next section we test both methods in practice to determine which one is the most efficient.
Numerical results
We now compare the performance of our method, in terms of computational time and approximation errors, with the state-of-the-art MATLAB routine expmv() from Higham et al. [17] on different problems. For the 3D tensor operations required by (8) we use the routines from [22]. We set the tolerance for the quadrature error to the default value (i.e. $\varepsilon = 10^{-14}$) and we employ $C(n, l, p) = n + lp$ in Algorithm 5 (i.e. $c_1 = 0$ and $c_2 = 1$). In this section, we denote the Gauss and Clenshaw-Curtis variants of our routine by phiquadmv gauss() and phiquadmv cc(), respectively.
In all examples we employ an FE semidiscretization in space with piecewise linear functions and a 2-point Lobatto quadrature to obtain diagonal mass matrices. All the experiments were performed in Matlab version r2021b, on a single computational thread of an Intel i5-8279U chip with 16GB of RAM, via the option -singleCompThread.
Our main code is available at https://github.com/jmunoz022/phiquadmv and the routines for reproducing the results presented in this article are available at https://github.com/jmunoz022/phiquadmv_paper.
Problem 1 - Heat equation in 3D
We consider the 3D heat equation in $\Omega = (0,1)^3$ for $0 \leq t \leq T = 1$, with homogeneous Dirichlet boundary conditions. The matrix $A$ in this case comes from the semidiscretization of the Laplacian operator and is symmetric positive-definite. We consider a uniform spatial discretization using the same number of elements in each spatial direction, so that the matrices $A_{x,y,z}$ have the same dimension.
Here we set the timestep $\tau = 1/8$ and we compute the action of $\varphi_p(-\tau A)$ against the vector $b$ obtained by evaluating the function $u_0(x, y, z) = \sin(\pi x)\sin(\pi y)\sin(\pi z)$ at the nodal points. We monitor the following relative error measure for every value of $p$:
$$E_p = \frac{\|v_p - v_p^{\mathrm{kron}}\|_\infty}{\|v_p\|_\infty}, \qquad (32)$$
where $v_p$ are the actions $\varphi_p(-\tau A)b$ computed with expmv(), and $v_p^{\mathrm{kron}}$ the actions computed with either phiquadmv gauss() or phiquadmv cc(). Figure 1 shows the relative errors (32) for $p = 1, \dots, 20$ and different sizes of the matrix $A$. We select $2^r$ elements in each space dimension with $r = 4, \dots, 7$. In Table 1 we compare the computational times in seconds required to compute all 20 actions with phiquadmv gauss(), phiquadmv cc() and the routine expmv(). Table 2 shows the number of quadrature points, the scaling factor and the total cost of employing both routines.
We conclude that both phiquadmv routines perform similarly, are accurate (with relative errors below $10^{-12}$), and are orders of magnitude faster than the routine expmv(). In particular, for a matrix of size close to 2 million, the expmv() routine required 12.5 hours to compute all actions, while phiquadmv gauss() and phiquadmv cc() took only 25 and 31 seconds (1750 and 1424 times faster), respectively.
Problem 2 - Advection-diffusion problem with a Shishkin mesh
We now consider the 2D Eriksson-Johnson problem over $\Omega = (-1, 0) \times (-0.5, 0.5)$ for $0 \leq t \leq T = 1$, as presented in [23]. Here, the matrix $A$ comes from the semidiscretization of the advection-diffusion operator with both Neumann and Dirichlet boundary conditions. We set the vector $b$ to the nodal values of the initial condition $u(x, y, 0) = 10x(y^2 - 0.25) + \frac{e^{r_1 x} - e^{r_2 x}}{e^{-r_1} - e^{-r_2}} \cos(\pi y)$. As in [23], we select a Shishkin mesh (i.e. a graded, piecewise-uniform mesh in the $x$ direction designed to capture the boundary layer, cf. [31]) with $2^r$ elements in each space dimension. As a consequence of the mesh structure and of the presence of an advection field, the matrices $A$ and $A_{x,y}$ are non-symmetric. Furthermore, $A_x$ and $A_y$ also have different dimensions, since we remove the boundary degrees of freedom corresponding to the Dirichlet boundary conditions. We again set $\tau = 1/8$ and compute $\varphi_p(-\tau A)b$ for $p = 1, \dots, 20$ and $r = 5, \dots, 9$ with both phiquadmv gauss() and phiquadmv cc().
We display in Figure 2 the relative errors, in Table 3 the computational times in seconds, and in Table 4 the number of quadrature nodes, scaling and total cost of each routine. We conclude that even for this non-symmetric problem, both phiquadmv routines are accurate and faster than expmv(). On the largest matrix, expmv() takes 2.18 hours to evaluate the actions, while our routines respectively take 7.53 and 11.64 seconds and are thus 1045 and 676 times faster. Remark 4.1. We note that in Figures 1 and 2 the approximation error is small, yet above the prescribed tolerance of $\varepsilon = 10^{-14}$. Even assuming that the expmv() routine is exact, this behavior is likely a consequence of rounding errors, which our analysis from Section 3 does not account for. In particular, independently of the scaling factor used, we cannot expect to reduce the error below the condition number of the problem times the unit roundoff of the floating-point format used. While the condition number of computing $\varphi$-functions of matrices has not, to the best of our knowledge, been investigated, we know for instance that for the matrix exponential (cf. Lemma 10.15 in [32]) it is at least as big as $\|A\|_\infty$. Looking at the size of $\|A\|_\infty$ in Tables 2 and 4, we can then expect to lose a few digits in our computations.
Problem 3 - Hochbruck-Ostermann equation
We consider the semilinear Hochbruck-Ostermann equation from [5] over $\Omega = (0,1)^2$ and $0 \leq t \leq T = 1$, subject to homogeneous Dirichlet boundary conditions. Here, we select the source $f$ and the initial condition $u_0$ using the method of manufactured solutions, in such a way that the exact solution is $u(x, y, t) = x(1-x)y(1-y)e^t$.
We compare the performance of our algorithm with three exponential Runge-Kutta methods from [5], defined by the Butcher tableaus in Table 5 (in which we denote $\varphi_{i,j} := \varphi_i(-c_j \tau A)$). We select $c_2 = 1/2$ in the two-stage Runge-Kutta method and $c_2 = 1/3$ in the three-stage one. In Figure 3 we show the errors of the approximations obtained by both routines phiquadmv gauss() and phiquadmv cc() for the three Runge-Kutta methods at the final time $T = 1$ (both routines deliver the same convergence results). Here, we work with a fixed mesh with $2^{10}$ elements in each space dimension and we monitor the error behaviour in the infinity norm. We observe the expected order of convergence in time as we refine the time step, up until the error in space becomes dominant, showing that our method is accurate and does not affect the convergence of the exponential integrators.

Table 5: Butcher tableaus corresponding to exponential Runge-Kutta methods up to order 3. Here $\varphi_{i,j} := \varphi_i(-c_j \tau A)$.

We now compare the efficiency of the methods phiquadmv gauss(), phiquadmv cc(), and expmv() when used in conjunction with exponential integrators to solve the Hochbruck-Ostermann equation. In Tables 6 and 7 we record the total CPU time spent by these routines for different time step sizes and for exponential Runge-Kutta methods of order up to 3. We compare two discretizations in space, fixing $2^7$ and $2^8$ elements per spatial direction, respectively.
We conclude that in all cases phiquadmv gauss() and phiquadmv cc() accelerate the computation of the exponential time integrators compared to expmv(). Nevertheless, we observe that the growth of the computational time for expmv() is slower as we refine the time step size for a fixed discretization in space. Therefore, the largest gain we obtain with the phiquadmv() routines is when the time step size is large compared to the discretization in space. Also, we observe that in this case phiquadmv gauss() is faster than phiquadmv cc() by a factor between two and three, which is consistent with the results from Section 3.
Remark 4.2. We remark that even though the results presented in this section have been obtained in serial, our methods are well-suited for parallelism, since the computations at different quadrature nodes, as well as the squaring of $\varphi_p(A)b$ for different $p$, can be performed independently. We leave a parallel implementation of our routines to future work.
Conclusions
We proposed a method that efficiently approximates the action of ϕ-functions of matrices with Kronecker sum structure. The algorithm is based on approximating the integral definition of the ϕ-functions via either adaptive or fixed-point quadrature combined with a scaling and modified squaring approach. The quadrature rule exploits the Kronecker structure of the matrix and only involves actions of 1D matrix exponentials which can be applied efficiently. Evaluation at different quadrature nodes can furthermore be performed in parallel. Additionally, we provided an a priori estimate for the quadrature error which shows that our method converges supergeometrically fast with respect to the number of quadrature nodes. Guided by this result, we also designed a strategy for computing the optimal scaling and number of quadrature points that minimizes the total cost while observing a prescribed error tolerance. Numerical experimentation with 2D/3D time-dependent problems with tensor product structure shows that the new method is accurate, efficient and robust, and is well-suited to be combined with exponential integrators.
A comparison with the expmv() state-of-the-art routine from Al-Mohy and Higham revealed that for matrices with Kronecker sum structure our method can accelerate the computation of the actions of ϕ-matrix-functions by orders of magnitude.
Possible extensions of this work include: (1) the extension of our method to linear combinations of the actions of different ϕ-functions against different vectors; (2) a parallel and/or GPU implementation of the algorithm; (3) the application of our technique to spatial semidiscretizations with IGA, for which the 1D matrices are dense.
Macular Telangiectasia Type 2: A Classification System Using MultiModal Imaging MacTel Project Report Number 10
Purpose: To develop a severity classification for macular telangiectasia type 2 (MacTel) disease using multimodal imaging.

Design: An algorithm was used on data from a prospective natural history study of MacTel for classification development.

Subjects: A total of 1733 participants enrolled in an international natural history study of MacTel.

Methods: Classification and Regression Trees (CART), a predictive nonparametric algorithm used in machine learning, analyzed the features of the multimodal imaging important for the development of a classification, including reading center gradings of the following digital images: stereoscopic color and red-free fundus photographs, fluorescein angiographic images, fundus autofluorescence images, and spectral-domain (SD) OCT images. Regression models using the least squares method created a decision tree that sorted features of the ocular images into different categories of disease severity.

Main Outcome Measures: The primary target of interest for the algorithm development by CART was the change in best-corrected visual acuity (BCVA) at baseline for the right and left eyes. These analyses using the algorithm were repeated for the BCVA obtained at the last study visit of the natural history study for the right and left eyes.

Results: The CART analyses demonstrated 3 important features from the multimodal imaging for the classification: OCT hyper-reflectivity, pigment, and ellipsoid zone loss. By combining these 3 features (as absent, present, noncentral involvement, and central involvement of the macula), a 7-step scale was created, ranging from excellent to poor visual acuity. At grade 0, the 3 features are not present. At the most severe grade, pigment and exudative neovascularization are present. To further validate the classification, analyses using Generalized Estimating Equation regression models were performed for the annual relative risk of progression over a period of 5 years, both for vision loss and for progression along the scale.

Conclusions: This analysis, using the data from current imaging modalities in participants followed in the MacTel natural history study, informed a classification for MacTel disease severity featuring variables from SD-OCT. This classification is designed to provide better communication with other clinicians, researchers, and patients.

Financial Disclosure(s): Proprietary or commercial disclosure may be found after the references.
The MacTel Project was established to elucidate the pathogenesis, to develop potential outcome measures for clinical trials, and to identify and test treatments for MacTel. To achieve these goals, an international multicentered natural history study using multimodal imaging was conducted to better characterize the disease, evaluate risk factors, and determine the rates of progression of MacTel disease.
Another goal of this research group was to develop an up-to-date classification of the disease, incorporating image analysis results from novel imaging modalities. These newer imaging modalities are particularly relevant, as the previous classification system was developed by Dr J. Donald Gass 7 (Table 1) prior to the use of spectral-domain (SD) OCT and fundus autofluorescence (FAF), which both show characteristic macular changes and are now widely available in clinical practice. The use of these imaging modalities has contributed significantly to the current understanding of MacTel, which had previously been considered a vascular disease. The motivation for developing a new classification system was to capture the structural changes found on the OCT of MacTel, admittedly the most informative of the more recent imaging modalities, while not detracting from the use of conventional imaging modalities or from the clinical utility of blue light reflectance (BLR) imaging. 8,9 A revised classification using all imaging modalities is likely to facilitate a better description of the grades and progression of MacTel, as well as communication between clinical and basic science researchers. These analyses may provide more insight on clinically relevant grades for future clinical trials. A classification system could also provide progression data to better inform and guide both treating clinicians and patients on the severity level of disease. The purpose of this report is to present the analyses using the data from clinical studies performed in the MacTel Project to construct a revised classification system of MacTel.
MacTel Project
Different cohorts with MacTel provided data for the development and validation of this classification. They consist of the natural history observation study (NHOS, 2005–2015) 10 and the natural history observation registry study (NHOR, 2010 to present). There was overlap of participants in these 2 studies, especially for those participants who had ≥ 5 years of follow-up in the NHOS and were transferred over to the NHOR. Each participating clinical site obtained approval from its institutional review board or independent ethics committee for the protocol, and each participant provided written informed consent. The research was conducted in accordance with the Declaration of Helsinki and, where applicable, the study complied with the Health Insurance Portability and Accountability Act.
NHOS (2005–2015)
The natural history of MacTel was studied in an international multicenter prospective observational study designed to evaluate the structural and functional changes associated with MacTel over ≥ 5 years of follow-up. Twenty-two clinical sites in seven countries were involved. Each participant, who had to be ≥ 18 years of age, enrolled into the Natural History Study after a diagnosis of MacTel was confirmed on clinical examination at the study sites, based upon stereoscopic color fundus (CF) photographs, OCT (initially including time-domain OCT, although for most of the follow-up SD-OCT was the common instrument), fluorescein angiography, and FAF images, which were graded by the Fundus Reading Center at Moorfields Eye Hospital, London, United Kingdom. At baseline and annual study visits, multimodal imaging was performed and best-corrected visual acuity (BCVA) was measured by trained examiners using a standardized protocol and logarithm of the minimum angle of resolution ETDRS VA charts (scores range from 0 to 100).
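Since the grades in this report are tied to ETDRS letter scores, the following sketch shows the commonly used relationship between letter score and logMAR (85 letters corresponds to logMAR 0.0, with each 5-letter line worth 0.1 logMAR); this is a standard convention given here for orientation only, not part of the study protocol:

```python
def etdrs_letters_to_logmar(letters: float) -> float:
    """Convert an ETDRS letter score (0-100) to logMAR using the
    common convention logMAR = (85 - letters) / 50."""
    return (85.0 - letters) / 50.0

print(etdrs_letters_to_logmar(85))   # 0.0  (~20/20)
print(etdrs_letters_to_logmar(70))   # 0.3  (~20/40)
```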
The standardized imaging involved a 30° stereoscopic field centered on the fovea that included color, red-free, and fluorescein angiographic images of the retina captured on digital fundus cameras. Blue light FAF images were obtained using Heidelberg Retina Angiograph II or Heidelberg Spectralis SLO(+OCT) scanning laser ophthalmoscope systems (Heidelberg Engineering Inc). Spectral-domain OCT volume scans were obtained with the Heidelberg Spectralis OCT systems (Heidelberg Engineering Inc) or with the Cirrus SD-OCT 4000 systems (Carl Zeiss Meditec).
NHOR
From 2010 to present, the MacTel Project conducted a registry of potential participants for clinical trials. Participants with a clinical diagnosis of MacTel were recruited for 1 study visit examination which included a comprehensive eye exam and the required ophthalmic imaging as previously described for the natural history study. This cohort is followed with annual telephone interviews.
Fundus Reading Center at Moorfields Eye Hospital
The Moorfields Eye Hospital Reading Center at Moorfields Eye Hospital National Health Service Foundation Trust, London, United Kingdom, evaluated all the study images using established standardized protocols. All anonymized images were submitted to the Moorfields Reading Centre using a safe transfer protocol. The images were uploaded to their respective image viewing software. Those subjects deemed to have MacTel following a clinician adjudication process (A.C.B., T.P., C.A.E., F.B.S., T.F.C.H.) were released to the grading queue. Image grading was completed by trained and certified personnel (T.P., F.B.S., I.L.) using a prespecified protocol, and data were entered into the MacTel Study's database. Each image was evaluated independently, without knowledge of the fellow eye or prior study visit gradings. Clinician adjudication took place when the graders required further input, and the clinician decision was taken as final. This process resulted in the agreed grading results for all imaging modalities being uploaded to the MacTel study database to integrate with the rest of the subject-related data. At no time did the graders have access to any other medical information but the images. Agreement on grading characteristics was moderate to substantial for all fields.
The reading center personnel graded for MacTel features on the CF photographs, fluorescein angiograms, OCTs, and FAF images. Color fundus photographs were assessed for transparency in the perifoveal retina, pigment epithelial changes, and subretinal neovascularization. The fluorescein angiographic images were graded for abnormal vessels temporal to the fovea and subsequently in the perifoveal capillaries, as well as for dilation of outer retinal vessels and deep hyperfluorescence, usually at the level of the retinal pigment epithelium or outer retina. OCT images were graded for central retinal thinning and inner and outer retinal cavities, which do not correspond to changes on fluorescein when the fluorescein angiographic images are superimposed on an "en face" view of the OCT. The presence of an ellipsoid zone (EZ) break on OCT and the location of the break in relation to the center of the fovea were also graded; linear measurement was made in the horizontal B-scan closest to the foveal center. Other measures were made later using an "en face" methodology to determine the area of the EZ break for a phase 1 clinical trial and other studies. This area of EZ break correlated with VA changes and visual field loss, as demonstrated both in the MacTel Project and at other academic centers. 11,12 OCT hyper-reflectivity was also graded; this was defined as hyper-reflective lesions as bright as the retinal pigment epithelium layer of the OCT, which may be linear or appear as mounds that emanate from the retinal pigment epithelium and extend past the external limiting membrane of the retina. [13][14][15][16][17] Fundus autofluorescence images were graded for the typical pattern of hyperfluorescence, or the lack of masking, at the central macula, as the macular pigment has been noted to be absent in MacTel. 18,19 In addition, the right-angled vein 20 and the black hyperpigmentation 21-23 along the retinal vessels seen on CF photographs are also seen particularly well on the FAF image.
Statistical Analyses
The Classification and Regression Trees (CART) 24 is a predictive algorithm used in machine learning. The CART algorithm is nonparametric and can be used for any type of data. The CART methodology uses recursive partitioning to split the data into several groups based on values of predictor variables that create the most homogeneous groups when splitting the data. To appropriately conduct CART, the dataset was split into 2 sets, a training set and a validation set. The training data set used to build this decision tree model was based upon the right eyes of the MacTel participants in the natural history study, while the validation data set, used to determine the appropriate tree size needed to achieve the optimal final model, was based on the left eyes of MacTel participants. Recursive partitioning was conducted using dichotomized splits (presence or absence). Regression models that used a least squares method created a decision tree to classify eyes into different categories. The description of the resulting decision tree developed using CART for these analyses is found in the Supplementary Materials. We conducted further validation analyses on the final classification to assess disease progression. For those participants with available follow-up data, the relative risks of progression over time to ≥ 5-letter loss, ≥ 10-letter loss, and 1- or 2-step progression from baseline were determined using the Generalized Estimating Equation. Analyses were also conducted for progression over time to the more severe end of the scale.
The variables considered for analyses included the various features graded by the centralized reading center using the multimodal imaging methods. The primary target of interest for the algorithm development by CART was the change in BCVA at baseline for the right eye and for the left eye. These analyses using the algorithm were repeated for the BCVA obtained at the last study visit of the natural history study for the right eye and for the left eye.
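To make the CART step concrete, the sketch below (scikit-learn, with toy data and hypothetical column names; this is not the study's actual code or dataset) shows how dichotomized imaging gradings can be regressed against BCVA with least-squares splits, and how the fitted tree ranks features:

```python
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

# Hypothetical dataset: one row per eye, dichotomized imaging gradings
# (0 = absent, 1 = present) and the target best-corrected visual acuity.
df = pd.DataFrame({
    "ez_break_central":      [0, 1, 1, 0, 1, 0],
    "oct_hyperreflectivity": [0, 0, 1, 0, 1, 1],
    "pigment":               [0, 0, 1, 1, 1, 0],
    "bcva_letters":          [85, 74, 55, 70, 48, 66],
})

features = ["ez_break_central", "oct_hyperreflectivity", "pigment"]
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=2)
tree.fit(df[features], df["bcva_letters"])  # least-squares splits, as in CART

# Features that most reduce the residual variance of BCVA rank highest,
# mirroring how EZ loss, pigment, and hyper-reflectivity were identified.
for name, imp in zip(features, tree.feature_importances_):
    print(f"{name}: {imp:.2f}")
```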
Reading Center Variables Evaluated in the CART Analyses
Variables included in these analyses were both diagnostic of disease and may change over time, such as fluorescein leakage. In CF photographs, the presence and distribution of a loss of retinal transparency and perivascular pigmentary changes were assessed within the central 5 subfields of an International Classification grid 25 centered on the fovea (Fig 1). Although loss of retinal transparency is better detected with the more recently available confocal BLR imaging (Fig 1A, B), for historical consistency, images of this modality were not included in the current grading.
Fundus fluorescein angiographic images were analyzed by International Classification grid subfield for the presence and distribution of characteristic dilated, blunted, and right-angle vessels (Fig 2), and for the distribution and type (focal, diffuse, mixed) of hyperfluorescence at the level of the deep capillary plexus or the retinal pigment epithelium. The presence of direct and indirect signs of neovascularization (visible neovascularization lesion, hemorrhage, serous retinal detachment, or scar/fibrosis) was evaluated using both CF and fundus fluorescein angiographic images. Fundus autofluorescence images were graded for the presence and location of vessel-adjacent focal hypo-autofluorescence (an absent/present sign of perivascular pigment migration, Fig 3E, F), for the presence of blunted and right-angle vessels (Fig 3A–D), and for the presence of increased FAF signal in the foveal area (a sign of luteal pigment loss/redistribution, Fig 3A). OCT images were evaluated for a discontinuity (break) in the EZ (absent, present noncentral, and present central), a sign of photoreceptor degeneration (Fig 4A–C), and for the presence of hyper-reflectivity in the outer retina (which may be attributed to perivascular retinal pigment migration with or without neovascularization, Fig 4D).
Results
A total of 1733 participants from the NHOS and NHOR had images available for the classification development. A total of 1292 right eye images were used for the test dataset and 1302 left eye images were used for the validation data set. Additional validation was conducted on the last available image for 1733 right and left eyes, respectively, of NHOS and NHOR participants. Of the 1733 participants (687 males and 1046 females; mean age 60.8 ± 9.7 years), a total of 755 participants (290 males and 465 females; mean age 60.7 ± 9.2 years) had ≥ 1 year of follow-up for the additional progression validation analyses.
The images were graded for severity using the Gass-Blodi classification system (Table S2). Nearly 50% of eyes in the natural history study were in grade 3 of the Gass classification (presence of right-angle veins) and 20% were in grade 4 (pigment present). The correlation of the 2 classification systems was also assessed in both the right eyes (Table S3) and the left eyes (Table S4).
After the completion of the initial classification development using CART (Fig S1), the following variables were found to be significant for progression to VA loss: EZ loss, pigmentary changes, and OCT hyper-reflectivity (Figure 5). These factors that had an impact on VA decline were placed in order with other factors, based upon the progression from good VA to poor VA at initial baseline for the right eye (Fig S1 and Table S1, Supplementary Materials on Results). As validation of this classification, the ordering of factors was then evaluated using the left eyes of the participants, again ordering the factors based upon the progression from good VA to poor VA at baseline. Additional analyses were conducted using the same process of ordering factors according to diminishing VA for the last visit of the right eye and again for the left eye. All these results are shown in Table S1 in the Supplementary Materials on Results. All 4 analyses placed these factors in the order that produced the final classification.
The Classification of MacTel
Based on these analyses we propose a 7-grade classification of MacTel (Table 2). This classification is used to assess eyes that already have their diagnosis of MacTel confirmed by ocular imaging, particularly with OCT, FAF, and BLR, and including fluorescein angiography and CF photographs. There are 7 grades, with the initial grade (Grade 0) demonstrating only the key features that are diagnostic of MacTel and none of the 3 factors found to be significant for progression to VA loss (EZ loss, pigmentary changes, and OCT hyper-reflectivity). In the presence of a nonfoveolar EZ break (Grade 1), it is not surprising that the VA is unaffected, as the break begins in a noncentral location. However, when the EZ break affects the center of the fovea, the VA scores drop by almost 10 letters in Grade 2 and by 15 letters on average in Grade 3. 26 The next big step in VA decline is heralded by the presence of OCT hyper-reflectivity (Grade 4), followed by the presence of central pigmentary changes (Grade 5). The presence of pigment and exudative neovascularization results in the most severe grade of this classification (Grade 6).
Follow-Up Analyses Using the MacTel Classification System
In the analyses of the 755 participants who had sufficient follow-up data, using Generalized Estimating Equation regression models, we evaluated the annual relative risk of progression over a period of 5 years for the following outcomes: progression to ≥ 5-letter loss and ≥ 10-letter loss from baseline (Figs S2 and S3), 1- or 2-step progression (Figs S4 and S5), and progression to steps 4 and 5 (Figs S6 and S7). As expected, the relative risk of progression increased with time. Most VA decline, reaching almost 15 letters, had already occurred by progression to grade 3; thus, the rate of progression to further VA loss is much lower in those in grade 4, as seen in Figure S3.
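As a hedged illustration of this type of analysis, the following sketch (statsmodels, with invented toy data and variable names; the study's actual model specification may differ) fits a log-link Poisson GEE with an exchangeable working correlation, whose exponentiated coefficients are interpreted as relative risks:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical long-format data: one row per visit, with a binary
# indicator of >= 5-letter loss from baseline. (Illustrative only.)
df = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 3, 3, 3],
    "year":        [1, 2, 3, 1, 2, 1, 2, 3],
    "lost5":       [0, 0, 1, 0, 1, 0, 0, 0],
})

# Log-link Poisson GEE with exchangeable within-participant correlation;
# repeated visits within a participant are handled by the working correlation.
model = sm.GEE.from_formula(
    "lost5 ~ year", groups="participant", data=df,
    family=sm.families.Poisson(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
res = model.fit()
print(np.exp(res.params))   # exponentiated coefficients = relative risks
```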
Discussion
In this analysis we used data from MacTel Study participants, incorporating state-of-the-art imaging modalities, including SD-OCT. The CART analyses demonstrated 3 important features key to the construction of this classification: OCT hyper-reflectivity, pigment, and EZ discontinuity (break). The primary target of interest for the algorithm development by CART was the decline in BCVA. This classification was built using the data from the right eye and targeting the loss of VA over time. The validation studies using the left eyes of the NHOS participants and both eyes from the NHOS last study visit provided further evidence in support of this classification.
It is not surprising that this classification shares features with the Gass classification (Table 1), including features of the most severe end of the scale, the presence of pigmentation and exudative neovascularization, as these events caused the most severe loss of vision. The direct comparison of the 2 systems (Tables S3 and S4) also demonstrated the exact agreement between those with pigmentation and exudative neovascularization. In addition, the first grade of the current classification does not include any OCT abnormalities in the EZ layer. These early grade patients only manifest changes typical for a diagnosis of MacTel (fluorescein angiographic leakage, the presence of retinal opacification, blunted or right-angle veins, and typical BLR and FAF imaging). The correlation between these earlier grades of the MacTel Classification with those of the Gass-Blodi classification was poor (Tables S3 and S4) as the 2 systems used very different variables. The variables in the Gass-Blodi system are descriptive factors that deal mostly with the vascular component of the disease while those in the MacTel Classification used structural OCT changes that translated to VA loss (Table S1).
The subsequent grades of this classification demonstrate structural changes that result in VA loss, which is not surprising, as we found correlations of the structural changes with VA changes. 26 The first is the EZ discontinuity (break), which has little impact on VA when present in the noncentral area. However, once the EZ break involves the center of the fovea, VA declines. In the past, we reported that a deep learning algorithm can be used to measure the volumes of the cavitations, 27 hyporeflective spaces that do not correlate with fluorescein leakage. These cavitation volumes are associated with VA loss, and it appears that these cavitations precede the EZ loss, especially as it encroaches on the foveal center. 28 The next decrease in VA occurs with the presence of noncentral pigment, regardless of the EZ loss status; this signifies progression with worsening of VA. Further VA loss is evident with the presence of OCT hyper-reflectivity. It should be noted that grades 3 and 4 are closely related, as the OCT hyper-reflectivity often occurs prior to the formation of pigment. Because this classification is built upon VA decrease, an eye is not expected to pass through each of the severity grades; the authors have examined patients who progressed directly from grade 1 or 2 to exudative neovascularization, the severe end of the classification.
The OCT hyper-reflectivity is the next grade of severity accompanied by VA loss; this was evaluated by the reading center early in the history of the NHOR study, when the composition of these lesions was not well understood. Fluorescein angiography of such eyes typically demonstrated the usual perifoveal leakage from the dilated telangiectatic vessels. With the more recent use of OCT angiography (OCTA), these hyper-reflective lesions seen on OCT are now considered to be intraretinal, nonexudative neovascularization. A longitudinal series of 12 patients with OCTA showed these represented retinal-choroidal anastomosis following subsidence of the retina, which is defined as the loss of the outer nuclear layer with the retina sinking towards the retinal pigment epithelium. 29 The authors considered these lesions to be retinal-choroidal anastomosis (known as type 3 macular neovascularization) colocalized to the area of the OCT hyper-reflectivity. However, other investigators have considered these images only as outer intraretinal neovascularization and not retinal-choroidal anastomoses. 13 These lesions were not associated with retinal thickening, fluorescein angiography leakage (other than the typical leakage found with the perifoveal telangiectasis), lipid, hemorrhage, or any other fluid exudation, and there was no evidence of subretinal/sub-retinal pigment epithelial neovascularization. The OCT hyper-reflectivity has also been well described recently by other MacTel Project investigators. 13,15 It is reassuring that our findings agree with other groups, reinforcing the relevance of these imaging modalities to disease grade and risk of progression to vision loss. 30 In order to provide a "simple scale" for the clinician for practical use in the clinic, this classification can be reduced to its main features: grades 0 to 2 mean no EZ loss, noncentral EZ loss, and central EZ loss, respectively; grade 3 heralds the presence of noncentral pigment; grade 4 is the presence of OCT hyper-reflectivity; grade 5 is the presence of central pigment; and grade 6 is the most severe end, with the development of exudative neovascularization (Table 3). Each of these grades is represented by key lesions, with a corresponding decline in VA towards the most severe grade of this simple MacTel classification.
There are limitations to this study. An important limitation is the use of the left eye to validate an algorithm that was developed using the right eye of the same participants. This may be a disadvantage because fellow eyes are highly correlated. Ideally, a training set, a validation set, and a test set would be constructed from both eyes of this dataset, divided by participant. It is also important to be able to validate this algorithm on an entirely separate dataset involving different participants. Such an external validation data set, from a longitudinal assessment of a cohort of participants with the diagnosis of this somewhat rare disease, is not available. However, a limited dataset for the future may be the sham-treated participants currently enrolled in our Phase II 31 and Phase III clinical trials of ciliary neurotrophic factor (ClinicalTrials.gov identifiers NCT03319849 and NCT03316300), which we expect to complete follow-up in the last quarter of 2022. These study participants, however, all have EZ loss as an eligibility criterion for the clinical trials and would only validate the more severe end of the classification, with relatively short follow-up in ≥ 200 participants.
Another limitation is the potential for selection bias toward enrollment at early stages of the disease. However, with the NHOR, the spread of disease included fairly equal proportions of early to later stages of the disease. Short of doing a population-based study, which is not feasible in this case, there may be concerns about selection bias, but it is likely low in this instance.
A final limitation of the study is that an important technology, OCTA, was not included in the natural history study of this condition because it was not yet available at the beginning of the study. For this neurovascular-glial condition, the use of OCTA, as noted previously, has proven to be essential to diagnose both the intraretinal neovascularization (type 3 macular neovascularization) and typical macular neovascularization (type 2). The technology is relatively new, and the data are accumulating for MacTel and the development of neovascularization.
This classification can be further influenced by future imaging that has not yet been developed or is not yet commonly used. Experimentally, fluorescence lifetime imaging ophthalmoscopy (FLIO) has been used to detect early grades of MacTel. 18 FLIO is a noninvasive imaging modality that provides additional information regarding the autofluorescence of the retina. Detecting FAF lifetimes allows the detection of subtle changes, especially in the early grades of the disease. In fact, children who are considered unaffected may be detected prior to the classic onset of the disease with the use of FLIO. 32 Other researchers demonstrated that FLIO also provides information regarding the macular pigment and possibly photoreceptor loss, 33 and concluded that FLIO correlates with disease severity. Similarly, Sauer et al. conducted a longitudinal study of FLIO in persons with MacTel and found that autofluorescence lifetimes slowly prolonged over time with very specific patterns. 34 Although such data may inform future developments of the MacTel classification, this will depend on how FLIO is incorporated into clinical care. Currently, it is highly experimental and is limited by the small number of instruments available for testing. This may or may not change in the future.
This current classification is considered a "living document" in that newer technology and additional data may provide further findings for yet another revision of the MacTel Classification. We anticipate that with the development of greater resolution of the ocular imaging, potential contributions of the FLIO images and other imaging modalities such as OCTA and adaptive optics, we will gain further insight into MacTel and changes in this classification are inevitable. This newly developed classification may provide a framework to communicate effectively to other researchers, physicians taking care of the patients, and to the patients directly.
BJ-1108, a 6-Amino-2,4,5-trimethylpyridin-3-ol analogue, regulates differentiation of Th1 and Th17 cells to ameliorate experimental autoimmune encephalomyelitis
Background: CD4+ T cells play an important role in the initiation of an immune response by providing help to other cells. Among the helper T subsets, interferon-γ (IFN-γ)-secreting T helper 1 (Th1) and IL-17-secreting T helper 17 (Th17) cells are indispensable for clearance of intracellular as well as extracellular pathogens. However, Th1 and Th17 cells are also associated with pathogenesis and contribute to the progression of multiple inflammatory conditions and autoimmune diseases.

Results: In the current study, we found that BJ-1108, a 6-aminopyridin-3-ol analogue, significantly inhibited Th1 and Th17 differentiation in vitro in a concentration-dependent manner, with no effect on proliferation or apoptosis of activated T cells. Moreover, BJ-1108 inhibited differentiation of Th1 and Th17 cells in ovalbumin (OVA)-specific OT-II mice. A complete Freund's adjuvant (CFA)/OVA-induced inflammatory model revealed that BJ-1108 can reduce generation of proinflammatory Th1 and Th17 cells. Furthermore, in vivo studies showed that BJ-1108 delayed onset of disease and suppressed experimental autoimmune encephalomyelitis (EAE) disease progression by inhibiting differentiation of Th1 and Th17 cells.

Conclusions: BJ-1108 treatment ameliorates inflammation and EAE by inhibiting Th1 and Th17 cell differentiation. Our findings suggest that BJ-1108 is a promising novel therapeutic agent for the treatment of inflammation and autoimmune disease.
Background
CD4 + T cells play an important role in adaptive immunity by orchestrating other immune cells [1]. Upon antigenic exposure, naïve CD4 + T cells undergo differentiation and expansion of distinct effector subsets, which play a major role in mediating immune responses through the secretion of specific cytokines [2,3]. The differentiation of naïve CD4 + T cells begins with antigenic stimulation, which results in interactions between the T cell receptor (TCR), with CD4 as a co-receptor, and the antigen-MHC II complex presented by antigen presenting cells (APCs) [3]. TCR signaling induces downstream signaling that leads to proliferation and differentiation of naïve CD4 T cells into effector cells [4]. Lineage-specific differentiation depends upon TCR signaling, the cytokine environment, and co-stimulatory molecules that direct differentiation of naïve CD4 + T cells into IFN-γ-secreting T-helper 1 (Th1), IL-4-secreting T-helper 2 (Th2), IL-17-secreting T-helper 17 (Th17), and IL-10-secreting regulatory T (Treg) cells [1,5]. Th1 cells participate in the elimination of intracellular pathogens and regulation of organ-specific autoimmune diseases [1]. Similarly, Th17 cells enhance immune responses against extracellular pathogens, particularly bacteria and fungi, as well as tissue inflammation [2,6]. Nevertheless, unrestrained activation of Th1 and Th17 cells is associated with autoimmune and inflammatory disorders such as multiple sclerosis, rheumatoid arthritis, and psoriasis [7,8].
Autoimmune diseases are abnormal immune responses in which activation and expansion of autoreactive T cells and other inflammatory cells play important roles in tissue inflammation and injury [9,10]. Multiple sclerosis (MS) is one of the most common autoimmune diseases of the central nervous system. In MS, inflammatory cells infiltrate and demyelinate the axonal tracts in the brain and spinal cord, disrupting neuronal signaling along axons [11]. Finally, neurodegeneration of the brain and spinal cord, mediated by CD4+ T cells directed against myelin, can result in paralysis [12]. Experimental autoimmune encephalomyelitis (EAE) is an animal model of MS that mimics the clinical and pathophysiological features of human MS [13,14]. Although the exact cause of MS is unclear, it is thought to be mediated by a combination of genetic and environmental factors [10,15-17]. Although Th1 cells are considered to be the primary effector T cells in EAE pathology, EAE can occur in IFN-γ knockout mice [18]. Previous studies have shown that Th17 cells that secrete IL-17 and IL-23 are also important to the development of EAE [19-21]. Altogether, these studies provide evidence that both proinflammatory Th1 and Th17 cells are associated with the pathogenesis of autoimmune diseases such as multiple sclerosis and rheumatoid arthritis [22,23]. MS affects more than 2 million people worldwide. A number of chemotherapeutic and immunotherapeutic agents have been approved as MS disease-modifying therapies [24-27]. However, these therapies are associated with serious side effects and frequent response failures, and safe medications to manage autoimmune and inflammatory diseases are still needed.
Previous studies have shown that BJ-1108, an analogue with a phenyl group attached to the 6-amino moiety, strongly inhibits angiogenesis and tumor growth [28,29]. Inflammation is one of the major pathophysiological characteristics of autoimmune disease and is associated with oxidative stress and reduction in cellular antioxidant capacity [30]. 6-Amino-2,4,5-trimethylpyridin-3-ol analogues have been reported to show antioxidant and antiangiogenic activity [31,32]. Furthermore, Timilshina et al. reported that a 2,4,5-trimethylpyridin derivative inhibits Th1 and Th17 differentiation and subsequently ameliorates EAE progression [33]. These findings prompted us to examine whether BJ-1108 could be used to treat an inflammatory autoimmune disease like MS, using an EAE model.
We investigated the therapeutic potential of a novel derivative (6-amino-2,4,5-trimethylpyridin-3-ol; BJ-1108) on inflammation and autoimmune disease. We found that BJ-1108 significantly suppressed Th cell function by inhibiting Th1 and Th17 differentiation and marginally decreased proliferation of activated T cells without apoptosis. Further, we found that BJ-1108 treatment reduced Th1 and Th17 generation in a complete Freund's adjuvant (CFA)/OVA-immunized inflammatory model. Furthermore, BJ-1108 treatment delayed the onset of EAE and alleviated ongoing EAE by reducing infiltration of mononuclear cells into the central nervous system (CNS), as well as decreased Th1 and Th17 cells in the spleen, draining lymph nodes (dLNs), and CNS of EAE-affected mice.
BJ-1108 inhibits differentiation of Th1 and Th17 cells
Based on reports that 6-aminopyridin-3-ol analogues inhibit oxidative stress and inflammation [29], we examined whether BJ-1108 modulates autoimmunity and inflammatory immune responses. CD4+ T cells are essential to an immune response, and Th1 and Th17 cells have been extensively studied to understand inflammation and autoimmune diseases [34,35]. Inhibiting differentiation of naïve CD4+ T cells into proinflammatory Th1 and Th17 cells helps to mitigate autoimmune disease [36]. To test the inhibitory effect of BJ-1108 on Th1 and Th17 differentiation, purified splenic CD4+ T cells were cultured under Th1- and Th17-polarizing conditions with cytokine stimulation and TCR ligation by anti-CD3 and anti-CD28 for 3 days. Under Th1-polarizing conditions, approximately 54% of CD4+ T cells were IFN-γ+ in the untreated control group, and BJ-1108 treatment significantly inhibited Th1 differentiation by as much as 37%. In addition, up to a 50% reduction in Th17 differentiation was observed in the BJ-1108-treated group. Thus, BJ-1108 (10 μM) treatment significantly reduced IFN-γ+ and IL-17+ cell differentiation on day 3 after in vitro stimulation with TCR ligation and cytokines (Fig. 1a). To further investigate the regulatory effects of BJ-1108 on CD4+ T cell differentiation, CD4+ T cells stimulated by TCR ligation and cytokines were treated with varying concentrations of BJ-1108. BJ-1108 treatment decreased the percentages of IFN-γ+ Th1 and IL-17+ Th17 cells in a concentration-dependent manner (Fig. 1b). These data suggest that BJ-1108 significantly decreased differentiation of Th1 and Th17 cells.
BJ-1108 inhibits antigen-specific CD4 + T cell differentiation
To examine whether BJ-1108 can inhibit antigen-specific Th1 and Th17 differentiation of CD4+ T cells, we used ovalbumin (OVA)-specific OT-II TCR transgenic mice. OT-II CD4+ T cells express transgenic alpha-chain and beta-chain TCRs that are specific for chicken OVA 323-339 in the context of MHC class II [37]. Naïve CD4+ T cells were isolated from spleens and lymph nodes (LNs) of OT-II TCR transgenic mice and cultured with BJ-1108 in the presence of OVA peptide and APCs for 3 days. Consistent with Fig. 1a, BJ-1108 inhibited generation of IFN-γ+ CD4+ T cells by 30% and IL-17+ CD4+ T cells by 50% (Fig. 2a). To examine the effects of BJ-1108 on OVA-specific Th1 and Th17 differentiation, OT-II CD4+ T cells were treated with various concentrations of BJ-1108 in the presence of OVA peptide and APCs. The percentages of IFN-γ-producing Th1 and IL-17-producing Th17 cells decreased in a concentration-dependent manner with BJ-1108 (Fig. 2b). Generation of IL-17-secreting Th17 cells was suppressed more than that of IFN-γ-secreting Th1 cells by treatment with BJ-1108. Thus, BJ-1108 can directly inhibit differentiation of antigen-specific T cells.
BJ-1108 has no significant effect on T cell proliferation
To test whether the regulatory effect of BJ-1108 on Th cell differentiation is mediated by cytotoxicity or reduced proliferation, we checked the effect of our compound on apoptosis and proliferation of CD4+ T cells. CD4+ T cells were isolated and cultured under anti-CD3 and anti-CD28 stimulation in the presence or absence of BJ-1108 for 3 days. On day 3 after activation, apoptosis was assessed with Annexin-V and propidium iodide (PI) staining. The percentages of viable cells were comparable between untreated cells and those treated with various concentrations of BJ-1108 (Fig. 3a). Next, carboxyfluorescein succinimidyl ester (CFSE)-labeled CD4+ T cells were cultured with various concentrations of BJ-1108 under Th1- and Th17-polarizing conditions for 3 days. Based on CFSE dilution, treatment with different concentrations of BJ-1108 produced a slight decrease in Th1 and Th17 cell proliferation (Fig. 3b). However, this decrease in proliferation was negligible compared to the BJ-1108-mediated inhibition of differentiation. Furthermore, in vitro proliferation measured by a thymidine analog bromodeoxyuridine (BrdU) labeling assay demonstrated that BJ-1108 treatment slightly decreased proliferation under Th1-polarizing conditions (Fig. 3c). Similarly, Ki-67, a nuclear protein indicating cell proliferation, was analyzed after 3 days of culture under Th1-polarizing conditions. Proliferation of IL-12-treated cells increased relative to that of cells not treated with cytokine, whereas BJ-1108 treatment reduced the rate of Ki-67 expression to less than 10% of that in cells not treated with the compound (Fig. 3d). Altogether, these data suggest that although BJ-1108 slightly affects CD4+ T cell proliferation, the inhibition of Th cell differentiation is not a result of reduced proliferation or increased apoptosis.

Fig. 1 BJ-1108 inhibits CD4+ T cell differentiation. a Naïve CD4+ T cells isolated from spleens and draining lymph nodes were stimulated under Th1- and Th17-polarizing conditions in the presence or absence of 10 μM BJ-1108 for 72 h. Cells were then re-stimulated with phorbol 12-myristate 13-acetate, ionomycin, and GolgiStop for 4 h, followed by intracellular cytokine staining and flow cytometry. b Th1 and Th17 differentiation with multiple concentrations of BJ-1108. Representative data (mean ± SEM) of three independent experiments are shown. **p < 0.001 and ***p < 0.0001 versus vehicle.
BJ-1108 reduces the inflammatory response in CFA/OVA-immunized mice
Th1 and Th17 cells are critical for the progression and pathology of inflammation and autoimmune diseases [8]. Inhibition of Th1 and Th17 cell differentiation in vitro by BJ-1108 prompted us to examine whether this compound could inhibit inflammatory responses initiated by IFN-γ and IL-17A. Mice were administered OVA (2 mg/ml) in CFA by intraperitoneal injection. CFA/OVA administration induced inflammation through the generation of Th1 and Th17 cells. BJ-1108 (1 mg/kg) was injected every day for up to 4 days, and mice were sacrificed on day 5. We found that the sizes of spleens, lymph nodes (LNs), and draining lymph nodes (dLNs) in BJ-1108-treated CFA/OVA-immunized mice were smaller than those in mice immunized with CFA/OVA alone (Fig. 4a). Furthermore, Th cells from spleens and LNs of CFA/OVA-immunized mice that received either BJ-1108 or no treatment were analyzed. The results showed that CFA/OVA administration promoted IFN-γ and IL-17A generation compared to non-immunized mice, and BJ-1108 treatment inhibited generation of IFN-γ and IL-17A in LNs and spleens of CFA/OVA-immunized mice (Fig. 4b, c). Thus, BJ-1108 inhibits inflammation by reducing IFN-γ-producing Th1 and IL-17A-producing Th17 cells in vivo.
BJ-1108 attenuates EAE pathology by negatively regulating inflammatory T cells
The finding that BJ-1108 inhibited Th1 and Th17 differentiation in vitro and reduced inflammation by decreasing IFN-γ-producing Th1 and IL-17A-producing Th17 cells in vivo prompted us to investigate whether BJ-1108 treatment affects the development of inflammatory autoimmune disease. To address this question, we employed the EAE model, a well-established model of MS, because Th1 and Th17 cells are critical for the progression and pathology of MS [21]. To investigate the possible protective role of BJ-1108 in the development of EAE, we immunized female C57BL/6 mice with MOG 35-55 peptide emulsified with CFA and pertussis toxin as described in the "Methods" section. Vehicle or BJ-1108 (1 mg/kg) was administered intraperitoneally every other day beginning 1 day after immunization. The severity of the resulting paralysis was assigned a disease score. All mice in the vehicle-treated group developed severe EAE, with a mean peak clinical score of 3.5, whereas BJ-1108-treated mice showed delayed disease onset and significantly diminished EAE severity, with a mean peak clinical score of 2.6 (Fig. 5a). Total cell numbers in the spleen and CNS were also decreased in drug-treated EAE mice (Fig. 5b). Furthermore, CNS-infiltrating mononuclear cells were enriched by density gradient centrifugation and analyzed by flow cytometry. As depicted in Fig. 5c, significantly reduced infiltration of CD4+ T cells, CD8+ T cells, B220+ B cells, and CD11b+ macrophages/microglia was observed in the brains and spinal cords of BJ-1108-treated EAE mice. Because autoreactive CD4+ T cells, especially Th1 and Th17 cells, are critical to the induction of EAE, we analyzed Th cells in EAE mice. As expected, BJ-1108 treatment significantly reduced IFN-γ-secreting Th1 and IL-17-secreting Th17 cells in spleens, dLNs, and CNS of EAE-induced mice (Fig. 5d). These data suggest that BJ-1108 is effective in ameliorating ongoing EAE by restricting Th1 and Th17 cell differentiation.
Discussion
Our study demonstrated that BJ-1108 suppresses Th1 and Th17 cell differentiation with no effect on proliferation or apoptosis of activated T cells in vitro. BJ-1108 restricted CFA/OVA-induced inflammation by reducing IFN-γ-producing Th1 and IL-17A-producing Th17 cells in vivo. Furthermore, BJ-1108 treatment alleviated inflammatory infiltration and reduced leakage of mononuclear cells across the blood-brain barrier. Mice that received BJ-1108 treatment displayed lower EAE scores and better clinical recovery from EAE. Moreover, BJ-1108 administration reduced the frequencies of Th1 and Th17 cells in the spleens, LNs, and spinal cords of EAE mice.
CD4+ Th cells play an important role in activating and directing other immune cells [1]. IFN-γ secretion-induced Th1 cell differentiation depends on signaling through the IFN-γ receptor, the IL-12 receptor, and their downstream signaling transcription factors signal transducer and activator of transcription 1 (STAT1) and STAT4. Likewise, IL-17-producing Th17 cell differentiation is initiated after IL-6 stimulation and subsequent activation of STAT3 [36]. These proinflammatory Th1 and Th17 cells are key mediators of inflammation and the development of autoimmune disease. Th1- and Th17-associated cytokines have a significant impact on inflammation in the brain and severity of disease [38,39]. The attenuation of inflammation in BJ-1108-treated mice was associated with a decrease in the differentiation of Th1 and Th17 cells and therefore a reduction in IFN-γ and IL-17 cytokine expression in spleens, lymph nodes, and CNS.

Fig. 4 Suppression of inflammation in vivo by BJ-1108 in complete Freund's adjuvant/ovalbumin (CFA/OVA)-immunized mice. Acute inflammation was induced in 8- to 12-week-old C57BL/6 mice by intraperitoneal immunization with OVA in CFA, and then 1× PBS or 1 mg/kg BJ-1108 was administered intraperitoneally each day. a Images of spleens, lymph nodes, and draining lymph nodes (dLNs) from CFA/OVA-immunized mice treated or untreated with BJ-1108 after 4 days. CD4+ T cells from b dLNs and c spleens were re-stimulated with phorbol 12-myristate 13-acetate and ionomycin for 4 h, followed by measurement of IFN-γ- and IL-17A-producing CD4+ T cells by flow cytometry. Numbers in the dot plots represent percentages of Th1 and Th17 cells. The mean ± SEM of five independent experiments is shown. #p < 0.01 versus vehicle. *p < 0.01 and **p < 0.001 versus CFA/OVA-treated group.

Fig. 5 … in complete Freund's adjuvant and pertussis toxin. Mice were administered 1 mg/kg BJ-1108 or vehicle intraperitoneally each day. a Clinical scores were assigned daily. b Total cell counts in spleen and CNS of drug-treated and untreated EAE mice. c Twenty-four days later, total mononuclear cells were isolated from brains and spinal cords of mice and analyzed by flow cytometry; total percentages of infiltrated CD4+ T cells, CD8+ T cells, CD11b+ cells, and B220+ cells in the CNS are shown. d Twenty-four days later, lymphocytes from spleen, LNs, and spinal cords were re-stimulated with phorbol 12-myristate 13-acetate and ionomycin for 4 h, followed by measurement of IFN-γ- and IL-17A-producing CD4+ T cells using flow cytometry. Numbers in the dot plots represent percentages of Th1 and Th17 cells. The mean ± SEM of five independent experiments is shown. *p < 0.01 and **p < 0.001 versus vehicle.
CD4+ T cell responses to antigen are instructed by innate immune factors. The environment in which APCs initially encounter antigens is associated with specific adjuvants. Presentation of processed antigen with costimulatory molecules and a precise combination of cytokines drives differentiation of naïve CD4+ T cells toward a specific effector lineage, including Th1, Th2, and Th17 cells [40]. Therefore, we used an OVA-based mouse inflammatory disease model in which OVA combined with CFA, a potent Th1/Th17-skewing adjuvant, induced a powerful OVA-specific Th1 and Th17 inflammatory immune response. BJ-1108 treatment inhibited inflammation in CFA/OVA-induced mice by negatively regulating differentiation of IFN-γ+ Th1 and IL-17+ Th17 cells.
EAE, an animal model of human MS, is mediated by autoreactive T cells that secrete pro-inflammatory cytokines in the CNS, leading to inflammation and demyelination [11,12,41]. Th1 cells have been considered the primary effector T cells in the pathology of EAE and MS [8,42,43]. However, accumulating evidence reveals that both Th1 and Th17 cells are crucial for autoimmune disease [8,22,44,45]. Proinflammatory cytokines such as IFN-γ and IL-17, secreted by Th1 and Th17 cells, cause inflammation and are primary causes of aggravation of autoimmune disorders [44]. Therefore, investigating drugs that target Th1 and Th17 cells to manage autoimmune diseases has clinical significance. We provide in vitro and in vivo evidence that BJ-1108 represses the development of Th1 and Th17 cells and ameliorates EAE. BJ-1108 treatment significantly reduced the generation of Th1 and Th17 cells in spleens, dLNs, and CNS of EAE mice at the peak of disease. However, APCs such as microglia, astrocytes, macrophages, and B cells act as the first line of defense against infection or inflammation and can participate in self-destructive mechanisms by secreting inflammatory factors and/or presenting myelin epitopes to autoreactive T cells [46]. How BJ-1108 affects myeloid cell function is unknown; however, the significant reduction in infiltrating CD11b+ macrophages/microglia and B220+ B cells in the brain and spinal cord suggests that BJ-1108 may regulate myeloid cells by regulating T cell function.
The antioxidant effects of the 6-amino-2,4,5-trimethylpyridin-3-ol scaffold have been reported in several studies [31,32]. Recently, BJ-1108 was shown to significantly inhibit angiogenesis and reactive oxygen species (ROS) production in cancer cells [29]. T cells, especially Th1 and Th17 cells, function in tumor immunity by secreting cytokines and expressing transcription factors [47]. NOX-2-derived ROS are associated with the differentiation of T cells but are not required for T cell activation or proliferation [48-50]. The current study revealed anti-inflammatory activities of BJ-1108 in an inflammatory disease model, mediated by a reduction in Th1 and Th17 cell differentiation. Bonini et al. reported that administration of ROS scavengers reduced EAE lethality in negative regulator of ROS (NRROS)-knockout mice [51]. NRROS interacts with NOX-2 and maintains its stability [51]. BJ-1108 significantly inhibits NOX-2-derived ROS, which may lead to reduced Th1 and Th17 differentiation [29]. Altogether, these studies suggest that the effects of BJ-1108 on T cell differentiation correlate with inhibition of NOX-2-derived ROS, which subsequently ameliorates inflammation and autoimmune disease.
In conclusion, the current study revealed the therapeutic potential of BJ-1108 for inflammation and autoimmune diseases. BJ-1108 treatment reduced the severity of inflammation and EAE by inhibiting differentiation of naïve CD4+ T cells into Th1 and Th17 cells. However, because previous studies have indicated that reduced Th1 and Th17 differentiation can result from inhibition of NOX-2-derived ROS, further research is needed to define the precise target of BJ-1108. Collectively, these data imply that BJ-1108 could be a promising therapeutic compound for the management of Th1- and Th17-mediated inflammation and autoimmune disease.
Mice
C57BL/6 mice were maintained in pathogen-free conditions at the Animal Center of Yeungnam University. The gradual-fill method of CO2 inhalation was used to euthanize mice with minimal pain. No animals died during the study. Animal experiments were approved by the Institutional Animal Care and Use Committee (IACUC) of Yeungnam University (Approval No: 2015-029).
Apoptosis assay
Naïve CD4 + T cells were purified using microbeads (Miltenyi Biotec) and cultured under Th1-polarizing conditions with anti-CD3 (5 μg/ml) stimulation. After 3 days, apoptosis was assessed by staining for Annexin V-APC and PI according to the manufacturer's protocol (BD Biosciences), followed by flow cytometry.
Immunization
To induce an inflammatory response, 6- to 8-week-old mice were immunized intraperitoneally with 2 mg/ml OVA and an equal volume of CFA, in the presence or absence of 1 mg/kg BJ-1108 daily. After 5 days, spleens and dLNs were collected and analyzed by flow cytometry. To induce EAE, 6- to 8-week-old mice were immunized subcutaneously with 6 mg/ml MOG 35-55 peptide (MEVGWYRSPFSRVVHLYRNGK) emulsified in CFA containing 5 mg/ml Mycobacterium tuberculosis H37RA (Difco). Mice were injected intraperitoneally with 250 ng pertussis toxin (List Biological Laboratories) on the day of immunization and 48 h later. Mice were monitored daily, and disease was scored as follows: 0 = normal; 1 = limp tail; 2 = paraparesis (limp tail and incomplete paralysis of one or two hind limbs); 3 = paraplegia (limp tail and complete paralysis of two hind limbs); 4 = paraplegia with forelimb weakness or paralysis; 5 = moribund appearance or death. BJ-1108 (1 mg/kg) in phosphate-buffered saline (PBS) or PBS only (vehicle) was administered intraperitoneally on day 0 and every other day thereafter.
Statistical analysis
Data are expressed as the mean ± SEM. Student's t test or one-way ANOVA were used to assess the significance of differences between experimental groups using Prism software (GraphPad).
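For readers reproducing such comparisons outside Prism, a minimal sketch with SciPy (invented illustrative numbers, not study data):

```python
from scipy import stats

# Illustrative only: percentages of IFN-gamma+ cells in two groups.
vehicle = [54.1, 52.8, 55.6]
bj1108  = [37.2, 35.9, 38.4]

t, p = stats.ttest_ind(vehicle, bj1108)      # two-sample Student's t test
print(f"t = {t:.2f}, p = {p:.4g}")

# One-way ANOVA across several hypothetical concentrations of BJ-1108:
f, p = stats.f_oneway(vehicle, [45.0, 46.3, 44.1], bj1108)
print(f"F = {f:.2f}, p = {p:.4g}")
```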
Authors' contributions
YK performed the research, prepared the figures, and wrote part of the manuscript; MT prepared the figures and wrote part of the manuscript; TN and BJ designed and provided the compounds and wrote part of the manuscript; JC designed the study and wrote the manuscript. All authors read and approved the final manuscript.
TUNNEL WIDENING OF ACL RECONSTRUCTION AUGMENTED BY A PLATELET-RICH OSTEOCONDUCTIVE-OSTEOINDUCTIVE ALLOGRAFT COMPOUND: A RANDOMIZED BLIND-ANALYSIS PILOT STUDY
Background: The anterior cruciate ligament (ACL) is a commonly injured ligament in the knee. Bone tunnel widening is a known phenomenon after soft-tissue ACL reconstruction, and its etiology and clinical relevance have not been fully elucidated. Osteoconductive compounds are biomaterials, such as demineralized bone matrix, that provide an appropriate scaffold for bone formation. Osteoinductive materials contain growth factors that stimulate bone lineage cells and bone growth. A possible application of osteoinductive/osteoconductive (OIC) material is in ACL surgery.

Questions/Purposes: We hypothesized that OIC placed in ACL bone tunnels: 1) reduces tunnel widening, 2) improves graft maturation, and 3) reduces tunnel ganglion cyst formation. To test this hypothesis, this study evaluated the osteogenic effects of demineralized bone matrix (DBM) and platelet-rich plasma (PRP) on tunnel widening, graft maturation, and ganglion cyst formation.

Study Design: Randomized controlled clinical trial pilot study.

Methods: A total of 26 patients electing ACL reconstruction surgery were randomized between the OIC and control groups. Measurements of tunnel expansion and graft-tunnel incorporation were conducted via quantitative image analysis of MRI scans performed at six months after surgery for both groups.

Results: No patients had adverse post-operative reactions or infections. The use of OIC significantly reduced tunnel widening (p < 0.05) and improved graft maturation (p < 0.05). Patients treated with OIC presented with a significantly lower prevalence of ganglion cyst compared to the control group (p < 0.05).

Conclusion: The use of OIC has measurable effects on the reduction of tunnel widening, improved graft maturation, and decreased size of ganglion cysts after ACL reconstruction.

Clinical Relevance: This study explored the utilization of biologics to minimize bone tunnel widening in ACL reconstruction surgery.
INTRODUCTION
Anterior cruciate ligament (ACL) reconstruction surgery is a standard treatment with more than 100,000 procedures performed annually in the United States [2,4,16,20,34]. The surgery aims to restore knee stability and improve the patient's quality of life. Historically, the success rate of ACL surgery in returning patients to their previous level of sports activity ranges between 75-90% [17]. However, the process of rehabilitation after ACL injury can last for several months or even years, and represents a significant psychological and economic burden for the patient [42].
Bone tunnel widening is a known phenomenon after ACL reconstruction and has a significant correlation with the utilization of all-soft-tissue grafts [5]. The largest percentage of tunnel widening occurs during the first six weeks after surgery, and widening can continue over two years after surgery [32]. The incidence of tunnel widening ranges from 25 to 100% and 29 to 100% in femoral and tibial tunnels, respectively [9,14,28,33]. Graft fixation implants, including cortical fixation devices and bioabsorbable interference screws, have been correlated with increased tunnel widening [8]. The etiology of widening is multifactorial, including mechanical factors (e.g., tunnel mal-positioning, graft motion ("windshield-wiper" effect), longitudinal elongation ("bungee-cord" effect)), bone necrosis from the drilling technique, early rehabilitation, cytokine-induced bone resorption, and synovial fluid ingress into the bone tunnels [5,8]. Although the correlation of tunnel widening and clinical outcomes remains unclear, significant widening of the tunnels may complicate revision surgery [23].
Osteoinductive/osteoconductive compounds (OIC) are biomaterials characterized by bioactive properties: they provide an appropriate scaffold for bone formation (osteoconductivity) and are able to bind and concentrate endogenous bone morphogenetic proteins (BMP) (osteoinductivity) in circulation, thus promoting osteogenesis [19]. The use of demineralized bone matrix (DBM) has gained popularity in ACL reconstruction. DBM is a type I collagen matrix of allograft bone that remains after extensive processing to remove blood, cells, and minerals. The final result is a small particulate which can be applied to form a three-dimensional scaffold (osteoconductive) containing osteogenic growth factors such as BMPs, members of the transforming growth factor-β (TGF-β) family. Moreover, autologous platelet-rich plasma (PRP) is a concentrated solution of a patient's own platelets (or thrombocytes). By centrifuging the patient's whole blood, the red blood cells can be separated from the serum containing the concentrated platelets. Activated platelets have demonstrated osteoinductive properties on mesenchymal stem cells [43]. A possible application of DBM and PRP is to facilitate the osseointegration of the ACL graft into the bone tunnels to minimize tunnel widening and improve the structural stability of the graft construct.
This in turn may allow the knee joint to adapt to bearing physiological mechanical loads sooner, with ultimately faster recovery of the patient.
As an initial step to validate our research hypothesis, this pilot study aimed to observe the osteogenic effects of DBM and PRP on tunnel widening, graft maturation, and tunnel ganglion cyst formation relative to control groups.
Patient allocation:
Twenty-seven patients (twenty-eight knees) were prospectively enrolled in a randomized controlled trial from 2016 to 2019. Patients were randomized between the OIC and control groups after electing to have surgery. There were a total of 13 patients in the OIC group and 14 in the control group.
There were 14 males and 12 females with an average age of 31 years. One patient declined to participate in the study after undergoing treatment, and four others were lost to follow-up MRI. The CONSORT (Consolidated Standards of Reporting Trials) flow diagram is shown in Figure 1. The demographic data of all participants are summarized in Table 1. The experimental protocol was approved by the Institutional Review Board at the University of Miami. All subjects provided written consent prior to participation in the study. All the surgical procedures were performed by JPH. Data analysis was conducted at the University of Miami and the Hommen Orthopedic Institute.
Surgical Procedures:
Patients randomized to the OIC group had blood drawn in the pre-op holding area, which was subsequently prepared utilizing the Arthrex ACP® system to yield platelet-rich plasma (PRP). The PRP was mixed with 5 ml of StimuBlast® (Arthrex, Inc., Naples, FL). The graft was constructed using an allograft peroneus longus tendon with an ACL Tightrope® button (Arthrex, Inc., Naples, FL) at each end. The femoral and tibial tunnels were reamed to match the graft size. We utilized a low-profile reamer through a medial portal to create the femoral tunnel and a FlipCutter® (Arthrex, Inc., Naples, FL) for the tibial tunnel. Prior to docking the graft into the femoral and tibial sockets in the OIC group, 1 to 2 ml of mixed PRP/DBM was injected via a syringe into each tunnel with the arthroscopic irrigation turned off to avoid extravasation. The graft was then secured into the sockets utilizing the tension-slide button mechanism. Additional PRP/DBM was injected into both tunnels at the graft-bone interface after securing the graft. Concomitant meniscus pathology was addressed by meniscectomy or repaired utilizing the surgeon's choice of an all-inside, inside-out, or outside-in technique. Chondral injuries were addressed with chondroplasty. A hinged knee brace was placed postoperatively for three weeks for isolated ACL reconstructions and six weeks for concomitant meniscus repairs. Immediate weight bearing as tolerated versus partial foot-flat weight bearing was allowed for isolated ACL reconstructions and meniscus repairs, respectively.
MRI scanning protocol:
The intra-articular graft, tunnels, and graft-tunnel interface were assessed using a 1.5T MR system (Optima MR430s; GE Healthcare) with sequences in the sagittal and coronal planes as well as axial cuts perpendicular to the femoral and tibial tunnels. A summary of the parameters used in the MRI scanning protocol is reported in Table 2.
Measurement of tunnel widening:
All measurements were conducted by a researcher blinded to treatment allocation prior to statistical analysis. Measurements of both tunnel expansion and graft maturation were conducted via quantitative image analysis of MRI scans performed six months after surgery for both groups. Measurements of the tibial and femoral tunnel widening diameters were conducted using an established approach [6]. The femoral and tibial tunnels were measured at four points: the aperture, midpoint, tunnel end, and the point of greatest tunnel diameter, taking each measurement perpendicular to the tunnel axis (see Figure 2A-F). The tunnel expansion was calculated in the femur and tibia by subtracting the initial surgical tunnel diameter from the largest measurement taken in each respective region. The number of patients with moderate (5 mm) and large (10 mm or more) tunnel expansion was also noted for each group.
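A minimal sketch of the expansion calculation described above; the diameters below are hypothetical, and the thresholds follow the moderate/large categories noted in the text:

```python
# MRI diameters (mm) at the aperture, midpoint, tunnel end, and
# point of greatest diameter; values are illustrative only.
measured_diameters_mm = [10.2, 11.4, 10.8, 11.9]
drilled_diameter_mm = 9.0  # initial surgical tunnel diameter

# Expansion = largest measured diameter minus the drilled diameter
expansion_mm = max(measured_diameters_mm) - drilled_diameter_mm

# Categorize following the study's moderate (5 mm) / large (10 mm+) cut-offs
if expansion_mm >= 10:
    category = "large"
elif expansion_mm >= 5:
    category = "moderate"
else:
    category = "below moderate"
print(f"Tunnel expansion: {expansion_mm:.1f} mm ({category})")
```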
Measurement of graft maturation:
Graft maturation was quantified using a previously reported approach to evaluate the signal/noise quotient (SNQ) by taking the average intensity across three areas of the ACL graft: the proximal, central, and distal intra-articular regions [11,22,37], see Figure 3. The measurement of SNQ was carried out as follows: for each region, a circular region of interest of 5 mm diameter was defined, and the signal intensity from the MRI scan was measured and averaged across the region (S_ROI). In a similar manner, the signal intensity (S_PCL) was also measured in the intact PCL for the purpose of normalizing the signal intensities measured in the ACL graft. To eliminate background noise, the signal intensity (S_BACK) was also measured at a background region of interest 2 cm anterior to the patellar tendon. The SNQ was then calculated using the following equation [39]: SNQ = (S_ROI - S_BACK) / S_PCL.
Tunnel ganglion cyst formation:
Additionally, patients with tunnel ganglion cysts were quantified for each group by the blinded radiologist. Post-surgical graft failures were also assessed. All data were reported as mean ± standard deviation.
RESULTS
No patients had adverse post-operative reactions or infections. One patient in the OIC group suffered graft failure 3 months postoperatively after falling on a boat. No other participants of the study presented with graft failure.
Tunnel Widening.
The use of OIC significantly reduced tunnel widening (p < 0.05); the tunnel measurements for both groups are summarized in Tables 3 and 4.
Graft maturation.
The use of OIC significantly improved graft maturation (p < 0.05).
Tunnel ganglion cyst formation.
Two of the ten OIC patients (20%) and eight of the eleven Control patients (72.72%) presented with ganglion cysts in the tibial tunnel. The difference between these proportions was statistically significant (P=0.048).
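The specific statistical test behind this comparison is not stated; as a minimal, hypothetical sketch, Fisher's exact test is one common choice for comparing two proportions of this size:

```python
from scipy import stats

# 2x2 contingency table: rows = OIC, control; columns = cyst, no cyst
table = [[2, 8],   # OIC: 2 of 10 patients with tibial-tunnel ganglion cysts
         [8, 3]]   # control: 8 of 11 patients with cysts

odds_ratio, p_value = stats.fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```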
DISCUSSION
Injury at the ACL may represent a significant psychological and economic burden to patients, given the lengthy recovery process [42]. There is significant research aimed at increasing the rate of graft-to-tunnel incorporation and graft maturation [3,38] as well as reducing the recovery time after surgery via innovative rehabilitation modalities [17]. Since OIC materials have been heralded as osteogenic [19], we hypothesized that the injection of these materials into the bone tunnel would enhance graft incorporation and graft maturation. Aimed at testing this hypothesis, we compared the use of DBM combined with PRP in soft-tissue graft reconstructions to a control group, and monitored the postoperative events of tunnel widening, graft maturation and tunnel ganglion cyst formation through a quantitative analysis of MRI images. We adopted StimuBlast®, a DBM, due to its osteoconductive and osteoinductive properties. The DBM is manufactured with a reverse-phase medium that allows it to be fluid-phased during handling and more viscous at body temperature. The DBM putty was augmented with the anabolic properties of ACP®, a leukocyte-poor PRP, to promote the healing response during the phases of inflammation, cellular proliferation, and subsequent tissue remodeling. Unlike leukocyte-rich PRP, this does not induce the potential catabolic cytokines such as interleukin-1β, tumor necrosis factor-α, and metalloproteinases involved in the inhibition of bone-tunnel osteointegration [36]. The intraoperative application of DBM/PRP is relatively simple for the surgeon and assistants. The mixture is easily prepared and can be readily injected from a syringe through a curved needle applicator into the tunnels. Once placed, the mixture is viscous enough to remain in the tunnels and becomes more viscous at body temperature.
Tunnel widening has several implications after ACL surgery, including the possibility of postoperative tibial stress fractures [7,24,35] and delays or failures of graft incorporation [31]. In the setting of revision surgery, significant tunnel widening may require a bone grafting procedure followed by subsequently staged ACL revision surgery [23,32]. The presence of ganglion cysts within the tunnels has been associated with possible incomplete graft-bone incorporation [10,15]. Tunnel widening and ganglion cyst formation may represent a localized osteolytic process due to a cell-mediated cytokine response [44]. Our results indicate that the use of DBM and PRP: 1) significantly reduced tunnel widening, 2) had a positive effect on graft maturation, and 3) reduced the presence of ganglion cyst formation within the tibial tunnel.
It should be noted that the present contribution is a pilot study, and its findings do not allow establishing causal relationships between the injected DBM/PRP and the postoperative outcomes.
Nevertheless, we postulate that the use of DBM/PRP may have enhanced the organization of the fibrocartilage insertion and mineralization, as well as reduced the cell-mediated osteolysis at the graft-bone interface. In animal studies, the use of DBM has been shown to create a 4-zone fibrocartilage tendon-to-bone healing [12]. In a rabbit model, Anderson et al. wrapped a BMP- and TGF-β-soaked collagen sponge around the graft inside the tunnel. Histologically, they observed more consistent bone-to-graft apposition and a fibrocartilaginous interface relative to controls at 2, 4 and 8 weeks. Biomechanically, the grafts also demonstrated significantly increased ultimate tensile strength at 2 and 8 weeks [1]. It has been proposed that synovial cytokines, such as TNF-α, interleukin-1β, IL-6, BMPs, and nitric oxide, may mediate ACL tunnel widening [44]. DBM/PRP may have served as a grout, limiting the ingress of synovial fluid at the interface between the graft and tunnel wall. The use of DBM/PRP may also have improved the location of graft fixation within the tunnel. In the control group, the graft relied solely on cortical suspensory fixation, which may have led to increased graft motion due to windshield-wiper and bungee-cord effects [30]. Instead, in the treatment group, the grafts were "potted" into a DBM/PRP putty, which may have resulted in more aperture fixation and less micromotion. If DBM/PRP can enhance osteointegration of the graft within the tunnels, this may lead to earlier graft "ligamentization", or maturation, due to increased revascularization of the implanted graft, as may have been demonstrated by the MRI signal intensity findings in our study [25,39].
Several studies have investigated the effects of injectable OICs in tendon-to-bone healing.
Animal studies conducted on rats [40], dogs [13], rabbits [18,41] and goats [27] have found accelerated graft-to-bone healing rates with calcium phosphate. In fact, Ma et al. demonstrated that calcium phosphate could deliver increased local BMPs in tendon-to-bone healing [21]. Accordingly, in a clinical study [26], tunnel expansion at 1 year in patients treated with calcium phosphate was reduced by approximately 7% when compared to the control group. The use of calcium phosphate remains controversial due to its potentially slow degradation rate, which may slow osteointegration [29].
As a pilot study, we used a convenience sample of 26 patients, all operated on by a single surgeon.
A multicenter study, including a larger sample size and more surgeons, will be needed to confirm the preliminary evidence reported in this contribution. Another limitation of this study is the short observation period of less than 1 year; a longer-term analysis may show larger differences in graft incorporation and maturation between the OIC and control groups.
In conclusion, an in vivo study was conducted to evaluate the effects of an injectable OIC compound on the graft incorporation and tunnel widening after ACL reconstructive surgery. The results of this investigation indicate that the injection of OIC has measurable effects on the reduction of tunnel widening and ganglion formation and enhanced graft maturation after ACL reconstruction. A comparison of our findings to those of previous studies suggests that the benefits of augmenting ACL reconstruction with the injection of OIC may be observed within one year after surgery. The use of osteogenic compounds may enhance ACL reconstruction graft incorporation and thereby promote earlier rehabilitation and return to sports.
EFFECT OF PAPER SURFACE PROPERTIES ON INK COLOR CHANGE, PRINT GLOSS AND LIGHT FASTNESS RESISTANCE
Printability is a combination of paper-related factors that contribute to achieving the desired print quality level and relates to the paper's ability to absorb ink. An important property of ink on paper is its setting behavior. The spread and placement of the ink on the paper surface is affected by the surface structure of the paper. The surface topography of the paper is decisive in the process of ink placement on the paper surface. In this study, the effects of surface roughness of the paper on wettability, print gloss, ink color change and light fastness change were investigated. For this purpose, prints on papers with different surface roughness were made in accordance with ISO 12647-2 with Cyan color ink in accordance with DIN ISO 2846-1. The CIE L*a*b* and gloss values of the test prints, which were allowed to dry in order to detect color and print gloss differences on the paper surfaces, were measured periodically until the ink film was completely dry. In addition, the effects of the paper surface on the light fastness of the ink were measured and recorded. The results were discussed in terms of print quality.
INTRODUCTION
Paper is a layered composite structure with varying porosity, obtained by the random arrangement of cellulose fibers. 1 This variable porous composite structure is extremely important for both liquids and gases. 2,3 The visual quality of the print result in printing systems largely depends on the physico-chemical interaction between paper and ink, especially the process of ink setting and drying on the printing substrate. 4,5 In the process of fluid ink settling and absorption onto the paper surface, the surface characteristics of the paper are extremely important. Therefore, the surface structure of the paper is the most effective parameter for achieving print quality. 6,7 The most important surface feature of paper is its surface smoothness or roughness. The distribution of the fibers on the screen during paper production, the short-long fiber structure, the amount of filler and the calendering degree determine the surface smoothness or roughness of the paper. Surface roughness is a measure of the degree of deviation of the paper surface from a flat surface and characterizes the indentation and topography of the paper surface. The topography describes the geometric structure of the paper surface: the more the profile rises and falls, the rougher the paper. This determines the visual effect, texture, and printability of the paper. 8 Printability is the main parameter for high-quality colour reproduction, increased ink gloss, uniform appearance and the prevention of print defects. 9 Depth and width differences on the paper surface can affect the quality parameters of the ink film, such as ink settling on the paper, print density, print gloss, and color CIE L*a*b*. 10 One of the most important reasons for controlling and measuring surface smoothness is print quality. In contact printing processes, the ink film is transferred to the paper surface by physical contact. When the pores on the paper surface are deep enough to prevent contact, the ink cannot be transferred to these low areas, and uneven ink transfer will result in poor print quality.
Ink film thickness is commonly assessed via its optical density. The ability of an ink to spread laterally across the sheet has a significant effect on print density. The change in print density is an important parameter that determines the color and gloss of the ink film. As surface smoothness increases, the ink requirement and print density for sufficient coverage decrease. When the ink film is adjusted to provide satisfactory print density on rough areas of the paper, the same ink thickness may then be too high for optimum print quality on the smooth areas of the paper. This can cause mottling and other printing problems. 11 The degree of ink film settling on the paper surface is also important in printing due to the contrast created between the ink and the paper. If the ink is on the paper surface, it absorbs light effectively. The more the ink settles into the paper, the less ink remains at the surface to absorb light. Thus, the ink loses its gloss and is perceived as different in color. The same ink film printed on papers with different surface properties appears as having different colors.
The resistance of the ink film to light is known to depend on the exposure of the ink to the light, as well as the thickness, transparency, pigmentation, high filler or white pigment content factors of the ink film. However, studies on the effect of the printing substrate on ink film light fastness are limited.
The main purpose of the present study is to determine the effects of paper surface roughness on the print quality parameters. For this purpose, first of all, water contact angles of papers with different surface roughness were measured and surface energies were calculated. Then, offset printing was performed on these controlled papers in accordance with the ISO 12647-2 standard in the laboratory.
By applying colorimetric measurements on printed samples, the effects of surface roughness of the paper on ink color change, print gloss and ink light fastness were examined and discussed.
MATERIALS AND METHODS
In the study, firstly, surface roughness and water contact angle parameters, which determine the surface properties of papers with different surface structures, were measured; surface energy values were calculated and recorded. These papers with different surface characterization were tested in the laboratory environment with Sun Chemical SunLit Express Offset Cyan ink, which meets DIN ISO 2846-1 standards. Standard printing room conditions were maintained during the printing process. The color change (ISO 12647-2:2013), print gloss (ISO 2813:2014) and resistance to light (British Standards (BS) 4321) parameters of the ink film, which determine the quality of printing, were measured and recorded.
Characterization of paper surface
The structure and surface energy of the paper greatly affect its interaction with liquids. 12 Since surface roughness affects the surface energy of the paper, it causes differences in capillary absorption. [13][14][15] Thus, the absorption rate of the liquid by the paper and its spread on the surface are affected in different directions. 12 The surface energy of paper is commonly determined by liquid contact angle measurement. 16 The contact angle measured at the base of the droplet describes the relationship between the surface tension of the liquid and the surface free energy of a solid surface. 17 In the study, 80 g/m² uncoated papers with different surface roughness were used as print substrate. The roughness of the papers was measured using the Bendtsen method, with a Lorentzen & Wettre Roughness Tester, applying constant compressed air (98 kPa), as specified in the SCAN-P 21 and TAPPI UM 535 standard test methods.
In order to determine the effect of the surface roughness of papers on their surface energies, water contact angle measurements were made using the sessile drop technique (Fig. 1). A PGX Pocket Goniometer (FIBRO System AB), with program version 3.4, was used for the measurements, in accordance with the TAPPI T458 standard test method. The measurements were carried out in a closed system and were recorded with a CCD camera.
The surface energies of the papers were calculated according to the ASTM D5946 standard test method, 18 depending on the water contact angle values. The relationship between a static contact angle and the surface energy forces was defined by Young from the interfacial tensions (Eq. (1)): γ_SV = γ_SL + γ_LV cos θ (1), where γ_SV, γ_SL and γ_LV are the solid-vapor, solid-liquid and liquid-vapor interfacial tensions, respectively, and θ is the contact angle. The surface roughness, contact angle and surface energy values of the papers with the same weight but different surface structures are given in Table 1. Surface energy is extremely important as it directly affects how well the ink wets the substrate surface. In general, a surface with low surface energy causes poor wetting. The study showed that as the test papers' surface roughness increased, the liquid contact angle increased and the surface energy decreased. This demonstrates that, similarly to polymer films, better wettability can be achieved in printing papers with higher surface energy and lower roughness.
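A minimal numerical sketch of Eq. (1), assuming water as the probe liquid (surface tension about 72.8 mN/m near room temperature); the solid-liquid interfacial tension and contact angle below are illustrative values, not measurements from Table 1:

```python
import math

gamma_lv = 72.8    # liquid-vapor surface tension of water, mN/m
gamma_sl = 40.0    # solid-liquid interfacial tension, mN/m (illustrative)
theta_deg = 80.0   # measured water contact angle, degrees (illustrative)

# Young's equation: gamma_sv = gamma_sl + gamma_lv * cos(theta)
gamma_sv = gamma_sl + gamma_lv * math.cos(math.radians(theta_deg))
print(f"Estimated solid surface energy: {gamma_sv:.1f} mN/m")
```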
Ink transfer onto paper substrates
The papers with the specified surface properties were conditioned for 24 hours at 23±1 °C and 50%±3% relative humidity, in accordance with the DIN EN 20187 standard, in the printing environment before the test prints were made. Test prints were made with the IGT-C1 printability tester at 350 N/m² printing pressure and 0.3 m/s printing speed, in accordance with the ISO 12647-2:2013 standard under optimum printing room conditions (35 mm printing width). In the test prints made under controlled conditions, a mineral oil based uniform cyan color sheetfed offset ink was used in accordance with DIN ISO 2846-1 standards. Standard printing conditions were maintained in the colorimetric measurements after printing. On average, 20 replicates were made per test and the results of these 20 test print samples were then averaged.
Colorimetric analysis
The CIELAB was set up as a three-dimensional color universe that creates a rectangular or Cartesian coordinate space with the L*, a* and b* axes (Fig. 3). The L* axis denotes brightness, while the a* and b* axes represent the red-green and yellow-blue contrasting colors, respectively. 20 The color difference is the distance between two colors in the color space. 21 The change in the printed ink color over time is expressed as the color difference (ΔE*ab). The difference between two colors can be calculated by placing the color coordinates in the three-dimensional CIE L*a*b* color space. 22,23 In this study, CIE L*a*b* (CIELAB) color measurements of the test samples were carried out with an X-Rite eXact™ spectrophotometer. Colorimetric measurements were repeated at certain time intervals until the ink film on the paper surface was completely dry (10 days).
The measurements were carried out under M1 measurement conditions (400-700 nm range, D50 illuminant, 2° observer, 0/45 geometry, backing and polarization filter on) in accordance with the ISO 12647-2:2013 standard. Color differences were calculated over time in accordance with the CIE 1976 ΔE*ab formula (ISO 13655). The print gloss measurements of the test samples were carried out at a 60º measurement angle with a BYK-Gardner glossmeter (Geretsried, Germany) at the moment of printing.
Figure 3: Schematic diagram of Color CIE L*a*b* axes
In order to determine the fading effect of the printing substrate on the printed ink film, samples printed with standard ink under standard printing conditions were exposed to 192 hours of fading in a Solarbox 1500 KFG-2400 Xenon Arc Light Fastness Tester. Color CIE L*a*b* measurements were made before and after fading with the X-Rite eXact Spectrophotometer, and the color differences (∆E*) in the samples were calculated.
RESULTS AND DISCUSSION
Paper roughness is a very important feature for print quality. 24 The effect of the surface roughness of papers on print quality was evaluated according to the color change, print gloss and light fastness criteria of the printed ink film.
Effect of paper surface on print color change
The differences of the color L*a*b* values measured at different time periods, relative to the color at the moment of printing, were calculated using the CIELAB-based CIE 1976 color-difference formula (Eq. (2)): ΔE*ab = [(ΔL*)² + (Δa*)² + (Δb*)²]^(1/2) (2), 25 where ΔL*, Δa*, Δb* are the differences in L*, a*, and b* values between the specimen color and the target color. The color difference (Delta E) results are given in Figure 4.
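A minimal sketch of the ΔE*ab computation in Eq. (2); the CIELAB coordinates below are illustrative, with the print at the moment of printing taken as the reference:

```python
import math

lab_reference = (54.0, -37.0, -50.0)  # illustrative cyan patch at t = 0
lab_sample    = (55.2, -35.8, -48.5)  # same patch after drying (illustrative)

# CIE 1976 color difference: Euclidean distance in L*a*b* space
delta_e = math.dist(lab_reference, lab_sample)
print(f"Delta E*ab = {delta_e:.2f}")
```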
Taking the CIE L*a*b* value at the moment of printing as the reference, it was observed that the ink film color differences (ΔE*ab), which increase over time, depend on the decreasing surface smoothness and surface energy of the paper. The ink color printed on papers with high surface energy (smooth surface) changed less than the ink color printed on papers with low surface energy (rough surface). This shows that the surface properties of the paper play a significant role in ink film color change.
Effect of paper surface on print gloss
Print gloss is one of the most important features of the printed ink film. The relationship between print gloss and the surface properties of the paper is extremely important in terms of print quality. Gloss is a measure of light reflection and is largely affected by paper smoothness. 26 If the ink remains on the paper surface, it absorbs light effectively. The more the ink settles into the paper, the less ink remains at the surface to absorb light. The rough and macroporous structure of the paper surface causes the printed ink to spread and settle into the surface of the paper. Thus, the ink loses its gloss and the color is perceived differently. In this study, the print gloss measurements of the ink films printed on paper surfaces with different roughness were performed with a BYK Gardner Glossmeter (Sheen Instrument, U.K.), at a 60º measurement angle. The test prints indicated that the ink film gloss decreased with increasing surface roughness of the paper (Fig. 5). The highest print gloss was obtained on papers with the highest surface smoothness and surface energy. Based on these results, it can be said that the surface roughness and surface energy of the paper have an important effect on print gloss.
Effect of paper surface on ink light fastness
Apart from their printing and finishing characteristics, inks often have to meet further requirements, mainly concerning light fastness and resistance to chemicals. Light fastness refers to the resistance of a color to fading under sunlight. 27 Light causes the color to weaken or its tone to change.
The science of light fastness includes standards that have been established to consistently evaluate light fastness, as well as laboratory analysis involving accelerated light fastness testing and subsequent evaluation. 28 The most common form of calibration for light fastness testing is the blue wool (BW) scale. 29 The light fastness standard consists of a graded set of blue colored wool in 8 light fastness steps, and is therefore referred to as the wool scale. The degrees of light fastness determined in this way are classified as follows: 1 = very poor, 2 = poor, 3 = moderate, 4 = fairly good, 5 = good, 6 = very good, 7 = excellent, 8 = maximum. 30 In order to determine the effect of paper surfaces on the fading behavior of the printed color, print samples were gradually faded using a Solarbox 1500 KFG-2400 Light Fastness Tester with a xenon system. Measurements were made in accordance with British Standards (BS) 4321 and the results were evaluated according to the blue wool scale. The color change (ΔE*ab) on the paper surfaces after the light fastness test was determined by calculating the difference between the color L*a*b* values measured with the spectrophotometer before and after fading.
After the fading test, differences in the printed colors on the test papers were detected (Table 2). The fact that the prints were made under the same environmental conditions and using the same ink reveals that the color changes were caused by the paper characteristics. However, the fact that the color changes are not directly proportional to the surface roughness of the paper shows that other surface and structural properties of the paper may also influence the light fastness of the ink. Therefore, the effect of other surface and structural parameters of the paper on light fastness should also be investigated.
CONCLUSION
Whether a print can be obtained without loss and at the desired color value depends largely on the surface structure of the paper. As the surface roughness of the test papers increased, the change in the printed ink color over time also increased. Therefore, considering that the color may change over time in multicolored prints, smoother papers with high wettability (low liquid contact angle) and high surface energy should be preferred.
To obtain a homogeneous and glossy ink film on the paper surface, the film thickness of the ink must be greater than the roughness of the paper surface. However, increasing the ink film thickness adversely affects other print quality parameters, and the ink takes longer to settle on the paper. Dot gain, color deviation, set-off and strike-through problems increase in printing. For these reasons, it is not a correct approach to increase the ink film thickness or ink density in printing works where high print gloss is desired. Instead, smooth-surface papers should be preferred and the ideal print gloss should be achieved by printing at optimum ink density values. The study showed that there is a direct connection between the surface roughness of the paper and the print gloss. In the test prints, higher print gloss was obtained with lower paper surface roughness. This result shows that the use of paper with low surface roughness is important in terms of enabling printability at lower solid ink density, reducing drying and printing problems.
Finally, this study showed that, in addition to the color pigment, paper as a printing substrate also had an effect on ink light fastness. Therefore, the printing substrate factor should also be considered in printing studies on light fastness sensitivity.
An Atypical Presentation of Motor Aphasia: A Case Report and Review of Literature
Broca's aphasia results from lesions involving the anterior perisylvian speech area. Patients have intact comprehension and writing but have labored, nonfluent speech with decreased linguistic output. We hereby present the case of a 47-year-old female who was operated on for a trigonal meningioma of the left lateral ventricle via a modified middle temporal gyrus approach and developed motor aphasia as a complication. She had intact comprehension and writing but decreased, labored linguistic output. It could not be labeled subcortical aphasia, as repetition was not preserved. Our case is the first of its kind, and we therefore propose that the posterior middle temporal gyrus area has a speech output function, a lesion of which could cause motor aphasia.
Introduction
Broca's aphasia is a common language disorder in which people ordinarily have impaired speech and preserved comprehension. In 1861, Broca described a patient, Louis Victor Leborgne, who had a loss of speech and paralysis yet no loss of understanding [1]. Leborgne was withdrawn and spoke merely one word, "tan". After his death, Broca performed an autopsy and found a lesion in the left frontal lobe. Thereafter, he performed autopsies on 12 patients and concluded that the posterior part of the inferior frontal gyrus is a domain of speech, and an insult to it leads to Broca's aphasia. A significant number of Broca's contemporaries could not agree with his teachings. However, the strongest counterargument was presented by Broca's intern, Pierre Marie, who stated that the left frontal convolution plays no role in speech production [2]. Marie proposed that Broca's aphasia was a combination caused by damage to Wernicke's region and subcortical structures (putamen and pallidum). Dejerine, a neurologist and adversary of Marie, defended Broca's original proposal, contending that damage to Broca's area was essential for Broca's aphasia, although he also acknowledged the vital contribution of other structures, such as the insula, parietal region, and underlying white matter [3,4]. However, Marie held that Dejerine and others failed to prove a consistent relationship between damage to the third frontal convolution and Broca's aphasia [5,6]. The discussion continues to the present day: Dronkers et al. utilized high-resolution MRI to examine Leborgne's brain and discovered that the lesion extended far beyond the inferior frontal gyrus, including structures such as the insula and inferior parietal lobe [7,8], and disagreements continue about the regions causing Broca's aphasia.
The current consensus is that the damage most likely incorporates parts of Broca's zone and some adjoining structures [7,9,10]. The exact neighboring structures, however, remain obscure. In our case, there was an intraventricular mass in the left lateral ventricle, which was operated on via the middle temporal gyrus approach and totally excised. With this approach, the ordinarily expected aphasias are sensory or transcortical aphasias (commonly sensory), not motor aphasia. However, our patient manifested pure motor aphasia, and hence we present an atypical case of transient Broca's aphasia developing as a complication.
Case Presentation
A 47-year-old female presented with a four-month history of headache and four episodes of generalized seizures. There were no visual disturbances, limb weakness, sensory deficits, behavioral changes, speech disturbances, or cranial nerve deficits. MRI of the brain showed a large, well-defined, lobulated, T1-isointense, T2/FLAIR heterogeneously hyperintense intraventricular mass lesion arising from the atrium of the left lateral ventricle, with intense heterogeneous enhancement on contrast and moderate perilesional edema in the left parietotemporal white matter with mass effect (Figures 1, 2). CT cerebral angiogram showed arterial feeders from the left posterior cerebral artery and venous drainage into the left internal cerebral vein.
FIGURE 2: Preoperative MRI brain contrast -B
A left temporal craniotomy was done, and a modified middle temporal gyrus approach (middle one-third) was used, as it was considered the safest with minimal risk of manipulation of the neighboring structures. With this approach, total excision of the lesion was successfully achieved. A postoperative CT scan was done to rule out any injuries to the neighboring structures, hematomas, or other complications acquired during the surgery (Figures 3, 4). Histopathological examination was diagnostic of meningioma, transitional type, WHO Grade I.
FIGURE 4: Postoperative CT scan brain -B
The patient developed motor aphasia postoperatively. On evaluation, the aphasia was of the pure motor (Broca) type, which is unusual considering the location of the tumor and the approach used. The aphasia was transient, with complete resolution over three weeks.
Discussion
Aphasia is a language impairment affecting the components of speech. There are six components of speech, i.e., fluency, comprehension, naming, repetition, reading, and writing. The Edwin Smith Papyrus reported the first case of aphasia, in a person with a traumatic injury to the temporal lobe. Its spectrum ranges from an inability to retrieve the names of objects, to put words together into sentences, or to read, to being completely inarticulate.
In Wernicke's aphasia, patients have severe comprehension deficits with an intact motor component. Hence, it is also called 'fluent' or 'receptive' aphasia. Reading, writing, repetition, and naming are impaired, and patients speak non-existent or irrelevant words without being aware of it.
Conduction aphasia is an uncommon type with a characteristic deficit in repetition alongside coherent (yet paraphasic) speech production.
Transcortical aphasias include motor, sensory and mixed transcortical aphasia. Their characteristic feature is preserved repetition. People with transcortical motor aphasia typically have intact comprehension and impaired speech production. People with sensory and mixed transcortical aphasia have poor comprehension and are unaware of their errors.
Expressive aphasia, also known as Broca's aphasia, is characterized by a loss of the capacity to produce language (spoken, manual, or sometimes written), although comprehension generally remains intact. Patients manifest labored spontaneous speech. Speech generally contains important words conveyed in very short phrases, known as "telegraphic speech". It is caused by acquired damage to the anterior regions of the brain, such as the left posterior inferior frontal gyrus or inferior frontal operculum, also described as Broca's area (Brodmann areas 44 and 45).
The following table summarizes the commonly known aphasias to aid understanding of the case (Table 1).
Broca's aphasia: area, posterior inferior frontal gyrus; fluency impaired; comprehension preserved; repetition impaired; naming impaired; reading impaired; writing preserved.
Mixed transcortical aphasia: area, both anterior and posterior speech areas preserved but disconnected from the rest of the brain; fluency impaired; comprehension impaired; repetition preserved; naming impaired; reading impaired; writing impaired.
TABLE 1: Types of aphasias
In our case there was preserved comprehension, while reading, naming, speech, and repetition were impaired. These features are consistent with pure motor aphasia. As described above, the tumor was resected via a modified middle temporal gyrus approach. According to standard teaching, the posterior temporal region is a seat of sensory aphasia, conduction aphasia, or sensory-type subcortical aphasia, and therefore the expected dysfunction with this approach was a sensory or transcortical aphasia; pure motor aphasia was challenging to explain.
Marie, in his study in 1906, showed that Broca's aphasia could infrequently develop without compromise of the left inferior frontal gyrus [5]. Similarly, in our case the tumor was approached through the posterior part of the middle temporal region, yet the patient had Broca's aphasia. In a recent article in 2007, Fridriksson et al. illustrated that Broca's aphasia developed without damage to the classical Broca's area [9]. This leads to the conclusion that Broca's area is not the only area whose injury may result in motor aphasia, and there could be other, unexplored areas for word output. This theory could explain the motor aphasia in our case.
In 2015, Fridriksson et al., in another article, explained that chronic Broca's aphasia is provoked not only by damage to Broca's area but also by damage to the left superior temporal gyrus [11]. There are no data to date demonstrating that lesions around the posterior aspect of the middle temporal area are associated with Broca's aphasia.
Okada and Hickok in 2006 showed that the superior temporal gyrus is involved in both speech perception and production [12]. But in our case, the lesion was approached through the middle temporal gyrus, which is distant from the superior temporal gyrus. Even if retraction injuries to the superior temporal gyrus were considered as a reason for the aphasia, some degree of sensory deficit would be expected; our patient, however, had no sensory deficits.
Our case is the first of its kind with transient motor aphasia due to an intervention involving the posterior aspect of the middle temporal gyrus. Hence we propose that the middle temporal gyrus could also have a role in speech articulation, and not just the usual posterior inferior frontal gyrus or the recently described superior temporal area.
Conclusions
From the above case, we can conclude that motor speech is more widely distributed than the dominant frontal lobe alone. Patients can develop motor aphasia despite the lesion being distant from the classical posterior inferior frontal gyrus. The middle temporal gyrus could have a role in motor language, though there is no literature supporting this. Hence this could be an anatomical variant of Broca's aphasia.
Such rare cases inform clinical decision making and help clinicians encountering similar situations act accordingly.
Additional Information
Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Firearm violence: a neglected “Global Health” issue
Populations around the world are facing an increasing burden of firearm violence on mortality and disability. While firearm violence affects every country globally, the burden is significantly higher in many low- and middle-income countries. However, despite overwhelming statistics, there is a lack of research, reporting, and prioritization of firearm violence as a global public health issue, and when attention is given it is focused on high-income countries. This paper discusses the impact of firearm violence, the factors which shape such violence, and how it fits into global public health frameworks in order to illustrate how firearm violence is a global health issue which warrants evidence-based advocacy around the world.
Background
Firearm violence is a pressing public health issue that is growing on a global scale. Mortality from firearms accounts for more than 250,000 deaths each year worldwide, yet firearms still sustain a massive commercial industry, with over 1 billion weapons in circulation as of 2017 [1,2]. 857 million of these firearms are in civilian hands, an estimate that has increased by 32% since 2006 [1]. Even as research on non-communicable diseases and injuries (NCDIs) continues to progress, the burden of firearms on global mortality has received less attention. In fact, research, reporting, and prioritization of firearm violence are often concentrated in high-income countries (HICs), despite this issue affecting all populations globally.
Given the increasing importance of firearm violence, this commentary aims to restate the impact of firearms on health using selected research on the burden of firearm violence. It will demonstrate how politics, globalization, and the spread of culture and ideals influence firearm violence on a global scale, and, recognizing the diversity in definitions of the term "global health," provides three definitions to demonstrate how firearm violence ought to be framed as a global health issue [3][4][5]. The need to recognize firearm violence as a global health issue is important to support prioritizing research and creating sound interventions, especially in low- and middle-income countries (LMICs).
The impact of firearm violence
Injury and mortality rates from firearm violence, high-risk populations, causal factors, and societal issues which impact the rates of firearm violence are available from the Global Burden of Disease (GBD) data (Table 1). However, civilian populations globally bear the majority of the burden of firearm violence; only 10% of mortality from firearms occurs in conflict situations [3,6]. Mortality from firearms is also disproportionately concentrated in LMICs in South America, in addition to the United States; the GBD data from 2017 estimate that 50% of mortality from firearms occurred in countries that make up just 10% of the global population [1]. Worldwide, populations show similar patterns of firearm mortality: in every country, death rates are higher for men than for women, and the highest risk is often among 20- to 24-year-olds. The majority of firearm deaths globally are in fact the result of homicides [1].
Overall, fatality from firearm violence in the US has not decreased in over a decade, even as homicide risk has decreased and suicide risk has increased over time; around 70% of homicides and 50% of suicides in the US involve firearms [7]. A disproportionate number of these homicides are concentrated among black males, while white males have the highest risk for suicide by firearms; firearm suicide also begins in younger populations and contributes to more than half of firearm mortality. At-risk populations also differ geographically in the US: suicide rates are 54% higher in rural areas, while homicide rates are 90% higher in urban centers. Given these statistics, the US is an outlier in mortality from firearms as compared to other nations worldwide.
The GBD data separate firearm violence into three categories: deaths from firearms, unintentional injuries from firearms, and self-harm from firearms [1]. Collectively, mortality from these three categories contributes to over 250,000 preventable deaths per year, and over 46,000 Disability Adjusted Life Years (DALYs) lost (Table 1) [1]. Mortality from firearms is also significantly higher in LMICs; those in Central and South America, most notably Guatemala, Venezuela, and El Salvador, have mortality rates of around 40 per 100,000, significantly higher than the global average of 6 per 100,000 [1]. In these countries, 2400 DALYs per 100,000 are lost to physical violence, compared to a global average of 171 per 100,000 [1].
Firearm violence also places a substantial burden on healthcare systems, economies, and societies around the world. In 2010 alone, the societal costs of firearm violence totaled $164 billion, the equivalent of 1.1% of the gross domestic product (GDP) of the United States in that same year. The 2015 Global Burden of Armed Violence report noted that almost USD 2 trillion could have been saved in a decade if the global homicide rate had been decreased from 7.4 to 3 deaths per 100,000; this is the equivalent of 2.64% of global GDP from 2010 [6].
Factors that influence global firearm violence
The global political landscape directly influences firearm violence, particularly in LMICs [8]. The dynamic between high-income and low-income countries around the world also shapes the burden of firearm violence as policies, trade, and globalization worsen the problem. For example, extensive supply chains and the import of arms from HICs like the United States to other countries around the world highlight the importance of considering firearm violence as a global health issue [9]. More recently, the COVID-19 pandemic caused a surge of more than 90 million USD worth of firearms being exported to LMICs, particularly in Asia [9]. As scholars begin to better understand the private industry's role in firearm violence, it is clear that such expansions in trade, supply chains, and marketing of arms on a global scale are contributing to the burden of firearm violence worldwide.
The global war on drugs has also extensively impacted the burden of firearm violence. In Mexico, government efforts to crack down on drug trafficking organizations resulted in an escalation of drug-related violence [10]. In addition, the political influence of HICs (such as the United States) has contributed to this issue, taking into account increasing drug consumption, loose firearm regulations, and regimes which are reportedly key actors in the drug trade [8]. In fact, the political climate in HICs like the United States has the influence to shape how firearm regulations are interpreted and enacted in LMICs around the world. Most recently, the Mexican government has sued gun manufacturers in the United States for facilitating the trafficking of weapons with their negligent practices [11]. Roughly 70% of the trafficked firearms in Mexico come from the U.S., and 17,000 homicides can be linked to these weapons annually. The estimated damage of these trafficked weapons in Mexico is nearly 2% of the country's GDP, which the government will seek in the lawsuit, aiming to reduce further homicides in Mexico [11].
Additionally, globalization plays a role in increasing firearm violence as dynamics between HICs and LMICs change over time. Increases in foreign imports around the world have escalated competition within arms markets to produce higher performing weapons that can fire multiple types of ammunition, increasing both the accessibility and lethality of firearms [9]. Increasing globalization has also been found to promote the openness of trade and weaken public authority, making small arms ownership more likely [12]. The demand for small arms in HICs, which are responsible for the majority of the manufacturing and trading of these weapons, has caused the proliferation of small arms in LMICs as they are recycled down the "economic ladder." [12] While research shows that attempting to limit globalization would not stop the trade of firearms, it is critical that the public health community more closely studies the movement of arms from HICs to LMICs [12].
Researchers have also suggested that "gun culture" from HICs (like the United States) has been "sold" to LMICs through various forms of media. In India, a country with historically low gun ownership rates, globalization has caused rates to increase [13]. In the city of Shivpuri, a sponsored program fast-tracked the firearm permit process for men who underwent vasectomies, allowing men to trade one aspect of their masculinity for another [13]. The shifting ideals surrounding the policies and accessibility of firearms caused by globalization have drastically impacted LMICs, a critical consideration for public health professionals as they begin to target this issue [12].
Finally, foreign policy can affect the arms trade and firearm violence in LMICs. In 2006, the United States was the only nation to dissent from the UN vote to implement stricter standards through an international arms trade treaty [14]. Traditionally, US policy positions on firearms have weakened efforts to advance international agreements and gun control policies, and have particularly affected Latin America, as legal loopholes have allowed a consistent flow of arms across the US-Mexico border [14]. Alternatively, in the 1990s, the United States suspended arms exports to Paraguay until it improved its arms policies, illustrating how foreign policy can influence firearm violence in both positive and negative ways [14].
It is also clear that local policy can be successful in reducing violence; in 2021, Colombia introduced gun-carrying restrictions in the cities of Bogotá and Medellín. Within 6 years, these cities saw a 22.3% reduction in firearm violence (adjusted for the standard annual reduction in control cities) [15]. Given this research, it is critical that global and local policy efforts focus on combating firearm violence.
Firearm violence is a global health issue
As the body of research on firearm violence continues to grow, the public health and medical communities have shifted towards treating firearm violence not only as a preventable condition but also as akin to an "infectious disease." [16] Like in traditional disease epidemics, individuals who are more susceptible to firearm violence typically share common exposures. The environment, social networks, socioeconomic status, and education all influence the prevalence of firearm violence among communities and individuals [16].
This shift in knowledge has informed a variety of public health interventions aimed at decreasing the burden of firearm violence. However, to strengthen these interventions, it is critical that firearm violence be framed as a global health issue. As is evident from the factors which shape firearm violence, this issue takes place on a global scale, affecting the world's most vulnerable populations. Politics, globalization, and the spread of culture and ideals through multimedia sources around the world have all contributed to the spread of firearm violence, which must be addressed by the global public health community [8,12,13].
Firearm violence also fits within the numerous frameworks and definitions proposed for 'global health.' In 2008, the United Kingdom launched a 5-year strategy to target global health issues, defined as "health issues where the determinants circumvent, undermine or are oblivious to the territorial boundaries of states, and are thus beyond the capacity of individual countries to address through domestic institutions." [3,17] Firearm violence is certainly a global health issue by this standard, in urgent need of research, attention, resources, and intervention. It is also beyond the capacity of individual nations to address through domestic institutions; many countries are struggling to build social and health systems in which firearm violence is no longer a cause of mortality, and due to the effects of globalization, it will take interventions in every country to curb the transfer of firearms between HICs and LMICs [12]. Koplan defines global health as "an area for study, research, and practice that places a priority on improving health and achieving health equity for all people worldwide." [4] Reducing mortality from firearm violence is an area of critical research and practice and has direct health and equity implications. By better understanding patterns of violence in at-risk populations around the world and synthesizing current research on successful interventions, this knowledge can be used to develop strategies to reduce the burden of firearm violence on individuals, communities, and economies.
Scholars also state that for an issue to be considered a global priority, it should have four tenets: a global conceptualization of health, the synthesis of population-based approaches, the central concept of equity in health, and a cross-sectoral, interdisciplinary approach [5]. Firearm violence fits well within this conceptualization of global health; while all countries experience varying degrees of burden on their economies, healthcare systems, and populations as a result of firearm violence, globally it affects everyone and therefore warrants solutions rooted in research, evidence-based policy, and interventions. Given the applicability of firearm violence to the current definitions and frameworks for global health, the need for transnational solutions, the ways in which firearm violence is influenced by global politics, and its vast impact on the health of world populations, it is apparent that firearm violence is a global health issue [8,12,13].
Conclusions
Given the existing evidence of negative worldwide health outcomes associated with firearm violence, it is imperative that this becomes a priority topic of discussion in the field of global public health. This commentary therefore argues that firearm violence needs to be increasingly framed as a major global health issue, to further energize a wider global community of activists, and to develop momentum around a narrative that is compelling in terms of health impact and known interventions. This pathway has served other global health priorities well, and firearm violence could follow the same path to success [18].
To achieve this change, a multidisciplinary research effort surrounding firearms is necessary to create strong solutions. As such, we recommend three avenues of research that are of high priority in relation to firearm violence and its pertinence to global health. First, the health community should study the commercial determinants of health associated with firearm violence. Commercial determinants of health are defined by Kickbusch (2016) as "strategies and approaches used by the private sector to promote products and choices that are detrimental to health." [19] Like many other industries, the firearms industry uses tactics such as marketing, lobbying, or corporate social responsibility to divert attention away from the negative health outcomes associated with its products, in this case weapons [9]. Studying the issue through this lens can shed valuable light on one of the root causes of firearm violence and on potential solutions.
Second, future research should be inclusive of both the direct and indirect health outcomes associated with firearm violence. Existing data have given the global public health community strong evidence of the direct burden of firearm violence on health. However, researchers are only beginning to understand how this violence can affect other aspects of health, such as the mental and physical health outcomes associated with grief, fear, PTSD, the costs of medical care, and other factors associated with firearm violence [20].
Finally, research must bring perspectives from LMICs to high-income countries, including those of the population groups most affected by firearms around the world. Studying the political economy, socio-cultural issues, and equity issues associated with firearm violence in different risk groups can help the global public health community tailor policies and interventions to address this issue not just in the United States, but worldwide. Ultimately, bringing the best science to bear on this global perspective will help enable evidence-based advocacy to both national and international audiences to change mindsets.
Citropten Inhibits Vascular Smooth Muscle Cell Proliferation and Migration via the TRPV1 Receptor
Vascular smooth muscle cell (VSMC) proliferation and migration play critical roles in arterial remodeling. Citropten, a natural organic compound belonging to the coumarin class and its derivatives, exhibits various biological activities. However, the mechanisms by which citropten protects against vascular remodeling remain unknown. Therefore, in this study, we investigated the inhibitory effects of citropten on VSMC proliferation and migration under high-glucose (HG) stimulation. Citropten abolished the proliferation and migration of rat vascular smooth muscle cells (RVSMCs) in a concentration-dependent manner. Citropten also inhibited, in a concentration-dependent manner, the expression of proliferation-related proteins, including proliferating cell nuclear antigen (PCNA), cyclin E1, and cyclin D1, and of migration-related markers such as the matrix metalloproteinases (MMPs) MMP2 and MMP9. In addition, citropten inhibited the phosphorylation of ERK and AKT, as well as hypoxia-inducible factor-1α (HIF-1α) expression, mediated by the Krüppel-like factor 4 (KLF4) transcription factor. Pharmacological inhibition of ERK, AKT, and HIF-1α also strongly blocked the expression of MMP9, PCNA, and cyclin D1, as well as the migration and proliferation rates. Finally, molecular docking suggested that citropten docks onto the binding site of transient receptor potential vanilloid 1 (TRPV1), like epigallocatechin gallate (EGCG), a well-known agonist of TRPV1. These data suggest that citropten inhibits VSMC proliferation and migration by activating the TRPV1 channel.
INTRODUCTION
Cardiovascular diseases and conditions, including myocardial infarction, atherosclerosis, stroke, ischemic heart disease, hyperplasia, and hypertension, are the leading causes of death worldwide. 1,2 Vascular smooth muscle cells (VSMCs) are prominent components of the vascular wall that play a crucial role in maintaining vascular homeostasis. Abnormal proliferation and migration of VSMCs following vascular injury are critical events for developing atherosclerosis and intimal hyperplasia. 3,4 Physiologically, VSMCs exist in a quiescent contractile state to regulate the vascular tone via vessel constriction. 5−8 Phenotypic switching of VSMCs is commonly observed in atherosclerosis, intimal hyperplasia, hypertension, and postangioplasty restenosis. 9,10 Elucidation of the mechanisms of VSMC phenotypic switching may provide novel therapeutic targets for the prevention and treatment of these diseases. Aberrant VSMC proliferation and migration play essential roles in the development of vascular diseases and conditions, such as atherosclerosis, restenosis, and hypertension. 11,12 Therefore, the development of new strategies against VSMC phenotypic switching, proliferation, and migration may aid the treatment of VSMC-related pathological conditions.
Transient receptor potential vanilloid type 1 (TRPV1) is a ligand-gated cationic channel with considerable Ca2+ permeability. TRPV1 is expressed in blood vessels in the skeletal muscle, mesenteric and skin tissues, aorta, and carotid arteries. 13 TRPV1 is activated by many physical and chemical stimuli, including noxious heat, acidic pH, inflammatory mediators, and vanilloid compounds. 14 In addition to its key role in neuronal functions, TRPV1 also regulates the vascular tone, blood pressure, and pathogenesis of cardiovascular diseases. 15,16 Tissue-specific activation of TRPV1 channels stimulates the endothelial nitric oxide synthase (eNOS)−nitric oxide (NO) pathway in vascular endothelial cells, thus providing protection against cardiovascular diseases by regulating endothelium-dependent vasodilation and promoting angiogenesis. Moreover, TRPV1 is abundantly expressed in VSMCs, and activation of TRPV1 by capsaicin leads to arteriole constriction. 17 Overexpression of TRPV1 is closely involved in inhibiting VSMC migration and the phenotype transitions induced by angiotensin II (Ang II). 18 Activation of TRPV1 blocked VSMC proliferation and migration by upregulating the expression of peroxisome proliferator-activated receptor alpha (PPARα). 19 Citropten (5,7-dimethoxycoumarin, limettin) is a coumarin derivative that exhibits various biological activities. It exerts antiproliferative effects on B16 melanoma cells 20 and preventive effects against chronic-mild-stress-induced depression in rats. 21 Recently, we demonstrated that citropten ameliorates osteoclastogenesis via the mitogen-activated protein kinase (MAPK) and phospholipase Cγ (PLCγ)/Ca2+ signaling pathways. 22 However, to date, no studies have investigated the effects of citropten on the proliferation and migration of VSMCs or the association between citropten and the TRPV1 channel. Therefore, we aimed to explore the roles and mechanisms of action of citropten in the regulation of VSMC physiology.
2.1. Citropten Inhibits High Glucose (HG)-Induced RVSMC Proliferation. To assess the effect of citropten on RVSMC proliferation, we examined the cytotoxicity of citropten in RVSMCs using the 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyl tetrazolium bromide (MTT) assay. As depicted in Figure 1B, citropten (5, 10, 20, and 40 μM) did not exert any cytotoxic effects on RVSMCs. Therefore, these concentrations were selected for subsequent experiments. HG was used to induce cell proliferation, as described previously. 23 Here, HG enhanced cell viability and proliferation. The HG-stimulated increase in cell proliferation was gradually inhibited by pretreatment with citropten in a concentration-dependent manner. The results of the cell counting kit-8 assay were similar to those of the MTT assays, indicating that citropten reduces the proliferation of RVSMCs (Figure 1C,D). Consistently, HG increased the mRNA and protein levels of cyclin D1, KLF4, proliferating cell nuclear antigen (PCNA), and other proliferation markers. However, pretreatment with citropten reversed these effects (Figure 1E,F). Decreased contractile marker levels, increased synthetic marker levels, and an increased proliferation rate are the main features of VSMC phenotypic switching. 4 As indicated in Figure S1, the effect of citropten on the HG-induced shift toward the synthetic phenotype was revealed by immunofluorescence and Western blotting of the contractile marker SM22. HG downregulated SM22, but pretreatment with citropten recovered its expression. These results suggest that citropten prevents HG-induced phenotypic switching and proliferation of VSMCs.
2.2. Effect of Citropten on HG-Induced RVSMC Migration. Wound healing and transwell assays were performed to determine the effect of citropten on RVSMC migration. Migration rates were recorded at 0 and 48 h. As shown in Figure 2A,B, HG promoted wound closure and cell migration after 48 h compared with those in the untreated cells. However, pretreatment with citropten reversed these effects. Consistently, the levels of migration-related markers, such as matrix metalloproteinase (MMP)-2 and MMP9, increased in response to HG stimulation, but pretreatment with citropten completely reversed these effects (Figure 2C,D).
2.3. Citropten Inhibits the HG-Activated AKT and Extracellular Signal-Regulated Kinase (ERK) Pathways in RVSMCs. The ERK and AKT pathways are closely associated with VSMC proliferation and migration. 24,25 Therefore, the effects of citropten on the ERK and AKT pathways were investigated in this study. As depicted in Figure 3A, pretreatment with citropten abolished the HG-enhanced AKT and ERK phosphorylation in a concentration-dependent manner. Pharmacological inhibitors of AKT (LY294002) and ERK (PD98059) completely blocked HG-induced cell migration and proliferation (Figure 3B,C). Interestingly, these inhibitors enhanced the inhibitory effects of citropten under cotreatment conditions. Similar results were observed in the proliferation and migration assays (Figure 3D,E). Thus, these data indicate that citropten inhibits HG-induced cell migration and proliferation by inhibiting the AKT and ERK pathways.
2.4. Citropten Blocks the HG-Activated HIF-1α Signaling Pathway in RVSMCs. HIF-1α is potentially associated with the phenotypic switching of VSMCs. −28 To examine whether citropten inhibits the migration and proliferation of RVSMCs via the HIF-1α pathway, we investigated the expression of HIF-1α under citropten treatment in the absence or presence of HG. As shown in Figure 4A,B, HG strongly increased HIF-1α expression at the transcript and protein levels. Pretreatment with citropten significantly blocked HIF-1α expression (Figure 4C). Moreover, a pharmacological inhibitor of HIF-1α (KC7F2) not only downregulated the expression of proliferation and migration markers but also enhanced the inhibitory effects of citropten on these targets. These results suggest that citropten ameliorates the HG-induced increase in the migration and proliferation of VSMCs mediated by the HIF-1α signaling pathway.
2.5. Citropten Prevents HG-Induced ROS Production in RVSMCs. Excess ROS bioactivity is a key mechanism underlying the phenotype switching of VSMCs. 10 Therefore, we investigated the effect of citropten on vascular ROS production in this study. As shown in Figure 5A, HG significantly increased ROS production in RVSMCs; however, citropten markedly inhibited this effect. Similarly, citropten inhibited HG-induced mitochondrial ROS production (Figure 5B). To validate the role of ROS, cells were pretreated with NAC (N-acetyl-L-cysteine, an ROS inhibitor) for 1 h and citropten for 3 h, followed by incubation in the absence or presence of HG for 24−48 h. NAC strongly blocked the HG-induced expression of MMP9, cyclin D1, PCNA, KLF4, and HIF-1α. Moreover, the inhibitory effects of the citropten and NAC combination were stronger than those of either compound alone (Figure 5C). Similar results were obtained in the proliferation and migration assays (Figure 5D,E). Therefore, citropten inhibits the proliferation and migration of RVSMCs by suppressing ROS production.
2.6. Citropten Inhibits RVSMC Proliferation and Migration via the TRPV1 Channel. −19 To explore whether TRPV1 is involved in the citropten-mediated alteration of RVSMC physiological functions, we used a selective antagonist of TRPV1, capsazepine (CPZ). Pretreatment with CPZ reversed the effects of citropten on migration and proliferation markers at the transcript and protein levels (Figure 6A,B). Cell proliferation and migration assays showed similar results (Figure 6C,D). To confirm these observations, cells were transfected with a small interfering RNA against TRPV1. Consistently, after TRPV1 knockdown, citropten no longer affected HG-stimulation-induced cell migration and proliferation (Figure 6E). Docking studies of epigallocatechin gallate (EGCG, a well-known agonist of TRPV1) 29 and citropten were performed based on a homology model of TRPV1. As shown in Figure 6F, the docked structures of citropten and EGCG showed hydrogen bond interactions with Ala566 and Asn551. Additionally, the coumarin scaffold of citropten revealed hydrophobic interactions with the side chain of Leu515 in TRPV1, with a docking pose similar to that of EGCG (Figure 6F). The binding affinity and root-mean-square deviation (RMSD) between citropten and TRPV1 were −6.3 kcal/mol and 1.422 Å, respectively, whereas those between EGCG and TRPV1 were −8.0 kcal/mol and 0.009 Å, respectively. Analysis of the docking models of TRPV1 and citropten suggested that citropten acts as a TRPV1 agonist. Taken together, our data suggest that citropten inhibits the HG-induced proliferation and migration of RVSMCs via the TRPV1 channel.
DISCUSSION
Diabetes is a primary risk factor for atherosclerosis, stroke, coronary heart disease, and dyslipidemia. An increased risk and accelerated development of atherosclerosis have been reported in patients with diabetes and high blood pressure. 30 In this study, we demonstrated a novel role of citropten in regulating VSMC proliferation and migration in response to HG. The mRNA and protein expression of proliferation markers (cyclin D1, cyclin E1, and PCNA) and migration targets (MMP2 and MMP9) was inhibited by citropten in HG-challenged cultured RVSMCs. Citropten greatly decreased the RVSMC proliferation and migration rates and the ROS production induced by HG, mediated by the AKT, ERK, and HIF-1α signaling pathways. The effect of citropten was partly dependent on TRPV1 receptor activation.
Abnormal proliferation and migration of RVSMCs play crucial roles in the development of atherosclerotic lesions. Among the multiple factors that drive the development of atherosclerosis, hyperglycemia is a major causative factor. Coutinho et al. reported that the risk of cardiovascular events starts at concentrations below the nondiabetic glucose range (<6.1 mM) and continues to increase exponentially within the diabetic glucose range (>11.1 mM). 31 HG triggered structural and functional changes in VSMCs involved in diabetic atherosclerosis by regulating several signaling pathways, including the generation of advanced glycation end products and oxidative stress. 32 In addition, HG activated the expression of genes participating in the ERK-dependent mitogenic response and enhanced mitochondrial dysfunction, endoplasmic reticulum stress, and ROS accumulation in VSMCs. All of these factors stimulate the proliferation and migration of VSMCs. Therefore, we chose HG as the in vitro stimulant. Citropten (10−40 μM) significantly inhibited the proliferation and migration of RVSMCs induced by HG (25 mM). Pretreatment with citropten reduced the expression of cyclin D1 and PCNA, which are proliferation markers. We designed a wound healing test and a transwell cell migration assay to examine the migration ability of RVSMCs in response to HG stimulation. Citropten inhibited wound closure and cell migration, consistent with its effects on the HG-induced expression of MMP2 and MMP9 at the mRNA and protein levels in RVSMCs. Citropten also increased the protein levels of SM22, a specific contractile marker of RVSMCs. −35 The MAPK/ERK pathway is reported to be closely related to cell proliferation, differentiation, migration, senescence, and apoptosis. 24 The PI3K/AKT signaling pathway is involved in various cellular processes, such as glucose metabolism, apoptosis, proliferation, and migration. 36 In this study, we observed that citropten also reduced the phosphorylation levels of AKT and ERK in a concentration-dependent manner. These results align with those of our previous study on the inhibitory effect of citropten on osteoclastogenesis in RAW264.7 cells. 22 Moreover, inhibition of AKT and ERK activity by pharmacological inhibitors not only inhibited the HG-induced cell migration and proliferation but also enhanced the inhibitory effects of citropten on the HG-induced expression of related molecules, including MMP2, MMP9, PCNA, cyclin D1, cyclin E1, and KLF4. These results indicate that citropten inhibits HG-induced migration and proliferation, in part through inhibition of the AKT and ERK pathways.
HIF-1α is an important determinant of healing outcomes. It contributes to all stages of wound healing via cell division, growth factor release, and matrix synthesis. 37 HIF-1α was reported to promote the proliferation and migration of pulmonary arterial SMCs via activation of Cx43. 26 The knockdown of HIF-1α inhibits the proliferation and migration of outer root sheath cells. 27 In addition, HIF-1α expression is upregulated by HG stimulation, indicating that HIF-1α may define a novel strategic target for intervention in the cardiac complications of patients with diabetes. 38 Therefore, we examined the role of HIF-1α in the citropten-mediated inhibition of HG-induced migration and proliferation. In this study, citropten blocked HIF-1α expression in a concentration-dependent manner. KC7F2 (a selective inhibitor of HIF-1α) strongly prevented RVSMC proliferation and migration. Importantly, cotreatment with KC7F2 and citropten showed greater inhibitory effects than KC7F2 or citropten treatment alone. Therefore, the downregulation of HIF-1α is associated with the antimigratory and antiproliferative effects of citropten.
Excessive ROS accumulation causes vascular cell damage, activation of metalloproteinases, and deposition of extracellular matrix, collectively leading to vascular remodeling. 39 Therefore, we investigated the effect of citropten on ROS production in whole cells and mitochondria and the relationship between this inhibitory effect and cell migration and proliferation. Incubation with citropten significantly decreased the HG-induced increase in intracellular and mitochondrial ROS production. Moreover, ROS inhibition by NAC not only suppressed the HG-induced cell migration and proliferation rates and the expression of related proteins but also enhanced the inhibitory effect of citropten on these proteins. As mentioned in a previous study, citropten reduced ROS production. 22 Like several antioxidants, citropten exhibited potential and diverse effects for the prevention and treatment of cardiovascular diseases.
TRPV1 is a nonselective cation channel with a preference for calcium transmission. 17 The TRPV1 channel is involved in the regulation of calcium signaling, which is crucial for many cellular processes, including proliferation, apoptosis, secretion of cytokines, and T cell activation. 40 The activation of TRPV1 can reduce blood pressure and improve vascular damage. 41 Research on type 2 diabetes mellitus has confirmed that activation by capsaicin can improve glucose homeostasis, ameliorate hyperglycemia-induced endothelial dysfunction, and prevent diabetic cardiovascular complications. Hao et al. found that TRPV1 activation through chronic dietary capsaicin reduced vascular dysfunction by preventing the generation of oxidative stress in mice with high-salt-intake-induced hypertension. 42 In the present study, citropten was identified as a possible novel agonist of TRPV1, like EGCG, a well-known agonist of TRPV1, as indicated by the molecular docking results. Both EGCG and citropten formed hydrogen bonds with TRPV1 at Ala566 and Asn551 and hydrophobic interactions at Leu515. In addition, our data demonstrated that citropten exhibits a considerable ability to inhibit the HG-induced proliferation and migration of RVSMCs via TRPV1 channels, since both a pharmacological antagonist and silencing RNA against TRPV1 reversed the inhibitory effect of citropten. Our results are consistent with previous data demonstrating that TRPV1 activation inhibits the phenotypic switching, oxidative stress, migration, and proliferation of RVSMCs.
In conclusion, this study revealed that citropten inhibited the HG-enhanced migration and proliferation markers of RVSMCs at the transcript and protein levels. Additionally, it inhibited cell proliferation and migration. Pretreatment with citropten reduced ROS production in RVSMCs. The effects of citropten were mediated via the AKT, ERK, and HIF-1α signaling pathways, with KLF4 as the main transcription factor. Moreover, the protective effects of citropten were mediated via the TRPV1 channel. Considering the increasing incidence of diabetes and vascular diseases worldwide, which pose a serious threat to public health, the application of citropten may be beneficial for the prevention and treatment of diabetes-associated cardiovascular complications.
4.3. Culture and Treatment. The rat vascular smooth muscle cell line (RVSMC) was obtained from Cell Applications, Inc. (San Diego, CA, U.S.A.). Cells were cultured in DMEM (Welgene, Gyeongsan-si, Gyeongsangbuk-do, Korea) containing 10% FBS and 1% (v/v) penicillin/streptomycin at 37 °C in a humidified incubator with 5% CO2. All experiments were conducted with cells between passages 5 and 11. High glucose (HG; 25 mM) was used as a model to investigate the role of citropten in VSMC migration and proliferation, as described in ref 23. Citropten was diluted in DMSO and applied as described in the respective figure legends.
4.4. Cell Viability. To determine the cytotoxicity of citropten, an MTT assay was conducted. Briefly, RVSMCs were seeded in 96-well plates at a density of 1 × 10^4 cells per well. The cells were then pretreated with various concentrations of citropten and treated with HG for an additional 48 h. For the MTT assay, the medium was discarded and the cells were incubated with MTT solution (final concentration, 0.5 mg/mL) for 30 min. The formed formazan crystals were solubilized with DMSO and quantified by the absorbance at 550 nm.
4.5. Cell Proliferation Assay. A CCK-8 kit was used to detect the effect of citropten on the proliferation of RVSMCs according to the manufacturer's protocol. Briefly, RVSMCs were seeded in 96-well plates at a density of 1 × 10^4 cells/well. Upon adherence to the plates, cells were starved with serum-free DMEM overnight. Next, they were treated with different concentrations of citropten (10, 20, and 40 μM) and incubated for 48 h. CCK-8 solution was added directly to each well (20 μL/well), and cells were incubated for 1−4 h in the incubator; the optical density (OD) at 450 nm was then measured for each well using an automatic microplate reader.
4.6. Transwell Migration Assay. A transwell chamber was used to assess the migratory ability of RVSMCs. Briefly, RVSMCs were seeded into the upper chamber at a density of 1 × 10^5 cells/well in serum-free DMEM. The lower chambers were filled with 800 μL of DMEM with 20% FBS. The next day, cells were exposed to different concentrations of citropten (10, 20, and 40 μM) in the absence or presence of HG. After incubation for 48 h, the lower side was fixed with 4% paraformaldehyde. After 20 min, the migrated cells were stained with 0.1% crystal violet staining solution for 15 min. The migrated cells were imaged in randomly selected fields using an inverted fluorescence microscope (Olympus U-CMAD3, Tokyo, Japan), and the average number of migrated cells was determined using ImageJ software.

4.7. Wound Healing Assay. Briefly, RVSMCs were treated with different concentrations of citropten (10, 20, and 40 μM) for 24 h in DMEM in the absence or presence of HG. Then, a horizontal wound was scratched with a sterile 10 μL pipet tip, and suspended cells were washed away twice with PBS. Images of each well were captured at 0 and 48 h. Wound closure was estimated from the wound widths at 0 and 48 h measured microscopically using ImageJ v1.42l analysis software, and was calculated as (area(0 h) − area(48 h))/area(0 h).
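To make the closure metric concrete, the stated equation can be written as a one-line function. The following is a minimal Python sketch of the formula only; the function and variable names are ours (hypothetical) and do not represent the authors' ImageJ workflow.

```python
def wound_closure(area_0h: float, area_48h: float) -> float:
    """Fractional wound closure: (area(0 h) - area(48 h)) / area(0 h)."""
    return (area_0h - area_48h) / area_0h

# Illustrative values: a scratch shrinking from 1.00 to 0.35 (arbitrary
# area units) corresponds to 65% wound closure.
print(f"{wound_closure(1.00, 0.35):.0%}")  # -> 65%
```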
4.9. Real-Time PCR. Total RNA was extracted from cultured VSMCs using TRI Reagent Solution (Invitrogen, Massachusetts, USA) according to the manufacturer's instructions. Reverse transcription of the RNA was performed using an RT PreMix kit (Enzynomics, Daejeon, Korea). Real-time PCR reactions were run on a CFX Connect Optics Module instrument (Bio-Rad, California, USA) using BioFACT Real-Time PCR Master Mix, with the following cycling conditions: predenaturation for 10 min at 95 °C, followed by 45 cycles of 15 s at 95 °C and 60 s at 60 °C. The mRNA levels of the genes were normalized to gapdh. Relative gene expression was calculated as 2^(−ΔΔCt), where ΔCt = Ct(target) − Ct(gapdh) and ΔΔCt is the difference in ΔCt between the treated and control samples. The PCR primer sequences used in this study are listed in the Supporting Information, Table S1.
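As an arithmetic illustration, the following is a minimal Python sketch of the standard Livak 2^(−ΔΔCt) calculation, assuming the conventional control-sample calibrator; all Ct values and names are hypothetical and chosen only to show the arithmetic.

```python
def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Fold change of a target gene versus the control group, normalized
    to gapdh (Livak 2^(-ddCt) method)."""
    d_ct_sample = ct_target - ct_gapdh              # delta-Ct, treated sample
    d_ct_control = ct_target_ctrl - ct_gapdh_ctrl   # delta-Ct, control sample
    dd_ct = d_ct_sample - d_ct_control              # delta-delta-Ct
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values in which treatment raises target mRNA ~4-fold:
print(relative_expression(ct_target=24.0, ct_gapdh=18.0,
                          ct_target_ctrl=26.0, ct_gapdh_ctrl=18.0))  # -> 4.0
```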
4.10. Reactive Oxygen Species (ROS) Assay. ROS levels in VSMCs were detected using DCFH-DA as a probe. Briefly, RVSMC cells were incubated with DCFH-DA (5 μM) at 37 °C for 30 min in the dark. The cells were then rinsed twice with PBS, and the fluorescence intensity was measured with a spectrofluorometer at excitation and emission wavelengths of 485 and 530 nm, respectively.

4.11. Immunofluorescence Staining. VSMCs were seeded on coverslips in 12-well plates for 24 h. After the designated incubation, the cells were fixed with 4% PFA for 20 min and permeabilized with 0.2% Triton X-100 for 10 min at RT. Next, the cells were blocked with 5% BSA for 1 h and incubated with the primary antibody against SM22 (1:200) at 4 °C overnight, followed by incubation with a secondary antibody (1:1000; goat anti-mouse Alexa Fluor 488, Cat. #A28175; Thermo Fisher Scientific) in the dark for 4 h at RT. They were then rinsed several times with PBS. The cells were mounted with ProLong Gold Antifade Mountant containing DAPI (Invitrogen, Waltham, MA, U.S.A.) and viewed using an EVOS FL Cell Imaging System (Thermo Fisher Scientific, Waltham, MA, U.S.A.).
4.12. Silencing RNA Transfection. Cells were transfected with silencing RNA (siRNA): TRPV1 siRNA (sc-108093) and control siRNA (sc-37007) were purchased from Santa Cruz Biotechnology, Inc. siRNAs were transfected into VSMC cells using Lipofectamine RNAiMAX reagent (Thermo Fisher Scientific). When VSMCs reached approximately 70−80% confluence, a mixture of siRNA and Lipofectamine RNAiMAX reagent was added to the cells. The medium was replaced at 6 h post-transfection, and citropten was applied as indicated. Transfection efficacy was assessed by Western blotting.
4.13. MitoSOX Red Mitochondrial Superoxide Staining. The assumption that mitochondria serve as the major intracellular source of ROS has been largely based on experimental evidence. Therefore, we detected superoxide in the mitochondria of live cells under HG stimulation using MitoSOX Red mitochondrial superoxide indicator according to the manufacturer's instructions. Briefly, RVSMC cells were seeded in 48-well plates at 3 × 10^4 cells/well and treated as described in the figure legends. Cells were stained with 5 μM MitoSOX reagent solution prepared in HBSS buffer for 30 min at 37 °C, protected from light. Then, cells were washed gently three times with warm buffer and visualized under a microscope.
4.14. Molecular Docking. The structure of TRPV1 was downloaded from the RCSB Protein Data Bank (PDB ID 7L2H). The protein was prepared for docking using BIOVIA Discovery Studio v21.1 (Dassault Systèmes, San Diego, CA, U.S.A.), and computational docking was performed using AutoDockTools 1.5.6 and AutoDock Vina 1.1.2. 29 The 3D structure of citropten was built using Spartan'18 (Wavefunction, Inc., Irvine, CA, U.S.A.). The strongest binding was taken to be the pose with the lowest binding affinity score and root-mean-square deviation (RMSD). The interaction between citropten and 7L2H was predicted, and BIOVIA Discovery Studio v21.1 was used to illustrate the graphical result. π−π bonds, hydrogen bonds, van der Waals interactions, and the interaction distances between amino acids and the active sites of citropten were predicted.
4.15. Statistical Analysis. Three independent biological experiments were performed, and the quantitative data are presented as the mean ± standard deviation. One-way analysis of variance (ANOVA) followed by the Student−Newman−Keuls test was performed to compare the means. P < 0.05 was considered to indicate a statistically significant difference.
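For illustration, the ANOVA step can be reproduced with standard scientific-Python tools. The sketch below runs scipy's one-way ANOVA on hypothetical OD450 readings; the Student−Newman−Keuls post-hoc test is not part of scipy and is therefore not shown here.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical OD450 readings for three groups (n = 3 biological replicates).
control  = np.array([0.42, 0.45, 0.40])
hg       = np.array([0.81, 0.78, 0.84])
hg_citro = np.array([0.55, 0.58, 0.52])

# One-way ANOVA: p < 0.05 indicates that at least one group mean differs;
# pairwise SNK comparisons would follow in dedicated software.
f_stat, p_value = f_oneway(control, hg, hg_citro)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Mean +/- SD, as reported in the figures.
for name, grp in [("control", control), ("HG", hg), ("HG+citropten", hg_citro)]:
    print(f"{name}: {grp.mean():.3f} +/- {grp.std(ddof=1):.3f}")
```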
Supporting Information

The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsomega.4c03539. Primer sequences for real-time PCR analysis (Table S1); the effect of citropten on SM22 expression in RVSMCs under HG stimulation (Figure S1); the expression of TRPV1 after transfection with siRNA TRPV1 in RVSMCs (Figure S2) (PDF)
Figure 1. Citropten inhibited HG-induced RVSMC proliferation. (A) Structure of citropten. (B, C) Cell cytotoxicity was evaluated by MTT assay. (D) Cell proliferation was examined by CCK-8 assay. (E) Real-time PCR was conducted to detect the mRNA expression of proliferation markers. (F) Western blot analysis was used to analyze protein levels in whole RVSMCs. Data are expressed as the mean ± SD. All experiments were performed in triplicate. ## p < 0.01 and ### p < 0.001 compared with control; *p < 0.05, **p < 0.01, and ***p < 0.001 compared with HG incubation.
Figure 2. Citropten reduced HG-induced RVSMC migration. RVSMCs were pretreated with citropten for 1 h, followed by HG treatment for 48 h. (A) Wound-healing assay; the black lines indicate the clear zone after scratching (0 h). (B) Cell migration was evaluated by a transwell migration assay. Scale bar = 100 μm. (C) PCR results were used to determine the expression of mmp2 and mmp9 mRNA. (D) Western blot analysis was used to detect protein levels in whole RVSMCs. Data are expressed as the mean ± SD. All experiments were performed in triplicate. # p < 0.05 and ### p < 0.001 compared with control; *p < 0.05, **p < 0.01, and ***p < 0.001 compared with HG treatment.
Figure 3. Citropten prevented HG-induced migration and proliferation in RVSMCs via the ERK and AKT signaling pathways. (A) RVSMCs were pretreated with citropten for 1 h before being incubated with HG for 30 min. (B−E) RVSMCs were preincubated with LY294002 or PD98059 and/or citropten for 1 h and then exposed to HG for 24−48 h. (B, C) Western blot analysis was used to analyze the levels of the indicated proteins in cell lysates. (D) The migration effect was assessed by transwell assay. Scale bar = 100 μm. (E) A CCK-8 assay was conducted to evaluate cell proliferation. Results are presented as means ± SD (n = 3); # p < 0.05, ## p < 0.01, and ### p < 0.001 compared with control; ***p < 0.001 compared with HG treatment; $ p < 0.05, $$ p < 0.01, and $$$ p < 0.001 compared with citropten plus HG treatment.
Figure 4. Citropten prevented HG-induced migration and proliferation in RVSMCs via the HIF-1α pathway. (A, B) RVSMCs were pretreated with citropten for 1 h before being incubated with HG for 24 h. Results were assessed by PCR (A) and Western blot assay (B). (C−E) RVSMCs were preincubated with KC7F2 and/or citropten for 1 h and then exposed to HG for 24−48 h. (C) Western blot analysis was used to analyze the levels of the indicated proteins in cell lysates. (D) The migration effect was assessed by transwell assay. Scale bar = 100 μm. (E) Cell proliferation was measured by CCK-8 assay. Results are presented as means ± SD (n = 3); # p < 0.05 and ### p < 0.001 compared with control; *p < 0.05, **p < 0.01, and ***p < 0.001 compared with HG treatment; $$$ p < 0.001 compared with citropten plus HG treatment.
Figure 5. Citropten prevented HG-induced ROS production in RVSMCs. (A, B) RVSMCs were pretreated with citropten for 1 h and then stimulated with HG for an additional 24 h. Intracellular superoxide levels were measured by DCFH-DA assay (A). Mitochondrial superoxide generation was examined using MitoSOX staining (B). Scale bar = 200 μm. (C−E) RVSMCs were pretreated with citropten in the presence or absence of NAC for 1 h, before 24−48 h of exposure to HG. (C) Western blotting was used to analyze the levels of proteins in cells. (D) The migration effect was assessed by transwell assay. (E) A CCK-8 assay was conducted to evaluate cell proliferation. Results are presented as means ± SD (n = 3). ## p < 0.01 and ### p < 0.001 compared with control; *p < 0.05, **p < 0.01, and ***p < 0.001 compared with HG treatment; $$ p < 0.01 and $$$ p < 0.001 compared with citropten plus HG treatment.
Figure 6. Citropten inhibited RVSMC proliferation and migration through the TRPV1 channel. (A−D) RVSMCs were pretreated with citropten in the presence or absence of CPZ for 1 h, before 24−48 h of exposure to HG. (A) Results were assessed by PCR. (B) Western blotting was used to analyze the levels of the indicated proteins in whole-cell lysates. (C) The migration rate was assessed by a transwell assay. Scale bar = 100 μm. (D) A CCK-8 assay was conducted to determine the cell proliferation rate. (E) RVSMC cells were transfected with an siRNA against TRPV1 for 24 h, followed by incubation with citropten in the absence or presence of HG for 24 h. The results were evaluated by Western blotting. (F) Proposed docking models, in 3D and 2D, of the TRPV1 homology model with citropten (a, c) and EGCG (b, d). ## p < 0.01 and ### p < 0.001 compared with control; ***p < 0.001 compared with HG treatment with or without siRNA control transfection; $$$ p < 0.001 compared with citropten plus HG treatment with or without siRNA control transfection. All experiments were performed in triplicate.
The Role of Network Structure and Initial Group Norm Distributions in Norm Conflict
Social norms can facilitate societal coexistence in groups by providing an implicitly shared set of expectations and behavioral guidelines. However, different social groups can hold different norms, and lacking an overarching normative consensus can lead to conflict within and between groups. In this paper, we present an agent-based model that simulates the adoption of norms in two interacting groups. We explore this phenomenon while varying relative group sizes, homophily/heterophily (two features of network structure), and initial group norm distributions. Agents update their norm according to an adapted version of Granovetter's threshold model, using a uniform distribution of thresholds. We study the impact of network structure and initial norm distributions on the process of achieving normative consensus and the resulting potential for intragroup and intergroup conflict. Our results show that norm change is most likely when norms are strongly tied to group membership. Groups end up with the most similar norm distributions when networks are heterophilic, with small to middling minority groups. Highly homophilic networks show high potential for intergroup conflict and low potential for intragroup conflict, while the opposite pattern emerges for highly heterophilic networks.
Introduction
In this chapter, we study the impact of network structure and initial group norm distributions on the process of arriving at a normative consensus between groups and the potential for intragroup and intergroup conflict that might emerge under different conditions. To this end, we first provide a brief theoretical overview of social norms, normative group conflict, and the process of finding consensus through social influence. Second, we give an overview of the role that network structure, as well as the initial distributions of norms, can play in this process. Specifically, we argue that homophily/heterophily (preference for forming connections to similar/dissimilar others) between members of different groups, relative group sizes, and the initial distribution of norms within groups are all important factors for reaching normative consensus, and consequently relevant determinants of conflict potential. Based on this reasoning, we develop an agent-based model that simulates social networks of agents from two different social groups where each agent holds one of two social norms. In an adapted version of Granovetter's threshold model [1], each agent updates its social norm by comparing the proportion of norms held by its immediate neighbors to an internal threshold drawn from a uniform distribution. Agents thus "observe" the "openly displayed behavior" of their neighbors and adapt their own behavior accordingly if enough of their neighbors display a different norm. We apply this model to different network structures, defined by relative group sizes and homophily/heterophily between agents from different groups. This allows us to assess the impact of these structural network properties on the process of reaching normative consensus and the associated conflict potential. In addition, we run our model for different levels of initial group norm distributions, so that we can also assess the influence of alignment (or independence) of norms and social group membership. We define and examine three relevant outcomes: the degree to which norm distributions change, the degree to which the difference in norm distributions between the two groups changes, and the potential for conflict within and between the groups. Lastly, we discuss our results with respect to their applicability, the limitations of our model, and possible directions for future research.
Social Norms
Social norms can be defined as unwritten behavioral rules [2] or "social standards that are accepted by a substantial proportion of the group" [3]. They are a shared set of situation-specific behaviors that facilitate social interaction by providing an implicitly shared set of expectations and behavioral guidelines [4]. Such behaviors can range from an implicit dress code at work, to the expression of religious and political symbols, or (not) interacting with other social groups. Norms are implicitly negotiated between members of a group and enforced through informal sanctions, such as gossip, censoring or ostracism [4]. They are passed through generations via socialization processes in childhood [5] and are, in contrast to laws, not necessarily enforced by an institution. Norms come in multiple types; for example, prescriptive norms define behaviors that one should enact (e.g. "offering elderly people a seat on the subway"), while proscriptive norms define undesirable behaviors that one should avoid (e.g. "interrupting people while they speak"). The most important distinction for our purposes is between injunctive and descriptive norms. Injunctive norms focus on beliefs about how people should act, while descriptive norms are defined by the observation of how people actually do act [6,7]. For instance, "everybody should recycle" is an injunctive norm, while the observation that many people do not recycle represents a descriptive norm [7]. Both types of norms are important determinants of behavior, but previous research suggests that injunctive norms primarily elicit behavioral change by changing attitudes [6,8], while descriptive norms directly impact behavior [9]. In this chapter, we are interested in descriptive norms, because they are directly inferred from the observed behavior of others. Injunctive norms can differ from directly observed behavior, and can involve more complex cognitive processes [5], which are beyond the scope of our model. Therefore, when we are referring to social norms with respect to our model, we are specifically addressing descriptive social norms.
Normative Conflict
A large body of previous research has focused on the potential for positive impact of social norms on behavior. Predominantly, these studies were interested in changing individual beliefs or behavior by presenting normative information at odds with the individual's current beliefs or behavior. Examples include the reinforcement of nondelinquent behavior through the influence of peers [8], positive effects of punishment on cooperative behavior [10], effects of social norms on compliance to vaccination programs [11], reduction of binge-drinking in college students [12] and littering [7]. However, inconsistent norms do not only elicit behavioral change; they can lead to interpersonal and intergroup conflict [13]. The potential risk of such normative conflicts is especially high in multicultural contexts where different cultural groups must coexist [14]. A recent example of normative conflict in Europe is women wearing a veil to cover their face in public. This practice is a prescriptive social norm in some predominantly Muslim countries and it has elicited mixed reactions when immigrants engaged in the practice in their new countries [15]. Some western countries such as France, Belgium and Switzerland have banned the practice. In France, lawmakers claimed that a ban was necessary to ensure "peaceful cohabitation" [16]. Likewise, in Germany, face veils have been controversially discussed in the past years: For instance, the German Minister of the Interior stated "[...] we reject this. Not just the headscarf, any full-face veils that only shows eyes of a person [...] It does not fit into our society for us, for our communication, for our cohesion in the society ... This is why we demand you show your face" [17]. This backlash reflects an underlying normative conflict, with a large majority (81%) of Germans supporting a ban in public institutions and a substantial group (51%) even supporting a general ban. Only a minority of the national population (15%) indicate that they are not in favor of any kind of regulation [18].
However, such normative societal conflicts exist not only along established cultural and religious divides, but can cover a wide array of topics and elicit intergroup and intragroup conflicts [13]. For instance, gun ownership is a controversial normative debate within U.S. society [19], involving subgroups with different cultural orientations [20]. Abortion is another topic debated worldwide, with disagreements concerning women's rights, health care systems, and moral constraints [21]. Empirical research shows how the controversy around abortion leads to a polarization of opinions within Protestant and Catholic groups in U.S. society [22]. Other inconsistent norms can concern controversial national traditions such as Zwarte Piet ("Black Pete"), a folkloric character and helper of Sinterklaas (Santa Claus) in Dutch culture. The character is typically displayed with blackface makeup, bright red lips, and colorful clothing. The display has been increasingly criticized as a racist stereotype, predominantly by minority and immigrant groups, while many native Dutch citizens argue that "Black Pete" is a positive character and part of their national tradition [23]. In essence, inconsistent social norms within a larger collective have the potential to lead to intergroup as well as intragroup conflict. With respect to trends of increasing globalization and migration, effectively resolving these normative conflicts is becoming a pressing priority for many societies in the future.
Finding Consensus
Despite their potential for negative outcomes, normative conflicts are not an indication that a collective is inherently unfit to live together peacefully. In contrast, they can be fundamental to the formation of social units at different scales. Georg Simmel defines shared consensus on social roles and their supporting norms as necessary features of human society [24]. Similarly, normative conflicts are frequently observed in the literature on group formation and described as a necessary step towards a common group identity. For example, in Tuckman's stage model of group development, the norming stage focuses on resolving disagreement and establishing a shared set of behavioral guidelines; it is a crucial step in the formation of an effective group [25]. Some recent, empirically validated models such as the Normative Conflict Model [26] confirm this mechanism. According to the model, members strongly identified with the group are more likely to openly express dissent compared to weakly identified members [26]. Dissenters help uncover the causes of the conflict and discuss possible solutions. To form an effective group with committed members, it is necessary to effectively resolve conflicts due to incompatible norms and to find a consensus on which most members agree. Failure to reach such a consensus might result in a lack of common group identity and task effectiveness, leading to the dissolution of the group [25].
Interactions between people from different social groups are a steadily increasing occurrence in societies that are socially, economically and culturally diverse [27]. Such diversity is likely to increase in the future, along with changing relations between majority and minority groups due to demographic and socioeconomic changes [28]. As ongoing political and societal polarization in Western societies already demonstrates, incompatible social norms associated with different groups have the potential to elicit conflict [29]. For these reasons, we argue that it is crucial to understand the conditions enabling social groups to effectively reach a normative consensus and how this process relates to conflict potential within and between social groups.
Network Structure & Group Norm Distributions
Individuals do not adopt norms in isolation; the structure of their social environment is a key determinant of social behavior. The social networks in which we are embedded determine the kinds of people and behavior to which we are exposed, thereby shaping the descriptive norms we hold. Thus, the interpersonal processes which contribute to finding normative consensus [30], as well as the intergroup and intragroup processes [13], are crucially contextualized within networks of social interaction. Consequently, we argue that finding normative consensus is a continuous process of group members mutually exerting social influence [9] on each other until a relatively stable equilibrium is reached [31,32]. This often requires that at least some individuals react to social influence exerted on them by their social networks by changing their norms. For instance, [33] show how networks such as family and friends are among the best predictors of a culture favoring gun ownership. As for the normative conflict over gay marriage in the U.S., a longitudinal time-series study shows how the decision of the U.S. Supreme Court in June 2015 eventually led to an increase in perceived social norms supporting gay marriage, independently of individual attitudes [34]. In short, the social networks people are embedded in appear to play a crucial role in the process of reaching a normative consensus within and between groups.
In this chapter, we will focus on homophily/heterophily between people from different groups and relative group sizes as determinants of network structure, and on the initial distribution of norms within groups when they come into contact.
Homophily & Heterophily
Homophily is the tendency to preferentially connect and interact with similar others [35], while heterophily is the tendency to preferentially connect and interact with dissimilar others [36]. Homophily has been observed extensively in many social networks, including school friendships [37], scientific collaborations [38], and online communications [39]. It is likely a manifestation of the similarity bias, a fundamental human tendency to like and value others that are similar to the self and to consequently be disproportionately influenced by them [9]. For example, a controlled experimental study on the spread of a health innovation through social networks varied the level of homophily, showing that homophily significantly increased the overall adoption of new health behavior, especially among those in more clustered networks [40]. Similar effects have been shown for diverse health behaviors in large social networks, such as the spread of smoking [41] and obesity [42]. Since social influence is exerted through social ties in networks [43,44] and homophily/heterophily determines how these ties are formed, we argue that it is an important factor in the process of negotiating a normative consensus through mutual social influence.
Group Size
Almost no collective group consists of completely homogeneous members. Instead, groups comprise demographic subgroups, such as those defined by gender, nationality, or education [35]. Mostly, these subgroups are not equally sized, so that people are either part of a majority or a minority group [45] with respect to a certain social category. The pervasive influence of majority opinions, customs, and norms is well established in theoretical accounts of group-based social influence [31]. The dominant role of the majority has been experimentally validated in numerous studies replicating the seminal work by [46], both for individual social influence [47,48] and group influence [49,50]. Greater influence of the majority is generally assumed for acculturation processes of minority immigrants in host countries [51,52]. Yet, other studies have demonstrated that under certain conditions, minorities can successfully exert social influence on the majority and consequently redefine the normative consensus in their favor [13,53,54]. For these reasons, we argue that the sizes of interacting subgroups within a larger society are an important factor in the process of negotiating normative consensus.
Initial Group Norm Distributions
Agreement on social norms is considered to be a part of the collective identity people derive from the social groups to which they belong [55,13]. Norms vary, however, in how much they align with group membership. Even in the case of German opinions on face veils, a full 15% do not agree with the normative opinion to ban face veils [17,18]. That is, despite sharing group membership, individuals disagree on this norm. Conversely, in the social group of Muslim immigrants in Germany, some will support the norm of face veils while others will oppose it. People can hold the same norm on face veils even though they are from different social groups, or they can hold different social norms while belonging to the same social group. In terms of our example, there will be some Muslim immigrants agreeing with Germans who oppose face veils. There will also be some Germans agreeing with the Muslim immigrants who do not oppose face veils. In short, even in this case of strong consensus, group membership is not the single determinant of norms held on an individual level. Social norms are often aligned with group membership to a degree, but the two are not synonymous.
The interplay between social group membership and agreement on moral or normative issues has been shown to be influential in previous studies. For instance, the influence that a group exerts on individuals is not only a function of its size, but also of its unanimity, with stronger pressure towards conformity for more unanimous groups [56]. Furthermore, studies have shown that people react more negatively to dissenters from their own in-group [57] and consequently punish them more harshly. The initial distribution of norms within groups thus seems to be important for negotiating a normative consensus, even though it does not necessarily influence the structure of the social network.
Agent-Based Model
Agent-based modeling can be of particular interest to understand social phenomena because it enables researchers to study complex macro-level outcomes that emerge from a clearly defined set of micro-level processes [58,32]. In addition, simulations allow us to systematically vary agents' behavioral rules or the circumstances in which they act [59]. In short, agent-based models help us to gain insight into the emergence of complex systems by systematically testing a variety of different parameters and the combined impact they exert on the emergent system [58]. Previous research has extensively used agent-based models to study phenomena such as spatial segregation [60], opinion diffusion [61], the adoption of innovation [62], and cascade effects [63].
For the purpose of modeling normative conflict in social networks with respect to relative group sizes, homophily/heterophily, and group norm differences, we developed a modular simulation framework based on a network generation algorithm using preferential attachment, group size and homophily/heterophily [64], and Granovetter's threshold model [1]. We utilized R [65] for our model as it appears to be more widespread among the social science community than Python and offers more customizability, better parallelization and scalability than NetLogo. Consequently, probabilistic processes in our model are implemented using the sample() function in R, which relies on the current system time to generate a seed for pseudo-random number generation. All code, documentation and an animated visualization are available on GitHub [66] under the MIT License.
Simulating Norm Conflict
In our agent-based model, we aim to simulate the impact of group size, homophily/heterophily between agents from different groups, and initial group norm distributions on the process of reaching normative consensus and resulting conflict potential. To this end, we generated networks with 2000 agents each, where network structure is determined by one parameter for relative group size (g) and one parameter for homophilic/heterophilic preferences of agents (h) [64]. In addition, initial norms for agents were assigned based on three different pairs of binomial probabilities, resulting in three conditions for initial group norm distributions. Once the network structure is generated and agents are assigned their initial norms, each agent is assigned a threshold from a uniform distribution [1] and the model simulates normative social influence processes between agents by repeating 50 iterations of Granovetter's threshold model. Once the simulation is complete, we extract the percentage of agents holding each norm for each group, and the number of ties between agents within each group and between the groups. Crucially, we differentiate ties between agents holding the same norm and ties between agents with incompatible norms. Our model thus consists of four successive steps: generation of the network structure, initialization of group norm distributions, the norm updating process, and the extraction of outcome metrics.
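As an overview, these four steps can be expressed in R, the language of our implementation. The following sketch uses illustrative function names rather than our published code [66]; the helper functions are sketched in the corresponding subsections below.

# Illustrative outline of the four simulation steps (names are placeholders).
build_adjacency <- function(edges, n) {
  adj <- vector("list", n)
  for (r in seq_len(nrow(edges))) {
    a <- edges[r, 1]; b <- edges[r, 2]
    adj[[a]] <- c(adj[[a]], b)                       # ties are undirected
    adj[[b]] <- c(adj[[b]], a)
  }
  adj
}

run_one <- function(g, h, p1, p2, n_agents = 2000, n_iter = 50) {
  net        <- generate_network(n_agents, g, h)     # step 1: network structure
  norms0     <- initialize_norms(net$group, p1, p2)  # step 2: initial norms
  thresholds <- runif(n_agents)                      # thresholds drawn from U(0, 1)
  adj        <- build_adjacency(net$edges, n_agents)
  norms <- norms0
  for (it in seq_len(n_iter)) {                      # step 3: 50 iterations of the
    norms <- update_norms(adj, norms, thresholds)    #         threshold model
  }
  extract_metrics(net$edges, net$group, norms0, norms)  # step 4: outcome metrics
}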
In total, we simulate 150 unique parameter combinations with 20 networks per combination, resulting in 3000 unique networks (for an overview of the parameter space, see Table 1). For each of these networks, we save each iteration of Granovetter's threshold model as an individual network object, resulting in 150,000 networks with 2000 agents each. Simulation was carried out on the High Performance Computing Cluster of the University of Cologne on 150 MPI nodes. We opted for 50 iterations of Granovetter's threshold model because it was the highest number of feasible iterations within the maximum computation time limit for the MPI nodes (360 hours) of the High Performance Computing Cluster. The simulation took approximately 13 days (315 hours) and resulted in approximately 40 GB of output data.
Generation of Network Structure
To generate different network structures that resemble real social networks and enable comparison of the effects of g and h, we implemented the network generation algorithm by [64]. This algorithm combines the preferential attachment mechanism, which has been observed in many large-scale social networks [67], with tunable parameters for group sizes and homophilic/heterophilic tendencies of agents in the model.

Table 1 Overview of the simulated parameter space. Homophily/Heterophily Parameter: 0.1, 0.2, . . ., 1; Initial Group Norm Distribution: three conditions (a); Threshold: U (b). (a) Each of the three conditions compares a different initial distribution of the majority norm in the majority group (p_1) and in the minority group (p_2). (b) U: uniform distribution.

As a point of terminology, we will refer to the group containing more agents as the "majority group" and the group containing fewer agents as the "minority group". The network generation model implements an iterative growth process where we start out with a small number of m initial agents for both the majority group and the minority group. After this initial setting, one agent is added to the network at a time. Each new agent has a probability of g to be assigned to the minority group and a probability of 1 − g to be assigned to the majority group. For example, with a value of g = 0.4, each new agent has a probability of 40% of being assigned to the minority group and a probability of 60% of being assigned to the majority group. Each new agent forms m ties to the agents that are already present from previous steps. In this way, the parameter m also defines the minimum degree of agents in the network. We keep this parameter constant at m = 2 across all our generated networks because it ensures that no agent is isolated in the network. Previous research has demonstrated that the choice of m does not change the properties of the network [67].
Connecting these m ties from the new agent to existing agents is probabilistic, and relies on the homophily parameter h and the degrees of the present agents [64]. The parameter h ranges from 0 to 1 and defines the likelihood of agents to form ties to agents from the same group (h) or from a different group (1 − h). A value of 0 represents perfect heterophily (ties will only be formed between agents assigned to different groups) and a value of 1 represents perfect homophily (ties will only be formed between agents assigned to the same group). In addition, agents also have a built-in preference for agents with high degree (preferential attachment), which interacts with their group preference determined by h. Specifically, the probability p_ij of each added agent j to form a tie with a present agent i depends on the degree of the present agent (k_i) and the specified homophily parameter between i and j, h_ij, normalized over all existing agents l:

p_ij = (h_ij k_i) / (Σ_l h_lj k_l)

Fig. 1 Example networks with a 20% minority group. Minority group agents (20%) are represented by black circles, majority group agents (80%) are represented by white squares. When the network is heterophilic (left), the minority group increases its degree rapidly due to the combination of preferential attachment and smaller group size. In the homophilic network (right), the minority group cannot grow its degree by attracting majority group agents.

The processes of assigning agents to a group and selecting present agents to connect with are not deterministic, so the same set of initial parameters will generate slightly different network structures each time. To capture this variance, we generate 20 networks per parameter combination and report averaged results. See Figure 1 for an example, and see the appendix for analytical derivations.
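A minimal R sketch of this growth process is given below; the variable names and the unit seed degree are our simplifications for illustration, not the published implementation [66].

# Sketch of the homophilic preferential-attachment growth process [64].
generate_network <- function(n_agents = 2000, g = 0.4, h = 0.8, m = 2) {
  group  <- c(rep("maj", m), rep("min", m))  # m seed agents per group
  degree <- rep(1, 2 * m)                    # seed degree 1 so attachment
                                             # probabilities are well defined
  edges  <- matrix(0L, nrow = 0, ncol = 2)
  for (j in (2 * m + 1):n_agents) {
    group[j] <- if (runif(1) < g) "min" else "maj"          # minority w.p. g
    h_ij <- ifelse(group[1:(j - 1)] == group[j], h, 1 - h)  # homophily term
    p_ij <- h_ij * degree[1:(j - 1)]        # p_ij proportional to h_ij * k_i
    targets <- sample(j - 1, size = m, prob = p_ij)  # m distinct ties
    edges <- rbind(edges, cbind(j, targets))
    degree[targets] <- degree[targets] + 1
    degree[j] <- m                          # newcomer arrives with m ties
  }
  list(edges = edges, group = group)
}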
Initialization of Group Norm Distributions
After creating network structures based on the parameters g and h, we initialize a norm as an attribute of each agent. We will use "majority norm" and "minority norm" when we discuss our results with respect to the two different norms in our model. Specifically, the majority norm will refer to the norm held by the larger proportion of agents in the larger of the two groups after initializing the network structure. In cases where the number of agents holding each norm is equal, we simply track one of the two norms over the course of the simulation.
We use a probabilistic process with two different parameters p_1 and p_2 for the initial group norm distributions, where p_1 describes the probability of agents in the majority group to be assigned the majority norm, while 1 − p_1 describes the probability of agents in the majority group to be assigned the minority norm. Vice versa, p_2 describes the probability of agents in the minority group to be assigned the majority norm, while 1 − p_2 describes the probability of agents in the minority group to be assigned the minority norm. For example, with p_1 = 0.7 and p_2 = 0.3, each agent in the majority group has a 70% probability of being assigned the majority norm and a 30% probability of being assigned the minority norm. Conversely, each new agent assigned to the minority group has a 30% probability of being assigned the majority norm and a 70% probability of being assigned the minority norm. In this example, we can see that p_1 and p_2 define how closely the assignment of norms is related to the group membership of new agents. If p_1 and p_2 are both 0.5, then there is no connection between group membership and norm -every agent of either group has an equal probability (50%) to endorse either norm. If p_1 is large and p_2 is small, then initial norm proportions are associated with group membership -the majority and the minority group preferentially use different norms. In our model, we will be testing one case where the initial norm distribution is unrelated to group membership (p_1 = 0.5 and p_2 = 0.5), one where the initial norm distribution is weakly related to group membership (p_1 = 0.6 and p_2 = 0.4) and one where the initial norm distribution is strongly related to group membership (p_1 = 0.8 and p_2 = 0.2). We thus generate models where (a) 50% of the majority group and 50% of the minority group start with the majority norm, (b) 60% of the majority group and 40% of the minority group start with the majority norm, and (c) 80% of the majority group and 20% of the minority group start with the majority norm.
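In R, this probabilistic assignment can be sketched as follows; the norm coding is illustrative, with norm 1 standing for the majority norm and norm 0 for the minority norm.

# Sketch of the probabilistic norm assignment per group.
initialize_norms <- function(group, p1 = 0.8, p2 = 0.2) {
  p_major_norm <- ifelse(group == "maj", p1, p2)  # per-agent assignment probability
  rbinom(length(group), size = 1, prob = p_major_norm)
}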
Norm Updating Process
After initializing one of the two norms in each agent according to parameters p 1 and p 2 , we simulate the adoption of norms over time within each network using Granovetter's threshold model [44,43,68]. In our simulation, we use a modified version [1] where each agent in the model is assigned a threshold value from a uniform distribution [0,1]. A central point in Granovetter's threshold model is the variability of thresholds within a group. Once people with lower thresholds adopt a norm, they will raise the proportion of people with that norm, increasing the chance of shifting those who have higher thresholds [1]. In his seminal work, [1] showed these dynamics both with a uniform distribution and a normal distribution of thresholds. In our model, we decided to use a uniform distribution of thresholds because our aim is to understand the role of network structure and initial norm distributions in normative conflict, and not primarily to investigate the effects of the threshold. To clearly understand emergent properties in agent-based models without extraneous mechanisms, it is beneficial to avoid unnecessary complexities [69]. Non-uniform distributions require particular choices: either the single value of the threshold held by all agents, or the mean and variance of a normally-distributed threshold parameter. Thus, any distribution besides the uniform requires additional assumptions without adding a concrete contribution [70] to our research questions. We use a uniform distribution in our model to control the effect of the threshold distribution [71] while testing the effect of network structure and initial norm distribution. We also allow agents to change back and forth between norms as appropriate given their threshold and the norms of their neighbors. This is distinct from some models where an agent can only change once (e.g. learning of a new innovation), and we consider it appropriate for modeling our phenomenon of interest -descriptive social norms.
In the updating process, each agent compares its threshold value to the proportion of its immediate neighbors holding a particular norm. If the proportion of neighbors that are expressing a given norm is equal to or higher than the agent's threshold, the agent will update its currently held norm. For example, if agent j has a threshold of t_j = 0.6, it will update to the norm that 60% or more of its neighbors display. Depending on the current norm of the agent, this can either mean switching to a different norm or keeping the agent's current norm. If both proportions fail to reach the threshold (e.g. a 50/50 distribution of norms in the neighborhood of an agent whose threshold value is 0.6), the agent will also keep the current norm. In cases where the observed proportions of both norms are equal and exceed an agent's threshold, the agent will choose one of the two norms at random. Each network goes through 50 iterations of the updating process, so all agents update their norms 50 times.
In each iteration of the norm updating process, all agents are updated asynchronously, meaning that only one agent is updated at a time and the order in which agents update their norms is randomly shuffled before each iteration of the updating process. Thus, each agent's updating process can affect the updating process of the next agent. We chose this procedure as opposed to having a fixed order for updating agents or updating all agents at the same time because natural social interactions neither occur in a predetermined order nor do all people in a social network exert influence on each other simultaneously. For this reason, we argue that our approach more closely resembles real-life interactions and social influence processes between people.
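The following R sketch shows one reading of a single asynchronous iteration; where the verbal rule is ambiguous (both norms meeting the threshold with unequal shares), we adopt the more frequent norm.

# One asynchronous iteration of the threshold model [1];
# 'adj' maps each agent to the indices of its neighbours.
update_norms <- function(adj, norms, thresholds) {
  for (j in sample(seq_along(norms))) {   # order reshuffled every iteration
    frac1 <- mean(norms[adj[[j]]] == 1)   # share of neighbours holding norm 1
    frac0 <- 1 - frac1
    t <- thresholds[j]
    if (frac1 >= t || frac0 >= t) {       # at least one norm meets the threshold
      if (frac1 > frac0)      norms[j] <- 1
      else if (frac0 > frac1) norms[j] <- 0
      else                    norms[j] <- sample(0:1, 1)  # equal shares: random pick
    }                                     # otherwise the agent keeps its norm
  }
  norms                                   # updates are visible to later agents
}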
Outcome Metrics
After the agent-based model finishes, we extract our outcomes of interest: The degree to which norm distributions change, the degree to which the difference in norm distributions between the two groups changes, and the potential for conflict within and between the groups.
To operationalize the degree to which norm distributions change, the initial proportion of agents holding the majority norm is subtracted from the final proportion of agents holding the majority norm. We subsequently call this Change in Majority Norm because it expresses the degree to which the group has adopted the majority norm relative to the group's starting point. If this number is positive, the group's use of the majority norm has increased over the course of the simulation. For example, if the network starts with an 80-20 group norm distribution and ends with 60% of the minority group endorsing the majority norm, the minority group has adopted the majority norm by 40%. If change in majority norm is negative, the group has rejected the majority norm. In a similar example, if the network starts with an 80-20 group norm distribution and ends with 10% of the minority group endorsing the majority norm, the minority group has rejected the majority norm by 10%. This is a group-level outcome: it tells us how the normative consensus within the majority group and within the minority group have changed over time. It is worth noting that the initial norm distribution limits the possible change within the majority group and the minority group. In the 80-20 initial norm distribution, only 20% more of the majority group could hold the majority norm, while 80% more of the minority group could do so.
At a system level, we are interested in the degree to which the difference in norm distributions between the two groups changes. Specifically, we are interested in whether the two groups express the two norms in similar proportions after the last iteration and if they have become more similar in their norm proportions over time.
To calculate this, we first calculate the initial group norm difference by subtracting the initial proportion of the minority group holding the majority norm from the initial proportion of the majority group holding the majority norm, ∆(p)_initial = p_1,initial − p_2,initial (see section 4.3). Then we calculate the final group norm difference by subtracting the final proportion of the minority group holding the majority norm from the final proportion of the majority group holding the majority norm, ∆(p)_final = p_1,final − p_2,final. We subtract the initial group norm difference from the final group norm difference to define the Change in Group Norm Difference, ∆(p)_final − ∆(p)_initial. If this is positive, then the difference has increased; the groups have become less similar over the course of the simulation in terms of their norms. If this is negative, then the group norm difference has decreased; the groups have become more similar. Once again, it is worth noting that the initial group norm distribution limits the total possible change.
At a dyadic level, we are interested in the potential for interpersonal conflict between and within groups. To look at this, we define Conflict Ties as ties connecting two agents with different norms after the last iteration. Crucially, we distinguish between conflict ties of agents from the same group as a proxy for potential intragroup conflict, and conflict ties of agents from different groups as a proxy for potential intergroup conflict. In particular, we are extracting the proportion of ties in the majority group that connect agents with inconsistent norms, the proportion of ties in the minority group that connect agents with inconsistent norms, and the proportion of ties between the groups that connect agents with inconsistent norms.
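A sketch of these three outcome metrics in R, using the illustrative data structures from the sketches above (norm 1 encodes the majority norm):

# Sketch of the three outcome metrics described in this section.
extract_metrics <- function(edges, group, norms_init, norms_final) {
  share <- function(norms, gr) mean(norms[group == gr] == 1)
  # Change in Majority Norm, separately for each group
  change_majority <- c(maj = share(norms_final, "maj") - share(norms_init, "maj"),
                       min = share(norms_final, "min") - share(norms_init, "min"))
  # Change in Group Norm Difference: delta(p)_final - delta(p)_initial
  d_init  <- share(norms_init, "maj")  - share(norms_init, "min")
  d_final <- share(norms_final, "maj") - share(norms_final, "min")
  # Conflict Ties: ties whose endpoints hold different norms after the last iteration
  conflict <- norms_final[edges[, 1]] != norms_final[edges[, 2]]
  between  <- group[edges[, 1]] != group[edges[, 2]]
  list(change_in_majority_norm   = change_majority,
       change_in_group_norm_diff = d_final - d_init,
       conflict_between_groups   = mean(conflict[between]),
       conflict_within_majority  = mean(conflict[!between & group[edges[, 1]] == "maj"]),
       conflict_within_minority  = mean(conflict[!between & group[edges[, 1]] == "min"]))
}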
Simulation Results
Our results are structured around the three overarching outcome metrics outlined above. For each metric, we consider the aggregated output of our runs by averaging over the values obtained from the 20 simulated networks per parameter combination.
1. Change in Majority Norm: Which combinations of parameters increase or decrease the prevalence of the majority norm? In which cases does the majority norm become prevalent among the majority and the minority group? In which cases does the minority norm gain prevalence?
2. Change in Group Norm Difference: Which combinations of parameters reduce between-group norm differences? Which make convergence of norms most likely?
3. Conflict Ties: Which sets of parameters make it most likely that potential within-group or between-group conflict will emerge? Which make it most likely that there will be little potential for conflict?
Change in Majority Norm
Our first interest is how the representation of norms within groups changes, using our Change in Majority Norm metric (see section 4.5). Figure 2 displays the results of the simulations, showing how this is influenced by homophily/heterophily, group sizes, and initial group norm distributions.
Fig. 2 Change in Majority Norm for the majority and minority group. This set of heatmaps displays the influence of network homophily/heterophily, group size, and initial norm distributions on the change in the majority norm. Each square represents the degree to which representation of the majority norm has increased or decreased in each group. Darker blue indicates a shift towards the majority norm and darker orange a shift towards the minority norm. When norms are initially distributed equally (50-50, top row), the change in the majority norm is essentially random and does not depend on the properties of network structure and group size. When norms are initially distributed unequally (e.g. 80-20, bottom row), we observe the impact of homophily and group size. For small homophily values, majority members are more likely to change their norm to the minority norm. As homophily increases, the majority and the minority are both likely to adopt the majority norm (until h = 1, when the pattern is reversed). In general, as the minority group increases in size, it is more likely to retain its own norm and influence the majority.
This visualization highlights several findings. First, the effect of homophily and group size on the results is clearest when the initial group norm distribution is 80-20.
That is, when norms are highly aligned with group membership, the influence of network structure is most pronounced. When the initial norm distributions are 50-50 in each group, the change in norm proportion is random -the system does not change systematically even with varying levels of group sizes and homophily/heterophily. Second, the patterns of results for the majority and minority groups are distinct. In the majority group, high heterophily (i.e. a greater proportion of connections to the minority) leads to stronger adoption of the minority norm. Similarly, as the size of the minority group increases, the majority group is more likely to adopt the minority group norm. Within Granovetter's threshold model, this is very reasonable: increased minority group size makes it more likely for a majority group member to be connected to members of the minority group and take on the minority norm.
The minority group adopts the majority norm most when homophily is intermediate and the minority group is small. The minority group maintains or increases its own norm most when it is relatively large, or when homophily is very high or very low. This suggests the operation of multiple mechanisms at different intersections of homophily and group size (see analytical derivations in the appendix). When the network is highly homophilic, the minority maintains its own norm because it is selectively attached to members of its own group, thereby avoiding exposure to majority-group influence. When the network is heterophilic and the minority group is small, the minority is also more able to maintain its own norm. This is because this network parameterization results in minority group members becoming hubs: each majority group member connects to minority group members, and there are not many of them. This means each minority group agent has disproportionate influence. With a large minority group, minority agents are more likely to be attached to other minority agents, again making it more likely that they will maintain their own norm distribution. The results of the simulation are in agreement with our analytical results provided in the appendix.
Change in Group Norm Difference
Our second point of interest is the degree to which the two groups become more similar in their group norm distributions. To address this, we use our Change in Group Norm Difference metric (see section 4.5). The more negative this number, the more similar the groups have become in their norm distributions; the more positive, the more the groups have diverged in their norm distributions. Figure 3 displays the results of the simulations.
As with the change in norm proportions, the effects of homophily and group proportion are clearest when the initial group norm distribution is strongly associated with group membership (i.e. the 80-20 initial group norm distribution). In this case, we can see a strong pattern of the two groups moving towards similar norm distributions (i.e. reducing their differences). This pattern is less pronounced, or even reversed, as homophily and minority group size increase. This suggests that heterophily is important for producing between-group norm similarity, while high homophily may actually increase between-group norm difference.
Conflict Ties
Our third question revolves around the remaining potential for normative conflict, once the simulation has run. For this, we look at the proportion of within-and between-group ties that are Conflict Ties at the end of the simulation. Figure 4 shows the results of our simulation for proportion of within-group and between-group ties that are conflict ties. We display results from the 80-20 initial group norm distribution, where group membership and initial norm distribution are closely connected. As with the prior analyses, the results of the 50-50 initial norm distribution were essentially random, and the pattern in the 60-40 initial norm distribution is similar to the 80-20 case but not as strong.
Comparing the three graphs in Figure 4, we see that the level of network homophily determines the trade-off between intergroup and intragroup conflict. In high-homophily networks, high potential for intergroup conflict remains at the end of the simulation, but there is little potential for intragroup conflict. In contrast, high-heterophily networks have very little remaining potential for intergroup conflict, but slightly higher potential for intragroup conflict.
The role of minority group size also emerges clearly in Figure 4. For between-group ties and majority-group ties, having a small minority group reduces potential conflict. This effect is relatively consistent across all levels of homophily, though it is more exaggerated at the extremes. Within the minority group, group size does not appear to have as consistent an effect on conflict ties. We see that highly homophilic networks still have relatively high potential for between-group conflict (top row). In contrast, when there is low homophily, the between-group conflict decreases. A reverse pattern appears for within-group conflict ties (second and third rows): as homophily increases, within-group conflict decreases.
Discussion and Conclusion
We see three important strands in our pattern of results. First, they speak to the degree to which the alignment of initial group norm distributions and group membership is crucial for the process of reaching normative consensus. Second, they point towards the impact of homophily and heterophily in balancing between ingroup and outgroup conflict. Finally, they point towards strategies that could be used to maintain minority norms in minority groups and to avoid large-scale assimilation.
The Alignment of Norms and Group Membership
One clear result of our simulation is that, in a system with conflicting norms, substantive change occurs only when the norm is highly aligned with group membership. In our model, this took the form of an 80-20 initial norm distribution, where 80% of the majority group but only 20% of the minority group initially held the majority norm. In cases where the norm was not aligned with group membership (50-50 initial norm distribution, top row of Figure 2 and Figure 3), we do not observe any clear globally dominant norm at the end of the simulation. Even in cases with a relatively large majority group (minority group only 10% of the network), there was no particular norm change because the social influence of the majority group was evenly split between two norms. When the norm is moderately aligned with group membership (60-40 initial norm distribution), we see intermediate results -not entirely random as with the 50-50, but less clear than when the norm is strongly associated with group membership.
In intergroup situations, we see that group-level and system-level influence arises not out of small pockets of extremely strong beliefs (i.e. the small minority group in an 80-20 initial norm distribution), but rather out of the consistent homogeneous norm of a majority group. There are cases of normative disagreement that take on proportions like this -our headscarf example from the beginning, for instance, showed 81% of Germans in favor of banning the headscarf in public institutions, with only 15% contradicting that opinion. Though such distinct norms are likely to be newsworthy, perhaps there are many instances of intergroup norm non-conflict that receive less attention. Newspapers are unlikely to report that two neighbors from different cultural backgrounds both like to eat dinner with their families, but it may be important for collective cohesion nonetheless.
This also supports prior literature suggesting that groups with consensual norms are most likely to prompt normative change in their outgroup. A recent survey in the U.S., for instance, indicated a 50-50 split on whether football players should be required to stand during the national anthem [72]. In this case, Americans as a single majority group are unlikely to exert much normative force on out-group members (e.g. Canadians) about this issue. If we consider subgroups of Americans (i.e. Republicans and Democrats), this norm may be much more strongly associated with group membership and thus more likely to have an effect.
Our model focuses on the shift in a specific norm within a network. This fits our interest in descriptive norms, though real cultural practices might be whole clusters of normative behaviors rather than single binary norms. A contrast between Jewish and Christian people, for example, is not only that they attend different religious services, but also that they can have distinct injunctive norms around weekend hours, food, and marriage that are culturally transmitted. One option is to consider the norm in our model as an aggregate, i.e. not a single behavior, but a cluster of group-based behaviors. Another option is to consider the norm in our model to be the behavior which people notice within a specific context.
Homophily Balances In-Group and Between-Group Conflict
One of the primary aims of this model was to understand when and how subgroups would conform to each other. In Figure 3, we see that between-group differences in norms are clearly reduced by the norm updating process, particularly when norms are strongly associated with group membership (i.e. the 80-20 initial norm distribution). Except in cases of large minority groups or extremely homophilic networks, there is a meaningful reduction in between-group norm differences: the groups become more similar as the individual agents change their norms. Looking at Figure 2, it is clear that most of the norm change happens in the minority group -they tend to update their norm to that of the majority group, especially when homophily is intermediate and the minority group is not large. In contrast, we see the majority group leaving their norm and adopting the minority norm when the network is extremely heterophilic (i.e. h = 0.1) (Figure 2). This occurs while the minority group is updating to the majority norm. In this situation, heterophily is so strong that the members of the majority group are disproportionately exposed to the norm of the minority group; this allows for strong influence of the minority group even when the minority is quite small. Thus, though the system overall produces mutual conformity, the level of homophily balances which group is changing their norms to accommodate the other group.
In Figure 4, homophily again balances group-level and system-level outcomes when considering the remaining potential for conflict within the network. When the network is heterophilic or neutral, few between-group conflict ties remain. In contrast, when the network is very homophilic, we see the potential for intergroup conflict almost doubled. The reverse is true for within-group conflict ties. When the network is heterophilic or neutral, a fair number of within-group conflict ties remain. When the network is very homophilic, this potential intragroup conflict is reduced by at least half. Thus, we see that both in terms of which group changes their norms and the potential conflict that remains, homophily balances between group-level and system-level outcomes.
Strategies to Maintain Minority Norms
The maintenance of a cultural identity, partially defined by normative practice, can be extremely important. Our simulation lends support to three methods for maintaining minority cultural practice visibly employed by minority groups in reality: isolationism, adopting positions of influence, and increasing the group size of one's minority. Within the model, the minority group was best able to maintain their own norm in extremely homophilic networks, extremely heterophilic networks, and when their group was large.
Extremely homophilic networks in our simulation mimic strongly isolationist cultures in reality. Such isolation can be imposed upon a minority group (e.g. being excluded from mainstream culture), but can also be sought out as a source of cultural affirmation and strength (e.g. resisting assimilation into mainstream culture) [73]. This latter motivation has been expressed by groups as different as the Amish in the United States and anti-capitalist leadership in China. The recognition of community-level benefits of culturally affirming and relatively homogeneous environments can be seen in the push to maintain historically black colleges and universities, even as black students in America have increasing access to other institutions [74]. Though isolationism may draw critique as backward-looking, it can be a deep recognition that intergroup contact can fundamentally affect the culture of a minority group.
Extremely heterophilic networks in our simulation, in contrast, are closely related to minority groups which attempt to have their members in positions of overall societal power. Rather than completely preserving group norms through isolation, this strategy attempts to change the larger culture by exerting influence on the majority. This can be observed in efforts to get members of minority groups elected to positions of power, with the explicit goal of increasing minority voice in the government. By holding positions of power within a larger society, minority group members can become hubs to spread their own group norms.
The final strategy we can relate to our results is to increase one's group size. The logic here is fairly straightforward: the larger a group, the greater chance it has of influencing the whole system. We can see this strategy in the tendency of minority group members to define their groups expansively, stressing the similarities with the majority group [14], and the converse tendency of majority group members to define their groups strictly [75].
The three strategies which emerge from our study are far from a complete set; there are many other strategies well outside the scope of our current work. For example, minority groups actively resist norm change [76], cultural institutions formally negotiate over cultural practices, and younger generations modify their inherited cultural practices. We leave model-based exploration of these possibilities for future work.
Limitations and Future Directions
In the effort to construct a parsimonious model from existing theory, we acknowledge that there are many assumptions in this model that could be productively expanded. First, one could incorporate more than two groups, or multiple kinds of interpersonal ties. Second, one could make the model more realistic by having a series of intercorrelated norms held by each group, such that individuals have different thresholds to specific norms, or a different weight for norms depending on in-group membership of neighbors. Third, one could integrate psychological theories of preferential information processing to have agents differentially weight the norms expressed by their neighbors based on shared in-group membership. Such modifications would allow us to expand from descriptive to injunctive norms, involving higher-order cognitive processes such as persuasion [77] and contrast with personal values [78] that could be modelled in agents. Finally, it would be valuable to explore other distributions of thresholds within the network to explore more realistic and complex scenarios. These further developments would also increase the options for validating this model against real world data (e.g. gathering experimental data or found social network data measuring intergroup norm spread). Thus, continuing to grow this work can increase its contribution to the nexus between networks, social norms and conflict.
Despite these limitations, the current study provides a novel and meaningful insight by providing a streamlined example of how group size and homophily can affect the adoption and maintenance of group-affiliated norms. We have shown that even in this simplified version of reality, differences in group proportions and homophily have different effects for majority and minority groups, and can affect the degree to which groups eventually adopt similar distributions of norms. We also contribute to the exciting interdisciplinary growth of computational social science by providing a novel agent-based model that includes both structure of social networks and social influence in one framework.
Finally, we hope that this work contributes to existing knowledge on assimilation, acculturation, and between-group conflict over norms. Our simulation demonstrates that assimilation is most likely at low (heterophilic networks) and intermediate levels of homophily. At intermediate levels, the minority group largely conforms to the majority group. This moves the system towards collective harmony, but does so at the cost of the minority group giving up its own norms. At low levels of homophily, when minority group members have a structural advantage within the network (i.e. central positions with many ties), we see accommodation from both directions: the minority members take on the norm of the majority group, but the majority members also take on the norm of the minority group. Taken together, these suggest that collective harmony is maximized when groups are interconnected, and that this is accompanied by the dispersion of minority norms when there is a strong preference for out-group contact.
Acknowledgements We thank the Regional Computing Center of the University of Cologne (RRZK) for providing computing time on the DFG-funded High Performance Computing (HPC) system CHEOPS, as well as support. Rocco Paolillo completed this work while on the programme EU COFUND BIGSSS-departs, Marie Skłodowska-Curie grant agreement No. 713639. Natalie Gallagher completed this work while on the Graduate Research Fellowship from the National Science Foundation.
Contribution All authors jointly came up with the idea and research questions. J.K. implemented the simulation model, wrote the method part and contributed to the theoretical background part. N.G. analyzed results and wrote results and discussion. Z.M.K, R.P. and L.P. conducted literature research and wrote the theoretical background part. F.K. provided mentoring, computed analytical derivations and contributed to results and discussion.
Appendix: Analytical Derivations for Norm Endorsement
In this appendix, we derive the probabilities of norm endorsement in each group using the mean-field approach. This analysis enables us to gain insights on the relationship between the model parameters of homophily, group size, and group norm distribution. In addition, the analytical derivations help us to interpret the outcome of the simulations in section 5.
More specifically, we calculate the probabilities of a minority agent to update to the majority norm and vice versa. We use a mean-field approximation (also known as the deterministic approximation), which means that we look at the average behavior of the group in an equilibrium state [79]. That is, we do not consider changes over time or the heterogeneity of the agents. Nevertheless, the mean-field approach gives useful insight for forecasting the overall behavior of the system. Let the minority be denoted by a and the majority by b, and let the two norms be denoted by A and B. Homophily is denoted by h and group proportion by g. In order to calculate the probability of a minority agent to update to the majority norm (B), we need to estimate the probability of a minority agent to be connected to majority agents (p_ab) and the probability of a minority agent to be connected to minority agents (p_aa). Since our agent-based model assumes a preferential attachment mechanism and defines a group proportion (g), the probability of two agents to be connected depends on their homophily (h) and the degree of the agents (k). Link formation is a combination of two mechanisms, namely homophily and preferential attachment, and thus the probability of connectivity follows a nonlinear function. To estimate the link probabilities, apart from homophily, we need the degree growth function (C) of each group of agents, which determines the attractivity of the agents with regard to their degree. The degree growth in this model follows a polynomial function of order three with one valid solution and can be calculated numerically; the resulting expressions for the probability of two agents of group a (p_aa) or two agents of group b (p_bb) to be connected, and the relation between the degree growth function and the probability of linkage, are given in [64]. The probability of a minority agent to update to the majority norm (f_aB) then depends on the probabilities of being connected to the majority (p_ab) and to the minority (p_aa). Thus, for a minority agent, the fraction of neighbors with norm B is:

f_aB = (p_aa p_aB + p_ab p_bB) / (p_aa + p_ab)
The numerator consists of two parts: the probability of connecting to another minority agent with norm B (p_aa p_aB) and the probability of connecting to a majority agent with norm B (p_ab p_bB). To obtain the fraction, the numerator is divided by the total probability of a minority agent connecting to either minority or majority agents. Inserting the link probabilities from [64] yields the expressions evaluated in Figure 5.
A similar relation holds for the probability of a majority agent to update to the minority norm (f_bA):

f_bA = (p_bb p_bA + p_ba p_aA) / (p_bb + p_ba)

Fig. 5 Analytical results for the probability of the minority (left) and the majority (right) to update to the norm of the other group. The initial norm proportion is set to 20-80. We observe asymmetrical results as the group balance deviates from the 50-50 condition. For small values of homophily (0 ≤ h ≤ 0.2) we observe similar behavior for the majority and the minority. However, as homophily increases, minority members update their norm to that of the majority with high probability while the majority does not update to the minority norm. The asymmetric relation is more pronounced as the minority group size decreases.

Figure 5 displays the analytical results derived from the above derivations. It is interesting to note that updating to the norm of the other group follows a nonlinear and asymmetrical trend both for the minority and the majority. At intermediate levels of homophily (0.5 < h < 0.8), while majority members resist switching to the minority norm, the minority updates to the majority norm with high probability. This creates an advantage for the majority norm to persist and stabilize. Only when homophily is very high does the probability of minority members updating to the majority norm start to decrease. As the minority size shrinks, the inequality in norm adoption increases.
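As a numerical illustration, these mean-field fractions can be evaluated in R once the link probabilities are known; the link probability values below are purely hypothetical placeholders, since p_aa, p_ab, p_bb and p_ba must be obtained from the degree growth function of [64].

# Mean-field fractions f_aB and f_bA for illustrative link probabilities.
frac_other_norm <- function(p_same, p_other, norm_same, norm_other) {
  (p_same * norm_same + p_other * norm_other) / (p_same + p_other)
}
# 20-80 initial norm proportion: p_aB = 0.2 of the minority and
# p_bB = 0.8 of the majority hold norm B (so p_aA = 0.8, p_bA = 0.2).
f_aB <- frac_other_norm(p_same = 0.3, p_other = 0.7, norm_same = 0.2, norm_other = 0.8)
f_bA <- frac_other_norm(p_same = 0.7, p_other = 0.3, norm_same = 0.2, norm_other = 0.8)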
Factors Affecting Direct and Transfer Entrants' Active Coping and Satisfaction with the University.
Psychological wellbeing is vital to public health. University students are the future backbone of society. Direct and transfer entrants might encounter different adjustment issues in their transition from secondary school or community college to university studies. However, worldwide, the factors affecting their active coping and satisfaction with the university are currently unknown. The purpose of this study was to address this gap. Nine hundred and seventy-eight direct entrants and 841 transfer entrants, recruited by convenience sampling, completed a cross-sectional survey in 2018. A valid and reliable Hong Kong modified Laanan-Transfer Student Questionnaire (HKML-TSQ) was used to collect data. Multiple methods of quantitative data analysis were employed, including factor analyses, tests of model fit, t-tests, correlations, and linear regression. The results showed that the transfer entrants had less desirable experiences in their adjustment processes than did the direct entrants. There was evidence of both common and different factors affecting the two groups' active coping and satisfaction with the university. Different stakeholders from community colleges, universities, and student bodies should work collaboratively to improve students' transitional experiences before, during and after admission to the university.
Introduction
The common pathways to university studies are through post-secondary admission [known as direct entrance] and community college transfer [known as vertical transfer entrance] [1,2]. Students from different pathways to university studies might experience different adjustment issues, which might affect their psychological wellbeing. However, most studies have investigated students' university experiences solely from the perspectives of individual groups (i.e., direct entrants (DEs) [3] or transfer entrants (TEs) [4,5]), but not many have compared the two. Studies involving both groups of students have focused mostly on their academic performances [4,6-12]. Although the findings have been inconsistent, TEs have been shown to have similar, or even higher, grade point averages (GPAs) than DEs at graduation [4,13]. This could, however, be because the students' GPAs do not account for the grades of subjects whose credits were transferred [4]. Nonetheless, adjustment to the new environment (i.e., university) can affect students' academic and social involvement differently.
For instance, some studies have found the "transfer shock" phenomenon [4,14] in TEs. Besides, the term "campus culture shock" has been used to describe TEs' struggles with the new and unfamiliar university campus culture [13,15]. Compared with DEs, TEs have been found to have higher study loads [4], more mental health problems [13], and higher attrition rates [7]. Another problem that has been identified in TEs is related to their integration, engagement and adaptation in university, which are thought to be tied strongly to academic success [10]. While better adjustment to university life can be mediated by more use of active coping, less use of avoidance coping, and active seeking of social support [16], it remains unclear whether the factors affecting these coping strategies differ between students with different routes of entry. In a study using the National Survey of Student Engagement, TEs were found to engage less than direct entrants [1]. In our ongoing literature review of 29 studies of both groups of entrants, there were only two that examined the social adjustment of TEs, but neither considered this in relation to academic adjustment. One of these [10] did not yield generalizable findings since only final year engineering and computer science students were investigated, and their social activities were limited to sororities and fraternities, community service groups, spiritual groups, sports and clubs. The other study, conducted by Wang and Wharton [12], found that TEs participated less in social activities such as campus or student organizations, and made less use of student support services. To fill the research gap and contribute to the literature, this study explored and compared the two groups' experiences of academic and social adjustment in their university studies, with the goal of identifying the factors affecting their active coping and satisfaction with the university [13,17].
Theoretical Framework
This study was guided by a synthesis of various notable theories for conceptualizing the factors that might affect the direct and TEs' active coping and satisfaction with university. Figure 1 presents the theoretical framework. Astin's Input-Environment-Outcomes (IEO) model [18] is adopted frequently to explore the impacts of university study on students. Input is defined as students' characteristics at the time of entry to university, that can be operationalized as demographics, academic backgrounds, and previous learning experiences [19].
The Theory of Student Involvement [18] has been adopted to explain how the environment, as defined in Astin's IEO model, can influence student development. Others have found this theoretical perspective useful for studying students' academic and social adjustment processes [20]. Both the original [18] and updated [12] versions of the theory entail students' academic and social involvement. According to Tinto's model of student attrition [21], which has been applied to studies of transfer students (e.g., Getzlaf et al. [22]), integration into the academic and social systems of university leads to an increased level of commitment to university study [23] and enhanced quality of student persistence in learning [24]. Students' social involvement can also contribute to their social capital.
The notion of social capital was first proposed by Bourdieu [25] and is considered to be "one of the most influential concepts in sociology" (p. 279) [26]. It has been deployed in a diversity of contexts including higher education [27]. The concept of social capital refers to the presence of one's "institutional relationships of mutual acquaintance and recognition", or one's membership of a group (p. 286) [25]. Social capital can be in the form of social networks from which individuals can draw upon social support [28] which, along with coping styles, also appear as newly added constructs in the updated version of the transfer student capital model [29,30]. Social capital not only offers educational benefits, but also facilitates the pursuit of social outcomes during the process of attaining a certain status [31], for example, social adjustment to university study.
The transfer student capital model, applying to both DEs and TEs, involves a host of factors in bringing about successful transition to university [30]. The model refers to the process by which students acquire the knowledge, skills and experience needed to achieve success at the university [20]. Added to this model are the four dimensions identified in an extended version of Astin's theory of student involvement: experience of academic advising (conceptualized as their use of university support services), academic involvement, social involvement, and participation in student organizations [12]. These four dimensions of students' undergraduate experiences can be considered environmental variables according to Astin's IEO model [18]. These concepts and theories serve as the theoretical foundation of various constructs in the modified Laanan-Transfer Students' Questionnaire (L-TSQ and ML-TSQ) to measure the factors affecting transfer students' active coping and satisfaction with the university [29,30], and can be considered as outcome variables under Astin's IEO model [18].
Differences in Adjustment Experiences between DEs and TEs
Both DEs and TEs encounter challenges in their academic and social integration into university [10], such as large class sizes (DEs [32]; TEs [4]); impersonal organizational structures (DEs [33]; TEs [4]); the need to learn to exercise more self-discipline than in secondary school, adjustment to new learning styles [10]; and mental health issues (DEs [34]; TEs [13]). However, the adjustment experiences of the two groups of students can differ due to their different routes of entry. DEs might be less mature at the time of admission but they have 4 years to acclimatize to the university culture. On the other hand, even though TEs arrive with some post-secondary education experience from their community colleges, they have shorter study periods (i.e., most with two years) at university [5]. In recognition of their prior learning, TEs can often be given credit transfer for some subjects at the junior level, meaning that they can start their university studies by enrolling straight in senior subjects. In these classes, they are newcomers to the cohort of DEs who have already been acquainted for 2 years [13]. In terms of social integration, there might be insufficient interactions between the two groups of students [4]. As a consequence, TEs might find it difficult to make new friends [13]. If they are unable to join communities such as study groups, their academic outcomes might be hampered [35]. Furthermore, DEs are more likely to have known the teachers and to have adapted to the learning styles [36]. A further major difference between the two groups is that DEs have been found to receive more attention in various aspects such as orientation and counselling [4,17]. These differences in adjustment experiences, serving as part of the holistic university experience, could give rise to different coping styles or strategies [36] and levels of satisfaction with the university [37].
In summary, the majority of prior studies comparing DEs and TEs have focused on their academic performances, with less attention given to their psychosocial adjustment experiences. Moreover, research on TEs has been conducted mostly in western contexts, where student populations are demographically more diverse [30]. There is a scarcity of studies investigating TEs in Asian contexts. Therefore, the objectives of this study were twofold: (1) to explore the similarities and differences between TEs and DEs in their perceptions of university experiences, particularly in terms of their academic and social adjustments, and (2) to identify the factors affecting their active coping and satisfaction with the university. These questions are important because universities have a responsibility to provide socially supportive environments to all students, regardless of their entry routes.
Research Design and Context
This was a cross-sectional survey study using a mix of convenience and snowball sampling. Ethical approval for conducting the institution-wide survey (HSEARS20180104005-01) was obtained from the Institutional Review Board. All full-time undergraduate students from one local university in Hong Kong were invited via email, posters and in-class promotion to fill in an online questionnaire between April and November 2018. Local students who had been admitted to university from both secondary schools and community colleges were included.
In this study, the DEs were those admitted from secondary schools and completing their undergraduate study in the normal duration of 4 years, while the TEs were those admitted from local community colleges. However, some DEs had previously completed one or two years of community college or university study; their study durations and the resources they received were the same as for students admitted directly from secondary school. In Hong Kong, a quota (i.e., a certain number of places) is assigned to the government-funded universities to accommodate TEs, who complete their undergraduate studies within 2 years (hereafter referred to as 2yTEs). It is common practice in Hong Kong that these 2yTEs are largely fresh graduates of local community colleges.
Instrument: The HKML-TSQ Questionnaire
The modified Laanan-Transfer Student Questionnaire (ML-TSQ) [29] was adapted and employed in this study with the permission of its author. A range of tests was performed on the original ML-TSQ to establish its content validity, construct validity and reliability, and to examine the relationships between the independent and dependent variables [29]. The internal consistencies of the constructs ranged from 0.74 to 0.94 [30]. For this study, the adapted version of the ML-TSQ (hereafter HKML-TSQ) was reviewed and refined, first by a panel of eight local educational research experts, and then by a panel of nine local and overseas experts. In this version, some items were modified to fit the local context. A content validity index (CVI) of 0.99 was found, which was higher than the acceptable standard of 0.75 [38]. To check appropriateness and readability, 11 local undergraduate students were invited to fill in the questionnaire, after which minor revisions were made to some wording.
The HKML-TSQ consisted of: (a) items eliciting students' socio-demographic information (e.g., year of birth, gender, year of intake); (b) 8 items on their perceptions of the university (renamed perceived disparity: transfer vs non-transfer students); (c) 10 items on processes of adjusting to university life; (d) 22 items on satisfaction with the university (renamed university support), plus one item on overall university experience; (e) 15 items on coping style at the university; and (f) 10 items on social support at the university. A 5-point Likert scale (1 = completely disagree and 5 = completely agree) was used to assess the students' levels of agreement with each item, except for the items on university support, which were rated on a 4-point Likert scale (1 = very dissatisfied, 4 = very satisfied) to avoid any central tendency.
Data Analysis
Exploratory factor analyses (EFA) were conducted for each construct using the eigenvalue > 1 rule [39]. The maximum likelihood extraction method and oblimin rotation were used. Kaiser-Meyer-Olkin (KMO) tests were conducted to measure sampling adequacy, and Cronbach's alpha statistics were used to test the scales' internal consistency.
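Two of these steps, factor retention by the eigenvalue > 1 rule and the internal-consistency check, can be illustrated with a short numerical sketch. The snippet below is our minimal illustration rather than the SPSS workflow used in the study; the DataFrame name `items` is hypothetical and stands for the responses to the items of one construct.

```python
import numpy as np
import pandas as pd

def n_factors_kaiser(items: pd.DataFrame) -> int:
    """Number of factors to retain under the eigenvalue > 1 (Kaiser) rule."""
    eigenvalues = np.linalg.eigvalsh(items.corr().to_numpy())
    return int((eigenvalues > 1.0).sum())

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```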
The tolerance values and the variance inflation factor (VIF) were computed to examine multicollinearity among the independent variables included in the analysis. Confirmatory factor analyses (CFA) were conducted on the original study factors and the new factors that emerged from the EFA. The chi-square test of model fit, the goodness-of-fit index (GFI), the comparative fit index (CFI), the Tucker-Lewis index (TLI), and the root mean square error of approximation (RMSEA) were applied to assess model fit. Independent-samples t-tests were used to investigate the differences between the scores of the DEs and 2yTEs on the scales and on the individual items measuring perceived disparity, process of adjusting to university, university support, coping style at university, and social support received.
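As a hedged sketch of the multicollinearity check, the snippet below computes the VIF and tolerance for a hypothetical predictor DataFrame `X` using statsmodels; VIF values below about 10 (tolerance above 0.1) are conventionally taken as acceptable, though the study does not state its cut-off.

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools import add_constant

def vif_table(X: pd.DataFrame) -> pd.DataFrame:
    """VIF and tolerance for each predictor; a constant column is added for a proper OLS fit."""
    exog = add_constant(X).to_numpy()
    vifs = [variance_inflation_factor(exog, i + 1) for i in range(X.shape[1])]  # skip the constant
    return pd.DataFrame({"predictor": X.columns,
                         "VIF": vifs,
                         "tolerance": [1.0 / v for v in vifs]})
```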
Pearson's correlation test was used to test the correlations between the scales. Variables with statistically significant correlations with student coping and student satisfaction were entered into forward linear regression analyses to identify the strongest predictors of these two outcomes. SPSS version 25 was used for the data analysis, and the CFAs were performed with SPSS AMOS 25 (IBM Corp., Armonk, NY, USA).
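The forward regression step can be sketched with scikit-learn's greedy forward selector, as below; `X` and `y` are hypothetical names for the standardized predictors and the outcome (active coping or satisfaction). Note that this selector ranks candidates by cross-validated score, whereas SPSS's forward method uses entry p-values, so the sketch mirrors rather than reproduces the reported analysis.

```python
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

def forward_select(X, y, n_features=3):
    """Greedily add the predictor that most improves cross-validated R^2."""
    selector = SequentialFeatureSelector(LinearRegression(),
                                         n_features_to_select=n_features,
                                         direction="forward")
    selector.fit(X, y)
    return X.columns[selector.get_support()]
```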
Ethical Approval
Ethical approval was obtained from the Human Subjects Ethics Sub-Committee of the Hong Kong Polytechnic University (HSEARS20180104005-01).
Student Demographics
There were 1819 respondents, comprising 841 (46.2%) 2yTEs and 978 (53.8%) DEs. The students represented all 28 academic departments of the university. The sample consisted of 34% male and 66% female, aged between 19 and 52 years (mean = 21.6, SD = 1.92). Most of the participants were in the third (35.3%) or fourth (28.2%) year of study, with the others distributed across first (17.6%), second (12.5%), and fifth years (6.3%). It is noteworthy that all 2yTEs were admitted to the university as junior-year students, which is comparable to DEs in their third year of study. As shown in Table 1, these data were consistent and comparable with the university-wide data.
Factor Analysis
For perceived disparity: transfer vs non-transfer students, two factors were loaded, accounting for more than 42% of the total variance among both 2yTEs and DEs (Table 2); this differed from the original study [29], which identified one factor. Two factors were loaded for the process of adjusting to university (Table 3), accounting for more than 31% of the total variance; this was similar to the original study with two factors. For university support, three factors were loaded, accounting for more than 47% of the total variance among 2yTEs and DEs (Table 4); these differed from the original study with two factors. Four factors were loaded for coping style, accounting for more than 53% of the total variance (Table 5), similar to the original study with four factors. Finally, two identical factors were loaded for social support at the university, accounting for more than 47% of the total variance among 2yTEs and DEs (Table 6), in contrast to the original study with one factor. The internal consistencies of all 13 factors were calculated using Cronbach's alpha and were found to be acceptable (Table 7).
Differences between Two Groups of Students on Their Experience and Perceptions
As the results of the factor analyses for the two groups of students were similar, the factor structure that emerged from the whole dataset was used. Table 8 compares the experience and perceptions of the two groups of students. The t-test results show that the DEs scored statistically significantly higher on social adjustment; on all the university support factors (i.e., general support and advising, academic experience and advising, institutional attributes, and overall satisfaction with the university experience); on active and social coping styles; and on social connections. The 2yTEs scored statistically significantly higher (p < 0.0001) than the DEs on transition adjustment (with items "drop in GPA" and "increase in stress") and perceived a greater disparity in university resources (where higher scores indicate poorer perceptions).
Comparisons at the item level revealed that the DEs adjusted better to the academic standards (p < 0.05) and social environment (p < 0.01) and received more university support (p < 0.05), while the 2yTEs experienced a heavier study load (p < 0.001), a drop in their academic performance (p < 0.001), felt stigmatized (p < 0.001), received insufficient resources and support (p < 0.001), and had fewer opportunities for overseas exchanges (p < 0.001) (results not shown).
Correlations among Independent and Dependent Variables
In both groups, student coping and student satisfaction were positively and significantly correlated with most of the variables, including social adjustment; general support and advising, academic experience and advising; institutional attributes; emotional (coping style); social connections; and sense of belonging, with correlation coefficients R ranging from 0.266 to 0.586 (p < 0.01) for 2yTEs and 0.227 to 0.531 (p < 0.01) for DEs.
However, 2yTEs' overall university satisfaction correlated mildly negatively and significantly with academic study (R = −0.175, p < 0.01) and transition adjustment (R = −0.129, p < 0.01), whereas these correlations were positive for the DEs. For the DEs, overall satisfaction with the university was also found to be mildly positively correlated with resources and stigma (R = 0.103, p < 0.01) and with the escape coping style (R = 0.079, p < 0.05) (results not shown). Table 9 shows the results of linear regression using the various factors as independent variables to predict the variance in students' active coping as the dependent variable. Social connections and emotional coping style are significant predictors in both groups. Institutional attributes was a significant predictor exclusive to the 2yTEs, whereas for the DEs, sense of belonging, social adjustment, and academic experience and advising were the exclusive significant predictors. Table 10 shows the results of linear regression using the various factors as independent variables to predict the variance in students' satisfaction with the university as the dependent variable.
Discussion
This is the first study to investigate DEs' and TEs' experiences of adjusting to university life and the factors affecting their active coping and satisfaction with university in an eastern educational context. Our study found that the DEs experienced better adjustment processes and social connections, received more university support, were more likely to use active coping strategies, and felt more satisfied with university than the TEs did. On the flip side, the TEs experienced stigmatization, heavier study loads, fewer opportunities for overseas exchanges, and fewer university resources. Both groups of students considered general support and advising, social adjustment, and institutional attributes to be the factors affecting their overall satisfaction with the university. Sense of belonging was the factor affecting the TEs' overall satisfaction, while academic experience and advising, and active coping, were the factors for the DEs. On the other hand, both groups of students considered social connections and emotional coping to be the factors affecting active coping. The factor specifically affecting the TEs' active coping was institutional attributes, whereas those affecting the DEs' included sense of belonging, social adjustment, overall satisfaction with the university, and academic experience and advising. In the following sections, we discuss how the findings of this study validated the instrument used, the differences between the two groups in terms of their academic and social involvement, the factors affecting overall university satisfaction and active coping for the two groups of students, and the implications of the study.
Instrument Validation
The results of the study indicate that the HKML-TSQ is applicable to university students (both DEs and TEs) in eastern countries, with high Cronbach's alphas ranging from 0.728 to 0.903 (except for "transition adjustment", with a Cronbach's alpha of 0.583). For the five constructs used in our study (perceived disparity: transfer vs non-transfer students; adjustment processes; university support; coping style; and social support at the university), the total factor variances ranged from 31.86% to 54.84%. In the CFA, fit indices including the GFI, CFI, TLI, and RMSEA were used to assess model fit, with promising results. The original ML-TSQ was tested on 319 TEs in one university [29]; perhaps due to the smaller sample size, some factors were not identified in the original study. In this study, most of the items from the original ML-TSQ were retained, but some were modified for the local context. Because of these modifications, it is difficult to make direct comparisons of the constructs. Of the factors identified in the US [29] and in our study, (1) the four-item subscale of active coping was identical; and (2) the subscale of social connections in both contexts consisted of eight items. We also found that the subscales identified for the DEs and TEs were comparable. These similar findings might be due to the fact that the students shared similar cultural backgrounds and received education under the same system and policies.
Differences of Academic and Social Experience between the Two Groups of Students
Our study findings support the view that the TEs' experiences of academic and social integration are generally more negative or less desirable than those of the DEs. The finding that the TEs in this study experienced feelings of stigmatization is consistent with previous studies in the US [40]. Community college students in western countries are demographically more diverse (e.g., a large range of ages) and have various reasons for enrolling (e.g., financial constraints) [30], whereas community college students in Hong Kong are mainly fresh graduates of senior secondary education with the primary, and often the only, goal of articulating into university [41]. The finding that the students felt stigmatized for being TEs can be explained contextually by their self-perceptions under Hong Kong's education system. A study of Hong Kong community college students' perceptions of self-worth found that the majority of them considered that the education system serves the functions of "differentiation" and "selection" (p. 257) [42], which contrasts with "integration" for academic success in higher education [10]. In a related study, students who were unable to get into university "straight away" after completing secondary education perceived themselves as "losers" (p. 280) [43]. Such a mindset might be rooted in their beliefs and linger on even after they have articulated to university, resulting in self-stigmatization that can be mentally unhealthy [44].
On the other hand, TEs bear heavier study loads, most likely due to the long-standing ills of credit transfer [4]. For instance, some credits for subjects studied in their community colleges might not be accepted by their universities, possibly leading not only to heavier course loads but also to delays in graduation [5]. This presents a typical mismatch between the idealistic situation (i.e., all credits successfully transferred) and the reality (e.g., credit loss), which has been shown to be a significant predictor of TEs' transition experiences [2]. According to Tinto's model of student attrition [21], this obstacle to the TEs' academic integration can even lead to a higher risk of dropping out [22]. Another finding, that TEs perceived themselves as receiving fewer resources and less support than DEs, admits a three-fold explanation. Without the comprehensive induction or orientation that is sometimes exclusive to DEs [17], TEs are less likely to be aware of university support services such as counselling [12]. At the same time, their lack of awareness of, and thereby access to, these resources and support can also be explained by their hesitation to be proactive in asking for help [15]. This can, in turn, be a consequence of feeling "underprepared" and "unconfident" (p. 4) [45]. Additionally, the perception among university administrators that TEs, with prior experience from community colleges, can navigate the university environment well [4] further affects the allocation of resources to TEs.
Factors Affecting Students' Active Coping and Satisfaction with University
We found that academic experience and advising and active coping were key factors for the DEs but not for the TEs. Given the theoretically validated importance of academic integration and involvement in students' commitment to university studies [12], our study offers an interesting finding that academic experience and advising was not a key factor affecting the TEs' satisfaction with the university. One possible explanation comes from their perceived experience of stigmatization: having anticipated less support, they might not have expected much from the university. In addition, their heavy study loads might have overshadowed their inclination to seek advice. Furthermore, as mentioned, the transfer process entails a strong screening mechanism, such that only the best-performing students from community colleges can articulate to universities [4]. This might further lower the likelihood of their requiring academic advising [46].
Rather, sense of belonging was the key factor specifically affecting the TEs' overall satisfaction. The heavy study load, coupled with the shorter duration of their university courses, might have affected their participation in university activities. Our study results (Table 9) showed that, compared with the DEs, the TEs experienced more difficulties in making friends, participated less in social activities, and had fewer opportunities for overseas exchange or other types of university support. Without such integration into the academic and social systems of the university, based on our study's theoretical framework of I-E-O [18], TEs would receive less support from the "environmental variables". Thus, they would be more likely to have lower levels of commitment to the university [24]. With a poor sense of belonging to the university, as a consequence, TEs have higher dropout rates [7].
On the other hand, both groups of students considered social connections and emotional coping to be the key factors influencing their active coping. Establishing social connections and coping emotionally with a new environment are common adaptive practices for students transitioning into university study [47]. After all, in the context of this study (i.e., Hong Kong), both groups of students shared similar socio-cultural backgrounds and experienced their growth and development under the same education system [48].
The construct of institutional attributes was the key factor specific to the TEs' active coping, while sense of belonging, social adjustment, overall satisfaction with the university, and academic experience and advising were the factors for the DEs. Before articulating to university, TEs had already spent 2 years in community colleges, half the duration of a 4-year undergraduate study. They might already have become immersed in the culture of the community college and would therefore find it difficult to cope with the new institutional environment (i.e., "campus culture shock") [15,30]. As critical institutional attributes of universities, large class sizes and impersonal organizational structures might overwhelm TEs [4,40]. Overall, these findings suggest that TEs might feel underprepared and stigmatized, particularly in non-academic aspects, so that they are more likely to adopt active coping to adjust to university study and campus life. In other words, they have to rely on their own planning and efforts (i.e., active coping) to discover their idiosyncratic paths within a limited period of time at university [49].
Implications
The findings of this study have implications for various stakeholders of community colleges and universities, including the management, administrators, academic advisors, student affairs officers, and student bodies. In terms of academic integration, academic advisors, who are often academic staff, should be aware of essential information relevant to credit transfer, including TEs' course loads, the requirements of course selection, and the process of credit transfer. University counsellors should also be notified about the potentially heavy study loads, among other issues (e.g., mental health) encountered by TEs, so that they can be better prepared to assist students in their academic and social adjustment [11]. Excessive study loads also indirectly take away TEs' opportunities to participate in overseas exchanges. While community colleges and universities should continue working hand-in-hand towards improving the system and policies associated with credit transfer [50], the internationalization-at-home (IaH) experience can be introduced to TEs. IaH refers to exposing students to both formal and informal learning experiences via technology-mediated communication [51]. This can serve as an "alternative to student exchange" that requires less time [52].
On the other hand, TEs' unique experience of feeling stigmatized about their status is noteworthy. The self-stigmatization might lower their expectations about the amount of campus support and resources they would receive. Nonetheless, from the perspective of service quality assurance [53], the management and administration personnel have the obligation to maintain the equity of access to campus support services for students, regardless of their entry paths. In fact, in order to help TEs towards both academic and social integration, orientation, advising and support services should all be well-provided to welcome and acclimatize them, yet previous studies have criticized current efforts as being inadequate [4]. In addition, student affairs officers can conduct campus visits for incoming TEs before the semester starts, to minimize "unpleasant surprises" (p. 13) [50]. These actions taken by universities could mitigate problems associated with the campus culture shock, which in turn could enhance their sense of belonging and thereby satisfaction with the university [15]. Additionally, to enhance communication and collaboration between students and faculty, representatives of transfer students can be elected to staff-student consultative committees as a formal communication channel between students and the university [54]. Strengthening faculty-student interactions via such channels might also improve students' sense of belonging [55]. Furthermore, student-run organizations (e.g., student associations or societies) can also play an important role to facilitate the interaction between DEs and TEs and help them adapt to campus life, through different activities such as orientation and campus experience camps [56].
Limitations
After numerous university-wide attempts to recruit students using multiple methods and incentives, 12% of all students in the university, including 27% of all TEs, participated in this study. The sample size of more than 840 students in each group, drawn from all departments, suggests high generalizability of the results to the study university. However, generalizability to other universities is questionable, given that the university in which this study was conducted contains the largest number of TEs of all the universities in Hong Kong. Future research could involve other universities in Hong Kong to gain a more comprehensive understanding of the two student populations. In addition, the cross-sectional nature of our study means that respondents might have found it difficult to reflect accurately on their past and current experiences; longitudinal studies could be adopted so that changes over time can be considered. Furthermore, the construct "transition adjustment" should be interpreted with caution because of its slightly low internal consistency reliability (Cronbach's alpha = 0.583). Although this alpha was close to the acceptable level of 0.6-0.7 [57], the inter-relatedness of the items in this construct should be examined further. For instance, the item "drop in GPAs" might not apply to DEs straight from secondary school, because they did not have GPAs in their secondary schools with which to compare.
Conclusions
To the best of our knowledge, this is the first study exploring and comparing the experiences and perceptions of DEs and TEs in their adjustment to the university from both academic and social perspectives. The study found that TEs have relatively less desirable experiences in the adjustment processes than do their direct-entry counterparts. Different stakeholders from community colleges, universities, and student bodies should work collaboratively to improve students' transitional experiences before, during and after admission to the university.
|
2020-04-23T09:14:40.640Z
|
2020-04-01T00:00:00.000
|
{
"year": 2020,
"sha1": "65149d08909a62a511bf95fb1e1b9467ee094347",
"oa_license": "CCBY",
"oa_url": "https://europepmc.org/articles/pmc7215749?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "48d92e7caffb4c0b46d9b8d78e3ef11ffb7ff143",
"s2fieldsofstudy": [
"Psychology",
"Education"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
}
|
199382462
|
pes2o/s2orc
|
v3-fos-license
|
A Contactless and Biocompatible Approach for 3D Active Microrobotic Targeted Drug Delivery
As robotic tools are becoming a fundamental part of present-day surgical interventions, microrobotic surgery is steadily approaching clinically-relevant scenarios. In particular, minimally invasive microrobotic targeted drug deliveries are reaching the grasp of the current state-of-the-art technology. However, clinically-relevant issues, such as lack of biocompatibility and dexterity, complicate the clinical application of results obtained in controlled environments. Consequently, in this work we present a proof-of-concept, fully contactless and biocompatible approach for active targeted delivery of a drug-model. In order to achieve full biocompatibility and contactless actuation, magnetic fields are used for motion control, ultrasound is used for imaging, and induction heating is used for active drug-model release. The presented system is validated in a three-dimensional phantom of human vessels, performing ten trials that mimic targeted drug delivery using a drug-coated microrobot. The system is capable of closed-loop motion control with an average velocity and positioning error of 0.3 mm/s and 0.4 mm, respectively. Overall, our findings suggest that the presented approach could augment the current capabilities of microrobotic tools, helping the development of clinically-relevant approaches for active in-vivo targeted drug delivery.
Introduction
Considered a figment of imagination until a few decades ago, microrobotic surgery is steadily approaching clinically-relevant scenarios [1,2]. Over the past years, technological advances in the fields of nanofabrication and microrobotic control have made it possible to miniaturize and control robots as small as a few microns [3,4]. Consequently, microrobotics has been drawing significant attention for minimally invasive surgeries, such as biopsies, cytoreductions and endarterectomies, as well as cardiac and ophthalmic surgeries [5,6].
Among these interventions, microrobotic targeted drug delivery is arguably the closest to the reach of the current technology [7,8]. In point of fact, techniques have been presented to miniaturize microrobots well beyond the constraints of the main human vascular vessels [9]. Moreover, microrobots have been demonstrated to be capable of navigating in three-dimensional (3D) and dynamic environments [10,11]. Therefore, drug-coated microrobots, capable of navigating in unpredictable environments, would be able to perform targeted drug delivery inside the human body. Not only would this microrobotic approach supersede the current one in terms of effectiveness of drug delivery, but it would also improve on it in numerous other respects, such as reduction of side effects and improved regulation of drug intake [8].
However, most microrobotic research is still mainly confined to controlled environments [9][10][11]. In fact, the untethering and biocompatibility requirements pose a main challenge to the development of sensors and actuators for in-vivo microrobotic testing. In particular, a clinically-relevant microrobotic system has to possess biocompatible propulsion mechanisms (1) and imaging systems (2), and it also has to be able to execute tasks beyond those required for navigation (3).
The development of biocompatible microrobotic propulsion mechanisms (1) is challenging due to scale requirements. Customary battery-powered actuators cannot be used, as the current technology does not allow sufficient miniaturization of on-board power sources. Consequently, researchers have investigated the use of chemical, thermal, acoustic, and electromagnetic microactuators capable of wirelessly harnessing power stored within the material of the microrobot or its surroundings [12][13][14]. Nonetheless, due to power attenuation and lack of biocompatibility, a considerable number of these actuators are either incompatible with clinical use or compatible with only a limited number of clinical procedures [9].
Imaging techniques also present a significant hurdle with regard to clinical compatibility (2). Historically, optical images have been used to sense the state of microrobots. Yet optical cameras are not suitable for minimally invasive clinical interventions, as they would require incisions for their insertion. Biocompatible alternatives, however, often suffer from major drawbacks, such as limited bandwidth, low resolution, artifacts or long-term toxicity. Therefore, robust tracking procedures have to be developed for pose reconstruction.
Finally, to perform minimally invasive surgery, microrobots generally have to be capable of executing actions beyond those required for navigation (3). Case in point, active drug-delivery microrobots need to perform chemical release of substances. Clearly, the actuators for such actions also have to be untethered and biocompatible, as they must not unintentionally alter physiological parameters, such as pH, temperature, or intravascular pressure.
In this study, we present a fully biocompatible approach for microrobotic targeted drug delivery that addresses these challenges. This approach exploits the physical properties of the used microrobot to develop a contactless actuation and sensing system that affects only the targeted tissue. In particular, medical ultrasound imaging is used for position sensing, while quasi-static and high-frequency magnetic fields are used for navigation and active drug release, respectively. The approach is then validated using an anatomically accurate phantom of the human vascular network. Overall, this proof-of-concept work presents the first closed-loop active targeted drug-model delivery using a thermoresponsive coating and a fully biocompatible microrobotic system. Moreover, our results show reductions of several orders of magnitude in the completion time of the task with respect to previous literature using similar technology [15].
Materials and Methods
This section presents the custom setup and the techniques used to perform clinically-relevant targeted delivery of a drug-model. From a hardware perspective, an electromagnetic testbed, a generator of high-frequency magnetic fields, a clinical ultrasound machine, and an anatomically accurate phantom are used (Figure 1) [16]. Moreover, we develop software algorithms for the control of the electromagnetic testbed, as well as for control and tracking of the microrobots.
Electromagnetic Setup
The electromagnetic setup used for motion control of the microrobots consists of nine metal-core electromagnets and a camera attached to a microscope (Figure 2). While the setup was originally presented in our previous work [10], several modifications have been made for this study. Most notably, a coil (Ultraflex Power Technologies, New York, USA) for the generation of high-frequency magnetic fields has been added to the setup (Figure 1). Additionally, a linear stage (MISUMI, Tokyo, Japan) is now used to move the ultrasound probe (SIEMENS AG, Erlangen, Germany) in directions orthogonal to the imaging plane (Figure 2). The addition of this linear stage makes it possible to use a traditional 2D medical ultrasound for 3D tracking of the microrobots. Conversely, the additional coil is used to generate the sinusoidal electromagnetic fields that trigger the release of the drug-model.
Ultrasound Tracking
High-frequency acoustic waves are used for biocompatible tracking. For this purpose, an 18 MHz transducer is connected to a 2D medical ultrasound machine (SIEMENS AG, Erlangen, Germany). As the used transducer only allows two-dimensional imaging, additional procedures are required to obtain the 3D position of the microrobot. Specifically, the out-of-plane coordinate has to be recovered by moving the ultrasound probe orthogonally to the image plane.
An intuitive approach would be to use this motion to follow the microrobot in the workspace, maintaining it always in the image plane. However, due to the presence of noise and artifacts, changes in the imaged size of the robot do not reflect the actual footprint of the scanned section. Consequently, it is challenging to compute the gradient of the position of the microrobot, which is required to follow its movements with the ultrasound probe.
Alternatively, we use a sonar-inspired approach, in which the ultrasound transducer is swept along the height of the workspace while a Region Of Interest (ROI) around the estimated position of the microrobot is scanned for 2D detection (Figures 3 and 4). The joint variable of the stage at the point of detection can then be used to triangulate the position of the microrobot. While this approach reduces the bandwidth of the tracking algorithm, it provides a significant gain in robustness with respect to a gradient-based approach. Overall, this approach grants extremely robust tracking at a frequency of 2 Hz.

(Figure 4 caption) The user provides the position reference. This is preprocessed by a fourth-order filter that removes frequency components above the bandwidth of the controller and ensures continuous derivatives; this prevents the filtered reference from changing with dynamics faster than the maximum dynamics of the controller. The filtered reference is then provided to a Proportional, Integral and Derivative (PID) controller, designed to minimize disturbances with frequencies higher than a decade below that of the tracker [17]. A feedforward component is added to improve the control performance. As the low-level controllers feed the currents determined by the force-to-current map, the microrobot moves. This motion is detected by the ultrasound tracking algorithm (combined with the approach of Figure 3). The tracking procedure begins with a Gaussian mixture-based segmentation algorithm for background subtraction [18]. A Region Of Interest (ROI) around the estimated position of the microrobot is then selected and binarized using a variable threshold, after which a dilation morphological filter is applied to the image. Finally, the center of the largest blob is selected as the position of the microrobot. The computed position is then provided to a sampled-data observer, which provides the controller with intersample state estimations [19].
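The 2D detection step described above maps naturally onto standard OpenCV primitives. The sketch below is a minimal reconstruction, not the authors' code: the frame and position-estimate names are hypothetical, the Otsu threshold stands in for the unspecified "variable threshold", and the position estimate is assumed to lie inside the frame.

```python
import cv2
import numpy as np

# Gaussian mixture-based background subtractor, created once and reused across frames.
bg_subtractor = cv2.createBackgroundSubtractorMOG2()

def detect_microrobot(frame: np.ndarray, est_x: int, est_y: int, half: int = 40):
    """Return the (x, y) centroid of the largest foreground blob in the ROI, or None."""
    fg_mask = bg_subtractor.apply(frame)                 # background subtraction
    x0, y0 = max(est_x - half, 0), max(est_y - half, 0)  # clip the ROI to the image
    roi = fg_mask[y0:y0 + 2 * half, x0:x0 + 2 * half]
    # Binarization (Otsu as a stand-in for the variable threshold) and dilation.
    _, binary = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    binary = cv2.dilate(binary, np.ones((3, 3), np.uint8), iterations=1)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))  # largest blob
    if m["m00"] == 0:
        return None
    return x0 + m["m10"] / m["m00"], y0 + m["m01"] / m["m00"]  # full-frame coordinates
```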
Microrobot Selection
The microrobots used in the study are selected to satisfy a set of conditions. First, the microrobots must have a continuous and smooth surface to guarantee uniform drug release, as well as to minimize contact pressures on the vessels in case of collisions. Second, the microrobots' size must be small enough to grant access to the major body vessels, while remaining sufficiently larger than the ultrasound resolution (100 µm) to minimize artifacts. Third, the robot has to be able to withstand the temperatures required for drug release without being altered. Finally, we want the magnetic dipole moment of the microrobot to be as strong as possible. This makes it possible to control the robot with weak magnetic fields and gradients, such as the ones generated by distant electromagnets, effectively enlarging the accessible workspace. Addressing all these requirements, we select Neodymium Iron Boron (NdFeB) microspheres with a diameter of 800 µm for our study. In particular, we use N45-grade NdFeB, which has a Curie temperature of 80 °C and offers a remanent magnetization of 1.35 T. Moreover, NdFeB has a conductivity of 6.7×10^6 S/m, which renders it particularly sensitive to induction heating.
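To see why NdFeB permits control with the weak fields of distant electromagnets, the dipole moment of a uniformly magnetized sphere can be estimated as m ≈ B_r V / μ0. The back-of-the-envelope sketch below is our illustration under that standard approximation; the resulting value is not reported in the paper.

```python
import math

MU_0 = 4 * math.pi * 1e-7   # vacuum permeability (T·m/A)
B_R = 1.35                  # remanent magnetization of N45 NdFeB (T)
RADIUS = 400e-6             # 800 µm diameter sphere (m)

volume = (4.0 / 3.0) * math.pi * RADIUS**3            # ~2.7e-10 m^3
dipole_moment = B_R * volume / MU_0                   # m = B_r * V / mu_0
print(f"dipole moment ≈ {dipole_moment:.2e} A·m^2")   # ≈ 2.9e-4 A·m^2
```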
Induction Heating
In point of fact, the microrobots are heated using high-frequency magnetic fields, exploiting the phenomenon commonly known as induction. The heat generated in the microrobot is a result of both hysteresis losses and eddy currents [20]. However, due to the low permeability of the pre-magnetized NdFeB microrobots at the field strengths reported in this study, the heat generated by hysteresis losses is minimal with respect to eddy-current losses. Consequently, a custom coil is designed to maximize these effects (Ultraflex Power Technologies, New York, USA).
The resulting RLC circuit is capable of locking at two frequencies (126 kHz and 228 kHz), producing a field of 18 mT in amplitude. Depending on the magnetic energy stored in the electromagnetic field induced in the material, a power of up to 1.7 kW is required to generate such high-frequency fields. Part of this power is dissipated in the microrobot, while most of the remainder is dissipated in the coil. To prevent overheating, the coil is designed to be hollow, which allows cooling water to be run through it (6.8 L/min at 3.4 bar). Finally, it is worth noting that the human body is not affected by these magnetic fields, as it does not contain sufficient amounts of conductive or hard magnetic materials [16].
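The dominance of eddy-current heating can be cross-checked with the classical skin depth δ = sqrt(2 / (ω μ σ)). The sketch below assumes μ ≈ μ0, consistent with the low permeability stated above; the resulting depths, comparable to the 0.4 mm sphere radius, suggest the induced currents penetrate most of the microrobot. These numbers are our estimates, not values from the paper.

```python
import math

MU_0 = 4 * math.pi * 1e-7   # vacuum permeability (T·m/A)
SIGMA = 6.7e6               # conductivity of NdFeB (S/m), as stated above

def skin_depth(frequency_hz: float) -> float:
    """Skin depth of a conductor, assuming permeability ~ mu_0."""
    omega = 2 * math.pi * frequency_hz
    return math.sqrt(2.0 / (omega * MU_0 * SIGMA))

for f in (126e3, 228e3):
    print(f"{f/1e3:.0f} kHz: skin depth ≈ {skin_depth(f)*1e3:.2f} mm")
# ≈ 0.55 mm at 126 kHz and ≈ 0.41 mm at 228 kHz, vs. a 0.4 mm sphere radius
```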
Motion Modeling and Control
After developing a testbed for drug release, we look at closed-loop motion control. For this purpose, we model the microrobots according to the following state space model:

M p̈_i = F_em,i − (1/2) ρ C_D A ṗ_i |ṗ_i| + d_i,  i ∈ {1, 2, 3},    (1)

where ρ ∈ R is the density of the medium, and C_D ∈ R, A ∈ R and M ∈ R are the drag coefficient, cross-sectional area, and mass of the sphere, respectively (Table 1) [10]. In turn, d ∈ R³ collects all the modeled components that are not influenced by the state of the system or by the control inputs. Moreover, p_i ∈ R, F_em,i ∈ R, and d_i ∈ R are the i-th components of the position (p ∈ R³), electromagnetic force (F_em ∈ R³), and d, respectively. These are defined as follows:

F_em = ∇(B · m),    (2)
d = ΔF_d + (M − ρV) g,    (3)

where B ∈ R³ is the magnetic flux density, m ∈ R³ is the magnetic dipole moment of the microrobot, and ∇ is the gradient operator. Further, ΔF_d ∈ R³ represents the inaccuracies in the modeling of the drag forces, g ∈ R³ is the acceleration due to gravity, and V ∈ R is the volume of the microrobot. Based on this model, a closed-loop controller is designed to regulate the motion of the microrobots, using the quasi-static electromagnetic fields generated by the testbed (Figure 4). The reference of the controller, provided by the user, is filtered to guarantee continuous derivatives and to eliminate components outside of the control bandwidth. The filtered reference is then processed by feed-forward and feed-back controllers. To avoid instabilities and undetermined behavior, the Proportional Integral Derivative (PID) feed-back controller is designed to minimize disturbances with frequencies higher than a decade below that of the tracker [17]. A feed-forward component is added to improve the control performance with an additional control action (u_FF). Finally, the controller outputs forces that are mapped into currents at the electromagnets using a force-current map. As the setup is overactuated, we select a map that minimizes the Frobenius norm of the third-order tensor collecting the Hessian matrices of each component of the field [10,21]. This choice minimizes the spatial variation of the electromagnetic gradient, and consequently, of the electromagnetic force (2). Overall, this map minimizes the sensitivity of the system to tracking errors.
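To make the interplay between model (1) and the PID feed-back loop concrete, the following single-axis simulation is a minimal sketch; the gains, time step, drag coefficient and NdFeB density are illustrative assumptions (the paper's tuned values are not reported), and the feed-forward action and the disturbance d are omitted for brevity.

```python
import math

# Illustrative parameters for an 800 µm sphere in siloxane (assumed values).
RHO = 970.0                                 # medium density (kg/m^3)
RADIUS = 400e-6                             # sphere radius (m)
A = math.pi * RADIUS**2                     # cross-sectional area (m^2)
M = 7500.0 * (4 / 3) * math.pi * RADIUS**3  # mass; NdFeB density ~7500 kg/m^3
C_D = 10.0                                  # illustrative drag coefficient
DT = 1e-3                                   # integration time step (s)

KP, KI, KD = 5e-4, 1e-4, 5e-5               # illustrative PID gains

def simulate(reference: float, steps: int = 5000) -> float:
    """Integrate M*p'' = F_em - 0.5*rho*C_D*A*p'*|p'| with a PID force command."""
    p, v, integral = 0.0, 0.0, 0.0
    for _ in range(steps):
        error = reference - p
        integral += error * DT
        f_em = KP * error + KI * integral - KD * v   # PID (derivative on measurement)
        drag = 0.5 * RHO * C_D * A * v * abs(v)      # quadratic drag from model (1)
        v += DT * (f_em - drag) / M
        p += DT * v
    return p

print(simulate(5e-3))   # should approach the 5 mm reference
```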
Vascular Phantom and Drug-Model
This closed-loop motion control is performed inside an anatomically accurate model of the human vessels [22]. The phantom is fabricated using polydimethylsiloxane (PDMS), due to its tissue-mimicking and optical properties, which allow us to compare the results of ultrasound tracking with those of offline optical tracking [23]. The 20 mm × 20 mm × 20 mm phantom represents a human vessel forking into three different channels. In order to ensure anatomical accuracy, we construct the phantom so that the sum of the cross-sectional areas of the resulting vessels (10 mm² each) is equal to that of the original vessel (30 mm²). Finally, the phantom is filled with liquid polymerized siloxane to ensure acoustic transparency to ultrasound waves. It is worth noting that the used polymerized siloxane has a kinematic viscosity of 50 mm²/s, about ten times that of blood [24]. This means that the experiments are conducted in a medium with a lower Reynolds number than blood. Therefore, interventions in blood would have a lower C_D (1) and lower drag forces than those reported in this study.
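A quick check of the Reynolds-number argument, assuming a typical blood kinematic viscosity of about 3.5 mm²/s (a literature value, not stated in the paper) and the average experimental velocity of 0.3 mm/s reported in the Experimental Evaluation section:

```python
d = 800e-6            # microrobot diameter (m)
v = 0.3e-3            # average experimental velocity (m/s)

nu_siloxane = 50e-6   # kinematic viscosity of the siloxane (m^2/s), as stated
nu_blood = 3.5e-6     # assumed typical kinematic viscosity of blood (m^2/s)

re_siloxane = v * d / nu_siloxane   # ~4.8e-3
re_blood = v * d / nu_blood         # ~6.9e-2: still creeping flow, but ~14x higher
print(f"Re (siloxane) = {re_siloxane:.1e}, Re (blood) = {re_blood:.1e}")
```

The higher Reynolds number in blood corresponds, in the low-Re drag regime, to the lower drag coefficient anticipated above.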
To further enhance the clinical relevance of the study, the microrobots are coated with a lipid-based thermoresponsive layer. This layer is embedded with a Sudan red dye that allows the behavior of the coating to be identified. Moreover, the coating is designed to melt at 39 °C, to minimize the amount of heat necessary for drug release in the human body.
Experimental Evaluation
In order to validate the developed setup and techniques, we demonstrate contactless delivery of a drug-model. The intention is to mimic a targeted drug delivery application, in which a microrobot is released by a catheter, steered towards the region of interest where it delivers a drug, and finally returned to the catheter for recollection. Consequently, in the presented experiments, an 800 µm sphere, starting at the end of a channel in the vascular phantom, is steered towards the target area (the end of another channel). As the target is reached, the heating system is activated, triggering the release of the drug-model. After the delivery, the microrobot returns to the starting point, where the experiment terminates. In order to guarantee clinical compatibility, the microrobot is tracked using exclusively ultrasound imaging. However, for reasons of comparison and data analysis, the procedure is also recorded with a 2D color camera (FLIR Systems, Wilsonville, OR, USA) attached to a microscope (Qioptiq, St Asaph, United Kingdom). Please refer to the accompanying video in the Supplementary Materials for a visualization of the experiment.
The experiment is repeated ten times (Figure 5). The microrobots complete the trials in 212 s on average, moving with an average velocity of 0.3 mm/s (Figure 6). Moreover, about 20 s are required for the heating process (Figure 1), which is activated once the microrobot is within 10 mm of the designated target. It is worth mentioning that, as the quasi-static and high-frequency magnetic fields can be superimposed without interference, the heating process does not affect the overall completion time. Moreover, this linear superposition presents other significant advantages; case in point, uninterrupted closed-loop control would be fundamental in the presence of blood flow or dynamic environments.
It can also be noticed that the root mean square values of the positioning and tracking errors are 40% and 52% higher, respectively, for the component normal to the imaging plane (z). This increased error is mainly caused by the diffraction of the ultrasonic wave around the edges of the microrobot. In point of fact, as the diameter of the microrobot is less than ten times the wavelength of the ultrasound, the artifacts resulting from diffraction are comparable in size to the footprint of the microrobot. This phenomenon renders it challenging to determine the exact center of the sphere, as the pixel-count gradient is minimal in the neighborhood of that center (Figure 3). These results, even if somewhat constrained to the tested setup and involving an additional dimension, are comparable to previous two-dimensional studies on the motion of microrobots with ultrasound feedback [25].

(Figure 6 caption, right panel) Histogram of the positioning and tracking errors over the ten performed trials. The positioning error (blue) is defined as the divergence between the filtered reference and the state as tracked by the ultrasound tracking system, i.e., the error as computed in the control loop. Conversely, the tracking error (yellow) is defined as the difference in tracked position between the ultrasound and the optical tracker (computed offline). Consequently, we can only compute the tracking error for the x and z components. However, for reasons of symmetry, we expect the y component, which cannot be analyzed optically, to have a tracking error similar to that of the x component.
Conclusions
This proof-of-concept work presents a biocompatible system for targeted delivery of a drug-model. In order to render the approach fully biocompatible, magnetic fields are used for motion control, a medical ultrasound system is used for imaging, and induction heating is used for active drug release. The presented system is validated in a 3D phantom of the human vascular network. In this validation, we simulate a scenario in which the microrobot has been released in a vessel. We navigate the microrobot toward the targeted area, where we trigger the active drug-model release. Finally, we return the robot to the drop-off point after the release is complete. Compared to previous approaches using induction heating, which required up to an hour for drug-model release, this approach has an average completion time of 212 s for a release within half a millimeter of the targeted point. In spite of this, a comparison with previous literature using optical feedback shows that the motion control performance is clearly constrained by the limited bandwidth of the ultrasound feedback [10,26,27]. Ultrasound scanners with higher refresh rates, allowing higher scanning frequencies, could be used to address this issue. Overall, the promising results of this approach, as well as its fast and fully biocompatible nature, render it interesting for further investigation, especially in ex-vivo and in-vivo environments.
Future work will address the limitations of this work to develop a path toward clinical application. For this purpose, we will investigate hardware and smart-material solutions to improve the ultrasound scanning frequency, thereby increasing the workspace and the control bandwidth. This will make it possible to extend the workspace of the quasi-static and high-frequency electromagnetic systems, permitting interventions in larger parts of the body. Navigation in channels of other sizes and in the presence of flow will also be investigated. Such improvements in hardware and methodology could enable an extensive quantitative analysis with an increased number of trials, thereby providing a statistical means to evaluate a myriad of clinically relevant constructs. Finally, we will analyze the use of 3D ultrasound transducers for targeted drug delivery using both individual microrobots and swarms. Computer vision and fusion algorithms that address obstructions, such as bones and inhomogeneous tissues, will also be investigated.
|
2019-08-03T13:03:22.977Z
|
2019-07-31T00:00:00.000
|
{
"year": 2019,
"sha1": "8100bca322a4e7740291ece4c2d6c1e8714ec8d4",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/mi10080504",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "492256ae7fa5995936adc42e74ff6925ec325a3a",
"s2fieldsofstudy": [
"Biology",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
}
|